Analytical Description of the Dynamics of Planar Pulses Propagating in the Mode of Tunnel Ionization
V. A. Khalyapin & A. N. Bugay
Bulletin of the Russian Academy of Sciences: Physics, vol. 86, pp. 1355–1360 (2022)

The method of moments is used to investigate the propagation of a planar pulse in the mode of tunnel ionization. A system of equations is obtained for the parameters of the signal, and the conditions for its quasi-stable propagation are found according to Lyapunov.

INTRODUCTION

Studying the dynamics of intense pulses propagating in the ionization mode is of interest for both fundamental research and applications, since such pulses are used in a variety of fields, including remote sensing of the atmosphere [1], lightning control [2], supercontinuum generation [3], and terahertz radiation generation [4]. It is known that the solution of the nonlinear Schrödinger equation (NLSE) is stable only in the one-dimensional case \(D = 1\), which corresponds to purely spatial or purely temporal signals. The solutions are unstable at dimensionalities \(D = 2\) (beams or planar spatiotemporal pulses) and \(D = 3\) (optical bullets) [5]. Several mechanisms have been proposed for stabilizing signals at \(D > 1\): saturating nonlinearity [6], competing nonlinearities [7], higher-order diffraction or dispersion [8], a gradient waveguide [9], and second-harmonic generation [10]. It was shown in [11–18] that ionization can also stabilize a signal, owing to the balance between self-focusing, diffraction, and plasma-induced divergence. Ionization is known to shift the pulse spectrum toward higher frequencies [19–21]: the generated free electrons make a negative contribution to the refractive index, which produces a blue shift of the signal spectrum. This phenomenon is opposite to the familiar stimulated Raman self-scattering (SRS) [22–26].
The ionization of a dielectric by an intense light pulse can be described using the Keldysh formula [27]. Depending on the Keldysh parameter, there are two limiting modes: multiphoton and tunneling ionization. Pulses propagating in the tunneling-ionization mode are of particular interest, since in most media the region of anomalous group velocity dispersion lies in the infrared transmission band [28]. The tunneling limit was studied in [28] mostly through experiments and numerical modeling, owing to the mathematical difficulty of describing the contribution of tunneling ionization to the signal dynamics. In [19], the Keldysh formula for the tunneling mode was approximated by a linear function, while Talepbour et al. [29] proposed a power-law dependence of the ionization rate on intensity. A drawback of these approaches is that the parameters of the approximate formulas for the ionization rate must be re-fitted for each intensity range considered. In this work, we propose an approach to approximating the contribution of tunneling ionization that is free of these shortcomings.

METHOD OF MOMENTS

We consider the dynamics of planar signals propagating in the tunneling-ionization mode using the method of moments [25, 30, 31].
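For orientation, the boundary between the two regimes can be checked numerically. The sketch below evaluates the Keldysh parameter \(\gamma_K = \omega\sqrt{2mU_i}/(eE)\) for illustrative values close to those used later in the paper (sapphire, \(\lambda = 5\) µm, \(I \approx 2.8\times10^{13}\) W/cm²); using the free-electron mass in place of the effective mass, and the simple plane-wave relation between intensity and peak field, are assumptions made for this estimate only.

```python
import math

# Physical constants (SI)
c, eps0 = 2.998e8, 8.854e-12
e, m_e = 1.602e-19, 9.109e-31

def keldysh_parameter(wavelength, intensity, U_i, n0):
    """Keldysh parameter gamma = omega * sqrt(2*m*U_i) / (e*E).
    gamma << 1 -> tunneling ionization; gamma >> 1 -> multiphoton."""
    omega = 2 * math.pi * c / wavelength                   # carrier frequency, rad/s
    E_field = math.sqrt(2 * intensity / (c * eps0 * n0))   # peak field, V/m
    return omega * math.sqrt(2 * m_e * U_i) / (e * E_field)

# Illustrative values: U_i = 7.3 eV, lambda = 5 um, I = 2.8e13 W/cm^2 = 2.8e17 W/m^2
gamma = keldysh_parameter(5e-6, 2.8e17, 7.3 * 1.602e-19, 1.62)
print(f"Keldysh parameter: {gamma:.2f}")  # well below 1 -> tunneling mode
```

With these numbers the parameter comes out well below unity, consistent with treating the regime as tunneling rather than multiphoton.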
The equations describing the dynamics have the form
$$\frac{\partial\psi}{\partial z} + \frac{i\beta_{2}}{2}\frac{\partial^{2}\psi}{\partial\tau^{2}} - \frac{\beta_{3}}{6}\frac{\partial^{3}\psi}{\partial\tau^{3}} - i\gamma\psi\left|\psi\right|^{2} + \frac{\gamma}{\omega}\frac{\partial}{\partial\tau}\left(\psi\left|\psi\right|^{2}\right) + i\gamma T_{R}\psi\frac{\partial\left|\psi\right|^{2}}{\partial\tau} + i\eta\psi\int\limits_{-\infty}^{\tau}W\,d\tau' - \frac{i\mu}{2}\frac{\partial^{2}\psi}{\partial x^{2}} = 0.\quad(1)$$
Here, \(\omega\) is the central frequency of the signal; \(k\) is the wavenumber at the central frequency; \(z\) is the propagation coordinate; \(x\) is a transverse coordinate; \(\tau = t - z/\vartheta_{g}\) is the retarded time in the co-moving frame; \(\vartheta_{g}\) is the group velocity at frequency \(\omega\); \(\mu = 1/k\); \(\eta = s\omega\tau_{c}N_{0}/2\) is the parameter related to the electron plasma; \(s = k_{0}\omega\tau_{c}/\left[n_{0}N_{c}\left(1+\omega^{2}\tau_{c}^{2}\right)\right]\) is the avalanche-ionization cross section; \(N_{c} = \varepsilon_{0}m\omega^{2}/e^{2}\) is the critical plasma density above which the plasma is opaque; \(\varepsilon_{0}\) is the permittivity of vacuum; \(\tau_{c}\) is the electron collision time; \(e\) and \(m\) are the elementary charge and the electron mass, respectively; \(N_{0}\) is the concentration of non-ionized molecules; \(W\) is the ionization rate, determined from the Keldysh formula in the limit of small Keldysh parameters (the tunneling-ionization mode); \(\beta_{2}\) is the group velocity dispersion (GVD) coefficient; \(\beta_{3}\) is a positive parameter describing third-order dispersion; \(\gamma\) is the cubic-nonlinearity coefficient; and \(T_{R}\) characterizes the contribution of SRS. The coefficient \(\beta_{2}\) is positive if the central frequency of the pulse lies in the region of normal group velocity dispersion and negative otherwise. The dependence of the ionization rate \(W\) on the intensity for a dielectric in the tunneling limit follows from the Keldysh formula [27]:
$$W = W_{0}\left(\frac{\left|\psi\right|}{B_{T}}\right)^{5/2}\exp\left(-\frac{B_{T}}{\left|\psi\right|}\right).\quad(2)$$
Here, \(W_{0} = \sqrt{\pi}\,U_{i}^{5/2}m^{3/2}/\left(9\sqrt{8}\,\hbar^{4}N_{0}\right)\), \(B_{T} = \pi m^{1/2}U_{i}^{3/2}/\left(2e\hbar\right)\), \(\hbar\) is the reduced Planck constant, and \(U_{i}\) is the ionization potential. The dynamics of the pulse parameters was analyzed by the method of moments.
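It is the steepness of this rate's dependence on amplitude that makes the tunneling term hard to treat analytically. A minimal dimensionless sketch (with \(W_{0}\) and \(B_{T}\) set to 1, an assumption made purely for illustration) shows that the exponential factor dominates the power-law prefactor:

```python
import math

def tunneling_rate(amp, W0=1.0, B_T=1.0):
    """Keldysh tunneling-ionization rate (dimensionless form):
    W = W0 * (|psi|/B_T)**(5/2) * exp(-B_T/|psi|)."""
    return W0 * (amp / B_T) ** 2.5 * math.exp(-B_T / amp)

# Halving the amplitude suppresses the rate far more strongly
# than the (|psi|)^{5/2} prefactor alone would suggest.
for amp in (0.3, 0.2, 0.1):
    print(amp, tunneling_rate(amp))
```

This is why fixed power-law or linear fits of the rate hold only within a narrow intensity window, as noted above.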
We chose a trial solution in the form
$$\psi = B\exp\left[-\frac{1}{2}\left(\frac{\tau}{\tau_{p}}\right)^{2} - \frac{1}{2}\left(\frac{x}{a}\right)^{2} + i\left(\phi - \frac{C\tau^{2}}{2\tau_{p}^{2}} - \frac{\varepsilon x^{2}}{2a^{2}}\right)\right],\quad(3)$$
where \(B\) is the signal amplitude, \(\tau_{p}\) is the signal duration, \(C\) is the frequency-modulation (chirp) parameter, \(\phi\) is the phase, \(a\) is a parameter proportional to the width of the planar signal, and \(\varepsilon\) describes the curvature of the wave surfaces. All parameters depend on the coordinate \(z\). The pulse energy and the moments of the field are defined as
$$E = \int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\left|\psi\right|^{2}d\tau\,dx,\quad(4)$$
$$\tau_{p}^{2} = \frac{2}{E}\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\tau^{2}\left|\psi\right|^{2}d\tau\,dx,\quad(5)$$
$$a^{2} = \frac{2}{E}\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}x^{2}\left|\psi\right|^{2}d\tau\,dx,\quad(6)$$
$$C = \frac{i}{E}\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\tau\left(\psi^{*}\frac{\partial\psi}{\partial\tau} - \psi\frac{\partial\psi^{*}}{\partial\tau}\right)d\tau\,dx,\quad(7)$$
$$\varepsilon = \frac{i}{E}\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}x\left(\psi^{*}\frac{\partial\psi}{\partial x} - \psi\frac{\partial\psi^{*}}{\partial x}\right)d\tau\,dx.\quad(8)$$
Using the method of moments, we obtain the system of equations
$$E = B^{2}\pi\tau_{p}a = \mathrm{const},\quad(9)$$
$$\frac{d\nu}{dz} = -\frac{C}{L_{d}\nu},\quad(10)$$
$$\frac{d\rho}{dz} = -\frac{\varepsilon}{L_{D}\rho},\quad(11)$$
$$\frac{dC}{dz} = -\frac{1+C^{2}}{L_{d}\nu^{2}} + \frac{1}{L_{N}\nu\rho},\quad(12)$$
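As a sanity check, the moment integrals can be evaluated numerically for the Gaussian trial function. In the sketch below the values of \(B\), \(\tau_p\), and \(a\) are arbitrary illustrative choices; the energy should reduce to \(E = B^{2}\pi\tau_{p}a\), and the second moments should return \(\tau_p\) and \(a\):

```python
import numpy as np

B, tau_p, a = 2.0, 1.5, 0.7            # illustrative trial-function parameters
tau = np.linspace(-12.0, 12.0, 2001)
x = np.linspace(-8.0, 8.0, 2001)
dtau, dx = tau[1] - tau[0], x[1] - x[0]
T, X = np.meshgrid(tau, x, indexing="ij")

# |psi|^2 for the Gaussian trial function; the phase terms drop out
I = B**2 * np.exp(-(T / tau_p) ** 2 - (X / a) ** 2)

def integrate2d(f):
    # simple Riemann sum; the Gaussian tails are negligible on this grid
    return f.sum() * dtau * dx

E = integrate2d(I)                          # pulse energy
tau_p2 = 2.0 / E * integrate2d(T**2 * I)    # second temporal moment
a2 = 2.0 / E * integrate2d(X**2 * I)        # second transverse moment

print(E, B**2 * np.pi * tau_p * a)   # energy matches the analytic value
print(np.sqrt(tau_p2), np.sqrt(a2))  # moments recover tau_p and a
```

The same factor-of-two convention in the second moments is what makes the recovered widths coincide with the trial-function parameters.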
$$\frac{d\varepsilon}{dz} = -\frac{1+\varepsilon^{2}}{L_{D}\rho^{2}} + \frac{1}{L_{N}\nu\rho} - \frac{\exp\left(-\sqrt{I_{T}\nu\rho/I_{0}}\right)}{L_{\eta}\nu^{3/4}\rho^{7/4}}.\quad(13)$$
In system (10)–(13), we introduced the dimensionless parameters \(\nu = \tau_{p}/\tau_{0}\) and \(\rho = a/a_{0}\), where \(\tau_{0}\) and \(a_{0}\) are the initial values of the corresponding parameters. The characteristic lengths of dispersion, diffraction, nonlinearity, and ionization are \(L_{d} = \tau_{0}^{2}/\left|\beta_{2}\right|\), \(L_{D} = a_{0}^{2}/\mu\), \(L_{N} = cn_{0}/\left(4\pi\gamma I_{0}\right)\), and \(L_{\eta} = I_{T}^{7/4}/\left(2\sqrt{\pi}\,\eta W_{0}\tau_{0}I_{0}^{7/4}\right)\), where \(I_{0,T} = cn_{0}B_{0,T}^{2}/8\pi\). In Eq. (13), we used the estimate \(I_{T}/I \sim \left(U_{i}/\hbar\omega\right)^{2} \gg 1\) and the key approximation
$$\exp\left(-\frac{B_{T}}{\left|\psi\right|}\right) \approx \exp\left(-\frac{B_{T}}{B} - \frac{B_{T}}{2B}\left(\frac{x^{2}}{a^{2}} + \frac{\tau^{2}}{\tau_{p}^{2}}\right)\right)$$
for the ionization term, which allowed us to take the contribution of tunneling ionization to the signal dynamics into account.

STATIONARY SOLUTION AND ITS STABILITY

To determine the parameters of the stationary state and the conditions for its stability, we rewrite Eqs. (10)–(13) in the form
$$\frac{d\nu}{d\xi} = \frac{P_{\nu}}{m_{\nu}},\quad(14)$$
$$\frac{dP_{\nu}}{d\xi} = -\frac{\partial U}{\partial\nu},\quad(15)$$
$$\frac{d\rho}{d\xi} = \frac{P_{\rho}}{m_{\rho}},\quad(16)$$
$$\frac{dP_{\rho}}{d\xi} = -\frac{\partial U}{\partial\rho} + \tilde{F}.\quad(17)$$
Here, \(P_{\nu} = m_{\nu}\,d\nu/d\xi = -C/\nu\), \(P_{\rho} = m_{\rho}\,d\rho/d\xi = -\varepsilon/\rho\), \(m_{\nu} = 1\), \(m_{\rho} = L_{D}/L_{d}\), and \(\xi = z/L_{d}\). System (14)–(17) admits a mechanical analogy: a particle moving over a surface with coordinates \(\nu\) and \(\rho\) in the potential field
$$U = \frac{1}{2\nu^{2}} + \frac{L_{d}}{2L_{D}\rho^{2}} - \frac{L_{d}}{L_{N}\nu\rho},\quad(18)$$
with a mass that depends on the direction of motion. The role of an external force acting along the coordinate \(\rho\) is played by the ionization term \(\tilde{F} = L_{d}\exp\left(-\sqrt{I_{T}\nu\rho/I_{0}}\right)/\left(L_{\eta}\nu^{3/4}\rho^{11/4}\right)\). The stationary solution of this system can be written as
$$L_{d} = L_{N},\quad(19)$$
$$\frac{1}{L_{D}} = \frac{1}{L_{N}} - \frac{\exp\left(-\sqrt{I_{T}/I_{0}}\right)}{L_{\eta}}.\quad(20)$$
Equations (19) and (20) can be rewritten in the form
$$\tau_{0} = \sqrt{\frac{2\left|\beta_{2}\right|}{k_{0}n_{2}I_{0}}},\quad(21)$$
$$\frac{1}{a_{0}} = \sqrt{k_{0}n_{0}\left(\frac{k_{0}n_{2}I_{T}}{2X} - 2\sqrt{\pi}\,\eta W_{0}\sqrt{\frac{2\left|\beta_{2}\right|}{k_{0}n_{2}I_{T}}}\,\frac{\exp\left(-\sqrt{X}\right)}{X^{5/4}}\right)}.\quad(22)$$
Here, \(X = I_{T}/I_{0}\), and we use the relation \(8\pi\gamma/cn_{0} \equiv k_{0}n_{2}\), where \(n_{2}\) is the nonlinear refractive index of the medium. It follows from (21) and (22) that as the signal intensity grows, the pulse shortens and widens. The radicand in Eq. (22) must be positive (a zero radicand corresponds to an infinite signal width). From this condition we obtain
$$I_{0} < \frac{4I_{T}}{\left(\tilde{W}\left(2A^{2}\right)\right)^{2}},\quad(23)$$
where \(A = 4\eta W_{0}\sqrt{2\pi\left|\beta_{2}\right|}/\left(k_{0}n_{2}I_{T}\right)^{3/2}\) and \(\tilde{W}(x)\) is the Lambert function [32]. Let us consider the stability of the stationary solutions (21), (22) of system (14)–(17). We obtain four eigenvalues according to Lyapunov [33]:
$$\lambda = \pm\sqrt{\frac{-\left(U_{,\nu\nu}m_{2} - dm_{1}\right) \pm \sqrt{\left(U_{,\nu\nu}m_{2} - dm_{1}\right)^{2} - 4m_{1}m_{2}\left(hU_{,\nu\rho} - dU_{,\nu\nu}\right)}}{2m_{1}m_{2}}}.\quad(24)$$
Here, the subscripts after the comma denote derivatives with respect to the corresponding variables, \(m_{1} \equiv m_{\nu}\), \(m_{2} \equiv m_{\rho}\), \(h = \tilde{F}_{,\nu} - U_{,\nu\rho}\), and \(d = \tilde{F}_{,\rho} - U_{,\rho\rho}\). The stationary solution is stable if \(\lambda\) has no positive real part. It is easy to show that \(\lambda\) is purely imaginary if the conditions
$$hU_{,\nu\rho} - dU_{,\nu\nu} > 0,\quad(25)$$
$$U_{,\nu\nu}m_{2} - dm_{1} > 0\quad(26)$$
are met.
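Condition (23) is straightforward to evaluate numerically. The sketch below uses SciPy's Lambert function; the values of \(I_T\) and the dimensionless combination \(A\) are arbitrary illustrative choices, not derived from a specific medium (the physical \(A\) follows from \(\eta\), \(W_0\), \(\beta_2\), \(k_0\), \(n_2\), and \(I_T\)):

```python
import numpy as np
from scipy.special import lambertw

def intensity_upper_bound(I_T, A):
    """Upper intensity limit from condition (23): I0 < 4*I_T / W(2*A^2)^2,
    where W is the principal branch of the Lambert function."""
    w = lambertw(2 * A**2).real   # argument > 0, so the principal branch is real
    return 4 * I_T / w**2

I_T = 1.0e15   # illustrative tunneling intensity scale, W/cm^2
A = 5.0        # illustrative dimensionless combination
print(intensity_upper_bound(I_T, A))
```

For arguments above \(e\), the Lambert function exceeds 1, so the admissible \(I_0\) then lies below \(4I_T\); larger \(A\) (stronger ionization contribution) tightens the bound.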
The real parts are then zero, meaning that under weak perturbations the signal parameters oscillate in the vicinity of their stationary values. If the ionization term is removed \((\tilde{F} = 0)\), conditions (25) and (26) reduce to the conditions for a minimum of the potential, \(U_{,\nu\nu}U_{,\rho\rho} - \left(U_{,\nu\rho}\right)^{2} > 0\) and \(U_{,\nu\nu} > 0\). As expected, the potential function has no minimum in this case. Figure 1a shows the potential surface \(U\) without ionization, and Fig. 1b shows the corresponding vector field \(F_{\nu}\), \(F_{\rho}\). When ionization is included, the approximation \(I_{T}/I_{0} \gg 1\) shows that conditions (25) and (26) are satisfied at any intensity. This result agrees with the conclusion reached for planar signals in [34], where a linear approximation of the Keldysh formula in the tunneling limit was used. Note that in this stability analysis we ignored the shift of the signal frequency; allowing for this effect would drive the system out of equilibrium. However, since the frequency shift enters the system of equations for the signal parameters through the higher-order dispersion \((\beta_{3})\) and nonlinearity \((\gamma/\omega)\) terms [34], this instability can be ignored at the initial stage of the dynamics. As was shown in [28, 35, 36], ionization-induced absorption also limits light-bullet propagation in a dielectric to distances of several millimeters. Let us examine the region of intensities satisfying condition (23) as a function of the central wavelength of the signal, taking sapphire as the material.
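The eigenvalue formula (24) together with conditions (25) and (26) can be checked numerically. In the sketch below the coefficients are synthetic, chosen only so that both inequalities hold and the inner discriminant is real (they are not derived from a physical medium); for such values all four eigenvalues come out purely imaginary, i.e., oscillatory:

```python
import numpy as np

def eigenvalues(U_nn, U_nr, h, d, m1, m2):
    """Four Lyapunov eigenvalues of the linearized system (14)-(17), Eq. (24).
    Notation: U_nn = U_{,nu nu}, U_nr = U_{,nu rho}."""
    b = U_nn * m2 - d * m1
    disc = b**2 - 4 * m1 * m2 * (h * U_nr - d * U_nn)
    mu = (-b + np.array([1.0, -1.0]) * np.sqrt(disc + 0j)) / (2 * m1 * m2)
    return np.concatenate([np.sqrt(mu), -np.sqrt(mu)])

# Synthetic coefficients satisfying (25): h*U_nr - d*U_nn > 0
# and (26): U_nn*m2 - d*m1 > 0, with a non-negative inner discriminant
lam = eigenvalues(U_nn=1.0, U_nr=1.0, h=0.1, d=-1.0, m1=1.0, m2=2.0)
print(np.max(np.abs(lam.real)))  # ~0: purely imaginary -> oscillations
```

Violating either inequality (e.g., taking \(d\) large and positive) produces an eigenvalue with a positive real part, i.e., a growing perturbation.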
For sapphire (\(U_{i} = 7.3\ \mathrm{eV}\), \(N_{0} = 2.36\times10^{28}\ \mathrm{m^{-3}}\), and \(\tau_{c} = 1.59\times10^{-15}\ \mathrm{s}\)), the approximate dependence of the refractive index on the wavelength in the mid-infrared range can be written as [37]
$$n_{0} = 1.75\sqrt{1 + \frac{1.73\,\lambda^{2}}{\lambda^{2} - 18^{2}}}\quad(27)$$
(with \(\lambda\) in micrometers), and the GVD coefficient is given by
$$\beta_{2} = \frac{\lambda^{3}}{2\pi c^{2}}\frac{d^{2}n_{0}}{d\lambda^{2}}.\quad(28)$$

Fig. 1. (a) Potential field \(U(\nu,\rho)\), which determines the dynamics of the relative length \(\nu\) and width \(\rho\) of a planar bullet (without the ionization contribution, \(L_{\eta} = \infty\)). (b) Corresponding field of forces with projections \(F_{\nu} = -\partial U/\partial\nu\) and \(F_{\rho} = -\partial U/\partial\rho\).

The nonlinear refractive index is approximated by the relation [38]
$$n_{2} = n_{2}^{0} + N_{1}\exp\left(-\frac{\lambda - \lambda_{0}}{\lambda_{1}}\right) + N_{2}\exp\left(-\frac{\lambda - \lambda_{0}}{\lambda_{2}}\right).\quad(29)$$
Here, \(n_{2}^{0} = 2.5\times10^{-16}\ \mathrm{cm^{2}/W}\), \(N_{1} = 2.3\times10^{-16}\ \mathrm{cm^{2}/W}\), \(\lambda_{0} = 266.0\ \mathrm{nm}\), \(\lambda_{1} = 46.6\ \mathrm{nm}\), and \(\lambda_{2} = 1086.3\ \mathrm{nm}\). Substituting (27)–(29) into (23), we find the desired dependence (Fig. 2). In the region to the right of the dashed line, the condition \(\omega\tau_{p} > 10\) is satisfied, so the slowly varying envelope approximation used here is valid. With \(I_{0} = 2.8\times10^{13}\ \mathrm{W/cm^{2}}\) and \(\lambda = 5\ \mu\mathrm{m}\), Eqs. (21) and (22) give the following estimates of the planar pulse parameters: \(\tau_{0} = 30\ \mathrm{fs}\) and \(a_{0} = 10\ \mu\mathrm{m}\).

Fig. 2. The solid line, described by the right-hand side of condition (23), determines the upper intensity limit for the existence of a stationary solution in sapphire. The region to the right of the dashed line, which determines the intensity according to formula (21) for \(\tau_{p}\omega = 10\), corresponds to the range of applicability of the slowly varying envelope approximation \((\tau_{p}\omega \geqslant 10)\).

Above, we found that conditions (25) and (26) for the stability of a planar bullet against perturbations of its length and width are satisfied at any intensity. This means that at small deviations of the particle from its equilibrium position (in the mechanical analogy), restoring forces directed toward the equilibrium point \(\nu = 1\), \(\rho = 1\) act on it. Analysis of the vector field with ionization included (Fig. 3) showed that the restoring forces fall exponentially as the signal intensity drops (see the expression for the ionization term \(\tilde{F}\)): already at an intensity of \(I_{0} = 1.5\times10^{13}\ \mathrm{W/cm^{2}}\) (at a wavelength of \(\lambda = 5\ \mu\mathrm{m}\), Fig. 3a), the vector field resembles the one without ionization (Fig. 1). So even though the stability conditions are formally satisfied at such an intensity and the restoring forces are nonzero, small perturbations associated with the effects we have ignored will in fact quickly knock the system out of equilibrium. Analysis of the vector field suggests that quasi-stability of a planar bullet at a wavelength of \(\lambda = 5\ \mu\mathrm{m}\) can be expected starting at an intensity of \(I_{0} = 2\times10^{13}\ \mathrm{W/cm^{2}}\). A similar situation was observed in the study of the stability region of three-dimensional light bullets [39]. In contrast to a planar bullet, the stability region of a three-dimensional bullet is also bounded in intensity from below; there too, analysis of the vector field showed that the actual stability region lies above the lower boundary of the formal stability window, owing to absorption effects and terms of the next order of smallness.

Fig. 3. Vector field \(F_{\nu} = -\partial U/\partial\nu\), \(F_{\rho} = -\partial U/\partial\rho + \tilde{F}\) (with ionization included) that determines the dynamics of a planar pulse with central wavelength \(\lambda = 5\ \mu\mathrm{m}\) propagating in sapphire, for intensities \(I_{0}\) = (a) 1.5 × 10¹³, (b) 2 × 10¹³, (c) 2.3 × 10¹³, and (d) 2.8 × 10¹³ W/cm².
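The sign of \(\beta_{2}\) at the working wavelength can be checked directly from (27) and (28). The sketch below differentiates the Sellmeier-type fit numerically (central differences, \(\lambda\) in µm) and confirms anomalous dispersion (\(\beta_{2} < 0\)) at \(\lambda = 5\) µm; the step size and unit conversion are implementation choices, not part of the original formulas:

```python
import numpy as np

def n0(lam_um):
    """Sellmeier-type fit for sapphire (lambda in micrometers)."""
    return 1.75 * np.sqrt(1 + 1.73 * lam_um**2 / (lam_um**2 - 18.0**2))

def beta2(lam_um, h=1e-3):
    """GVD coefficient beta2 = lambda^3/(2 pi c^2) * d^2 n0/d lambda^2,
    with the second derivative taken by central differences."""
    c = 2.998e8                                                       # m/s
    d2n = (n0(lam_um + h) - 2 * n0(lam_um) + n0(lam_um - h)) / h**2   # 1/um^2
    lam_m = lam_um * 1e-6
    return lam_m**3 / (2 * np.pi * c**2) * d2n * 1e12  # s^2/m (um^-2 -> m^-2)

print(n0(5.0))     # ~1.62, close to tabulated mid-IR values for sapphire
print(beta2(5.0))  # negative: anomalous GVD at 5 um
```

A negative \(\beta_{2}\) at 5 µm is consistent with the premise stated in the introduction that the anomalous-GVD region of most media lies in the infrared transmission band.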
CONCLUSIONS

An analytical description was obtained for the propagation of planar pulses in the mode of mutual compensation between diffraction and ionization-induced divergence on the one hand and self-focusing on the other. The temporal balance was determined by the compensation of dispersive broadening by cubic nonlinearity. The dynamics of a planar signal in the tunneling-ionization mode was analyzed using the method of moments. Analytical expressions were obtained for the quasi-stationary length and width of a planar pulse, and conditions for quasi-stable propagation were found according to Lyapunov. It should be noted that allowing for the frequency shift toward the red region of the spectrum caused by stimulated Raman self-scattering, or toward the blue region if ionization effects dominate, will slowly unbalance the system and create oscillations. Equilibrium is also disturbed by the absorption of photons during ionization, which was ignored in this work. The equilibrium investigated here is therefore quasi-stable.

FUNDING

The work of V.A. Khalyapin was supported by the RF Ministry of Science and Higher Education, project no. 075-02-2022-872.

REFERENCES

1. Rairoux, P., Schillinger, H., Niedermeier, S., et al., Appl. Phys. B, 2000, vol. 71, p. 573.
2. Diels, J.-C., Bernstein, R., Stahlkopf, K., et al., Sci. Am., 1997, vol. 277, p. 50.
3. Alfano, R.R., The Supercontinuum Laser Source, New York: Springer, 1989.
4. D'Amico, C., Houard, A., Franco, M., et al., Phys. Rev. Lett., 2007, vol. 98, 235002.
5. Kivshar, Yu.S. and Agrawal, G.P., Optical Solitons: From Fibers to Photonic Crystals, New York: Academic, 2003.
6. Edmundson, D.E. and Enns, R.H., Opt. Lett., 1992, vol. 17, p. 586.
7. Mihalache, D., Mazilu, D., Crasovan, L.-C., et al., Phys. Rev. Lett., 2002, vol. 88, 073902.
8. Fibich, G. and Ilan, B., Opt. Lett., 2004, vol. 29, p. 887.
9. Raghavan, S. and Agrawal, G.P., Opt. Commun., 2000, vol. 180, p. 377.
10. Sazonov, S.V., Kalinovich, A.A., Komissarova, M.V., et al., Phys. Rev. A, 2019, vol. 100, 033835.
11. Couairon, A., Eur. Phys. J. D, 1996, vol. 27, p. 159.
12. Henz, S. and Herrmann, J., Phys. Rev. E, 2006, vol. 53, p. 4092.
13. Sprangle, P., Penano, J.R., and Hafizi, B., Phys. Rev. E, 2002, vol. 66, 046418.
14. Sprangle, P., Esarey, E., and Krall, J., Phys. Rev. E, 1996, vol. 54, p. 4211.
15. Penano, J., Palastro, J.P., Hafizi, B., et al., Phys. Rev. A, 2017, vol. 96, 013829.
16. Couairon, A. and Mysyrowicz, A., Phys. Rep., 2007, vol. 441, p. 47.
17. Chekalin, S.V., Dokukina, E.A., Dormidonov, A.E., et al., J. Phys. B, 2015, vol. 48, 094008.
18. Voronin, A.A. and Zheltikov, A.M., Phys.—Usp., 2016, vol. 59, p. 869.
19. Saleh, M.F., Chang, W., Hölzer, P., et al., Phys. Rev. Lett., 2011, vol. 107, 203902.
20. Hölzer, P., Chang, W., Travers, J., et al., Phys. Rev. Lett., 2011, vol. 107, 203901.
21. Facao, M., Carvalho, M.I., and Almeida, P., Phys. Rev. A, 2013, vol. 87, 063803.
22. Dianov, E.M., Karasik, A.Y., Mamyshev, P.V., et al., JETP Lett., 1985, vol. 41, no. 6, p. 294.
23. Mitschke, F.M. and Mollenauer, L.F., Opt. Lett., 1986, vol. 11, p. 659.
24. Gordon, J.P., Opt. Lett., 1986, vol. 11, p. 662.
25. Santhanam, J. and Agrawal, G., Opt. Commun., 2003, vol. 222, p. 413.
26. Bugay, A.N. and Khalyapin, V.A., Phys. Lett. A, 2017, vol. 381, p. 399.
27. Keldysh, L.V., Sov. Phys. JETP, 1965, vol. 20, p. 1307.
28. Zaloznaya, E.D., Dormidonov, A.E., Kompanets, V.O., et al., JETP Lett., 2021, vol. 113, no. 12, p. 817.
29. Talepbour, A., Yang, J., and Chin, S.L., Opt. Commun., 1999, vol. 163, p. 29.
30. Vlasov, S.N., Petrishchev, V.A., and Talanov, V.I., Radiophys. Quantum Electron., 1971, vol. 14, no. 9, p. 1062.
31. Maimistov, A.I., J. Exp. Theor. Phys., 1993, vol. 77, p. 727.
32. Corless, R., Gonnet, G., Hare, D.E.G., et al., Adv. Comput. Math., 1996, vol. 5, p. 329.
33. Lyapunov, A.M., Obshchaya zadacha ob ustoichivosti dvizheniya (General Problem of Motion Stability), Moscow: Gostekhizdat, 1950.
34. Khalyapin, V.A. and Bugay, A.N., Bull. Russ. Acad. Sci.: Phys., 2021, vol. 85, no. 12, p. 1424.
35. Dormidonov, A.E., Kompanets, V.O., Chekalin, S.V., et al., Quantum Electron., 2018, vol. 48, no. 4, p. 372.
36. Bugay, A.N. and Khalyapin, V.A., Laser Phys., 2022, vol. 32, 025401.
37. Malitson, I.H., J. Opt. Soc. Am., 1962, vol. 52, p. 1377.
38. Major, A., Yoshino, F., Nikolakakos, I., et al., Opt. Lett., 2004, vol. 15, p. 602.
39. Bugay, A.N. and Khalyapin, V.A., Chaos Solitons Fractals, 2022, vol. 156, 111799.

AFFILIATIONS

Immanuel Kant Baltic Federal University, 236041, Kaliningrad, Russia — V.A. Khalyapin
Kaliningrad State Technical University, 236022, Kaliningrad, Russia — V.A. Khalyapin
Joint Institute for Nuclear Research, 141980, Dubna, Moscow oblast, Russia — A.N. Bugay

Correspondence to V.A. Khalyapin. Translated by E. Bondareva.

Revised: 15 July 2022. Issue date: November 2022. DOI: 10.3103/S1062873822110144
In addition, the Riemann problem is also solved with the standard method and the instability of Riemann solutions with respect to the specific small perturbation of initial data is pointed out in some particular situations. MAPPING PRESERVING NUMERICAL RANGE OF OPERATOR PRODUCTS ON C*-ALGEBRAS MABROUK, MOHAMED 1963 Let $\mathcal{A}$ and $\mathcal{B}$ be two unital $C^*$-algebras. Denote by W(a) the numerical range of an element $a{\in}\mathcal{A}$. We show that the condition W(ax) = W(bx), ${\forall}x{\in}\mathcal{A}$ implies that a = b. Using this, among other results, it is proved that if ${\phi}$ : $\mathcal{A}{\rightarrow}\mathcal{B}$ is a surjective map such that $W({\phi}(a){\phi}(b){\phi}(c))=W(abc)$ for all a, b and $c{\in}\mathcal{A}$, then ${\phi}(1){\in}Z(B)$ and the map ${\psi}={\phi}(1)^2{\phi}$ is multiplicative. MULTIDIMENSIONAL OPEN SYSTEM FOR VALVELESS PUMPING JUNG, EUNOK;KIM, DO WAN;LEE, JONGGUL;LEE, WANHO 1973 In this study, we present a multidimensional open system for valveless pumping (VP). This system consists of an elastic tube connected to two open tanks filled with a fluid under gravity. The two-dimensional elastic tube model is constructed based on the immersed boundary method, and the tank model is governed by a system of ordinary differential equations based on the work-energy principle. The flows into and out of the elastic tube are modeled in terms of the source/sink patches inside the tube. The fluid dynamics of this system is generated by the periodic compress-and-release action applied to an asymmetric region of the elastic tube. We have developed an algorithm to couple these partial differential equations and ordinary differential equations using the pressure-flow relationship and the linearity of the discretized Navier-Stokes equations. We have observed the most important feature of VP, namely, the existence of a unidirectional net flow in the system. 
Our computations are focused on the factors that strongly influence the occurrence of unidirectional flows, for example, the frequency, compression duration, and location of pumping. Based on these investigations, some case studies are performed to observe the details of the ow features. SOME IDENTITIES FOR BERNOULLI NUMBERS OF THE SECOND KIND ARISING FROM A NON-LINEAR DIFFERENTIAL EQUATION KIM, DAE SAN;KIM, TAEKYUN 2001 In this paper, we give explicit and new identities for the Bernoulli numbers of the second kind which are derived from a non-linear differential equation. ONE-HOMOGENEOUS WEIGHT CODES OVER FINITE CHAIN RINGS SARI, MUSTAFA;SIAP, IRFAN;SIAP, VEDAT 2011 This paper determines the structures of one-homogeneous weight codes over finite chain rings and studies the algebraic properties of these codes. We present explicit constructions of one-homogeneous weight codes over finite chain rings. By taking advantage of the distance-preserving Gray map defined in [7] from the finite chain ring to its residue field, we obtain a family of optimal one-Hamming weight codes over the residue field. Further, we propose a generalized method that also includes the examples of optimal codes obtained by Shi et al. in [17]. A CHARACTERIZATION OF SOME PGL(2, q) BY MAXIMUM ELEMENT ORDERS LI, JINBAO;SHI, WUJIE;YU, DAPENG 2025 In this paper, we characterize some PGL(2, q) by their orders and maximum element orders. We also prove that PSL(2, p) with $p{\geqslant}3$ a prime can be determined by their orders and maximum element orders. Moreover, we show that, in general, if $q=p^n$ with p a prime and n > 1, PGL(2, q) can not be uniquely determined by their orders and maximum element orders. Several known results are generalized. 
SPECIAL CLASSES OF MERIDIAN SURFACES IN THE FOUR-DIMENSIONAL EUCLIDEAN SPACE GANCHEV, GEORGI;MILOUSHEVA, VELICHKA 2035 Meridian surfaces in the Euclidean 4-space are two-dimensional surfaces which are one-parameter systems of meridians of a standard rotational hypersurface. On the base of our invariant theory of surfaces we study meridian surfaces with special invariants. In the present paper we give the complete classification of Chen meridian surfaces and meridian surfaces with parallel normal bundle. JOINING OF CIRCUITS IN PSL(2, ℤ)-SPACE MUSHTAQ, QAISER;RAZAQ, ABDUL 2047 The coset diagrams are composed of fragments, and the fragments are further composed of circuits at a certain common point. A condition for the existence of a certain fragment ${\gamma}$ of a coset diagram in a coset diagram is a polynomial f in ${\mathbb{Z}}$[z]. In this paper, we answer the question: how many polynomials are obtained from the fragments, evolved by joining the circuits (n, n) and (m, m), where n < m, at all points. SIMILAR AND SELF-SIMILAR CURVES IN MINKOWSKI n-SPACE OZDEMIR, MUSTAFA;SIMSEK, HAKAN 2071 In this paper, we investigate the similarity transformations in the Minkowski n-space. We study the geometric invariants of non-null curves under the similarity transformations. Besides, we extend the fundamental theorem for a non-null curve according to a similarity motion of ${\mathbb{E}}_1^n$. We determine the parametrizations of non-null self-similar curves in ${\mathbb{E}}_1^n$. TWISTED QUADRATIC MOMENTS FOR DIRICHLET L-FUNCTIONS LOUBOUTIN, STEPHANE R. 2095 Given c, a positive integer, we set. $$M(f,c):=\frac{2}{{\phi}(f)}\sum_{{\chi}{\in}X^-_f}{\chi}(c)|L(1,{\chi})|^2$$, where $X^-_f$ is the set of the $\phi$(f)/2 odd Dirichlet characters mod f > 2, with gcd(f, c) = 1. We point out several mistakes in recently published papers and we give explicit closed formulas for the f's such that their prime divisors are all equal to ${\pm}1$ modulo c. 
As a Corollary, we obtain closed formulas for M(f, c) for c $\in$ {1, 2, 3, 4, 5, 6, 8, 10}. We also discuss the case of twisted quadratic moments for primitive characters.
CommonCrawl
\begin{document} \title{$P$-spaces and the Whyburn property} \begin{abstract} \noi We investigate the Whyburn and weakly Whyburn property in the class of $P$-spaces, that is, spaces where every countable intersection of open sets is open. We construct examples of non-weakly Whyburn $P$-spaces of size continuum, thus giving a negative answer under CH to a question of Pelant, Tkachenko, Tkachuk and Wilson (\cite{PTTW}). In addition, we show that the weak Kurepa Hypothesis (an assumption weaker than CH) implies the existence of a non-weakly Whyburn $P$-space of size $\aleph_2$. Finally, we consider the behaviour of the above-mentioned properties under products; we show in particular that the product of a Lindel\"of weakly Whyburn $P$-space and a Lindel\"of Whyburn $P$-space is weakly Whyburn, and we give a consistent example of a non-Whyburn product of two Lindel\"of Whyburn $P$-spaces. \end{abstract} \section{Introduction} We assume all spaces to be Hausdorff. All undefined notions can be found in \cite{E} and \cite{K1}. Whyburn and weakly Whyburn spaces have recently been considered by various authors. They provide a natural generalization of Fr\'echet and sequential spaces which is weak enough to offer lots of challenges, and indeed some fundamental questions are still open. They have also turned out to be useful in the $M_3=M_1$ problem (see \cite{GT} and \cite{TY}). Let $X$ be a topological space. A set $F \subseteq X$ is said to be an \emph{almost closed} set converging to $x$ if $\overline{F} \setminus F=\{x\}$; if this happens we will write $F \to x$. A space $X$ is said to be \emph{Whyburn} (\emph{weakly Whyburn}) if for each non-closed set $A \subseteq X$ and each (some) point $x \in \overline{A} \setminus A$ there is an almost closed set $F \subseteq A$ such that $F \to x$. Fr\'echet spaces are Whyburn and sequential spaces are weakly Whyburn. The class of weakly Whyburn spaces is wide enough to include all regular scattered spaces.
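Since the implication ``Fr\'echet $\Rightarrow$ Whyburn'' is used repeatedly below, here is a brief sketch of the standard argument, written in our own words (a reconstruction, not a quotation from any of the papers cited above):

```latex
% Sketch: every Fréchet (Hausdorff) space is Whyburn.
% Let A be non-closed and pick x \in \overline{A} \setminus A.
% By Fréchetness there is a sequence (x_n) in A with x_n \to x;
% set F = \{x_n : n \in \omega\} \subseteq A.
% Claim: \overline{F} \subseteq F \cup \{x\}. Indeed, if y \notin F \cup \{x\},
% separate y from x by disjoint open sets; the one around x absorbs a tail
% of (x_n), and shrinking the one around y avoids the finitely many
% remaining terms (points are closed in a Hausdorff space),
% so y \notin \overline{F}.
% Since x_n \to x and x \notin A \supseteq F, we get
% \overline{F} \setminus F = \{x\}, i.e. F is almost closed and F \to x.
```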
Recall that a space is radial (pseudoradial) if for each $A \subseteq X$ and every (some) point $x \in \overline{A} \setminus A$ there is a transfinite sequence $\{x_\alpha: \alpha \in \kappa\} \subseteq A$ which converges to $x$. The radial character of a pseudoradial space $X$ is the smallest cardinal $\kappa$ such that the definition of pseudoradiality for $X$ works by considering sequences of length at most $\kappa$. The (weakly) Whyburn property behaves very nicely on compact spaces; namely, every countably compact Whyburn space is Fr\'echet and every compact weakly Whyburn space is pseudoradial. A space is called a \emph{$P$-space} if every $G_\delta$ set is open. A Lindel\"of space is one in which every open cover has a countable subcover. Lindel\"of $P$-spaces exhibit behaviour that is very close to that of compact spaces; for example, every Lindel\"of subspace of a Lindel\"of $P$-space is closed and every Lindel\"of Hausdorff $P$-space is normal. Notice, however, that the class of Lindel\"of $P$-spaces is only finitely productive. The study of the Whyburn and weakly Whyburn property on $P$-spaces was initiated in \cite{PTTW}, where it is proved that every $P$-space of character $\omega_1$ is Whyburn and that not every regular Lindel\"of $P$-space is Whyburn, and several interesting problems are left open. Whyburn and weakly Whyburn spaces were formerly known as AP (WAP) spaces (see \cite{S1} and \cite{PT}). Although this terminology is still in use today (see for example \cite{GT}), we prefer to adopt the new name, because it sounds better and gives credit to a paper of Whyburn where these spaces were first studied (\cite{W}). \section{Toward a small non-weakly Whyburn $P$-space} The main purpose of this section is to give several examples of non-weakly Whyburn $P$-spaces. No example of this kind has been exhibited so far.
We are particularly concerned with finding examples of minimal cardinality or with further restrictions (for example, Lindel\"of examples). Problem 3.6 in \cite{PTTW} asks whether every regular $P$-space of cardinality $\aleph_1$ is (weakly) Whyburn. Let us start by settling the weak form of this problem with a very simple ZFC construction. \begin{Ex} There is a zero-dimensional, non-Whyburn regular (hence, completely regular) $P$-space of cardinality $\omega_1$. \end{Ex} \begin{proof} For each $\alpha \in \omega_1$ let $L_\alpha$ be a copy of the one-point Lindel\"ofication of a discrete space of cardinality $\omega_1$, $p_\alpha$ being the unique non-isolated point of $L_\alpha$ (whose neighbourhoods are all co-countable subsets of $L_\al$ containing the point $p_\al$ itself). Let $Y=\bigoplus_{\alpha\in \omega_1} L_\alpha$ and choose a point $\infty \notin Y$. Topologize $X=Y \cup \{\infty\}$ in the following way. Neighbourhoods of the points of $Y$ are as usual, while a neighbourhood of $\infty$ is $\{\infty\} \cup \bigcup_{\alpha \geq \beta} T_\alpha$ for some $\beta \in \omega_1$ and some choice of a neighbourhood $T_\al$ of $p_\al$ in $L_\al$ for every $\al\ge\beta$. It is clear that $X$ is a zero-dimensional regular $P$-space of cardinality $\omega_1$. To show that it is not Whyburn consider the set $A=Y\setminus \{p_\alpha : \alpha \in \omega_1\}$ and note that $\infty \in \overline{A}$ and if $B \subseteq A$ is a set having the point $\infty$ in its closure then $\overline{B}$ contains infinitely many points $p_\alpha$. \end{proof} Notice that the existence of a non-Whyburn $P$-space of cardinality $\aleph_1$ and character less than $2^{\aleph_1}$ is not provable in ZFC, see Theorem 3 in \cite{BS} and the remark at the end of the section. We now come to the main dish of this section. \begin{Ex} \label{Ex: nonwwh} There is a completely regular non-weakly Whyburn $P$-space of cardinality $\mathfrak{c}$.
\end{Ex} \begin{proof} Let $Y=\mom$, $Z=\omm$ and $X=Y\cup Z$. We will introduce a topology on $X$ in the following way. Set $\Gamma={}^Y{\omega}_1={}^{(\mom)}{\omega}_1$, and for every $g\in\Gamma$ and $\fs\in Y$ put \[ V_{\fs,g}=\{\psi\in X\,\vert\,\psi\text{ is an extension of }\fs \wedge\ \fore n{(\mathop{\mathrm{dom}}\psi{\setminus}\mathop{\mathrm{dom}}\fs)}\psi(n)\ge g(\psi\uph_n)\}\eqno(\ast) \] (observe, in particular, that $\fs$ always belongs to $V_{\fs,g}$).\par First of all, we want to show that associating to every $\fs\in Y$ the collection $\{V_{\fs,g} \,\colon\,g\in\Gamma\}$ as a fundamental system of (open) neighbourhoods, and letting every $\fs\in Z$ be isolated, gives rise to a $P$-space topology on $X$. Actually, we have to show that \begin{enumerate} \item $\fore\fs Y\fore g\Gamma\fore\psi{V_{\fs,g}\cap Y}\ext{g'}\Gamma V_{\psi,g'}\subseteq V_{\fs,g}$; \item $\fore\fs Y\fore{\Gamma'}{[\Gamma]^{\le{\omega}}}\ext g\Gamma V_{\fs,g}\subseteq\bigcap_{g'\in\Gamma'}V_{\fs,g'}$. \end{enumerate}\par To prove (1), simply take $g'=g$; we claim that $V_{\psi,g}\subseteq V_{\fs,g}$. Indeed, if $\eta\in V_{\psi,g}$, then first of all $\eta$ extends $\psi$; since $\psi$, in turn, extends $\fs$ (as $\psi\in V_{\fs,g}$), we have that $\eta$ extends $\fs$. Therefore, to prove that $\eta\in V_{\fs,g}$, we only have to show that $\eta(n)\ge g(\eta\uph _n)$ for every $n\in\mathop{\mathrm{dom}}\eta{\setminus}\mathop{\mathrm{dom}}\fs$.
Now, if $n\in\mathop{\mathrm{dom}}\eta{\setminus}\mathop{\mathrm{dom}}\fs$, then either $n\in\mathop{\mathrm{dom}}\psi{\setminus}\mathop{\mathrm{dom}}\fs$, in which case $\eta(n)=\psi(n)\ge g(\psi\uph_n)=g(\eta\uph_n)$ (as $\eta$ extends $\psi$ and $\psi\in V_{\fs,g}$), or $n\in\mathop{\mathrm{dom}}\eta{\setminus}\mathop{\mathrm{dom}}\psi$, in which case we have $\eta(n)\ge g(\eta\uph_n)$ simply because $\eta\in V_{\psi,g}$.\par As for property (2), let $\hat\fs\in Y$ and $\Gamma'$ be a countable subset of $\Gamma$, and define $g\in\Gamma$ by letting $g(\fs)=\sup\fin{g'(\fs)}{g'\in\Gamma'}$ for each $\fs\in Y$. Now, if $\psi$ is an arbitrary element of $V_{\hat\fs,g}$ and $\hat g'$ is an arbitrary element of $\Gamma'$, then to prove that $\psi\in V_{\hat\fs,\hat g'}$ we just have to show that $\psi(n)\ge\hat g'(\psi\uph_n)$ for every $n\in \mathop{\mathrm{dom}}\psi{\setminus}\mathop{\mathrm{dom}}\hat\fs$. Indeed, if $n\in \mathop{\mathrm{dom}}\psi{\setminus}\mathop{\mathrm{dom}}\hat\fs$, then from $\psi\in V_{\hat\fs,g}$ it follows that $\psi(n)\ge g(\psi\uph_n)=\sup\fin{g'(\psi\uph_n)}{g'\in\Gamma'}\ge\hat g'(\psi\uph_n)$.\par
This implies in particular (as $\fs'\neq\fs''$) that $\fs''$ cannot be an extension of $\fs'$; therefore, fixing any $g'\in\Gamma$, it follows from $(\ast)$ that $V_{\fs',g'}$ is a neighbourhood of $\fs'$ which does not contain $\fs''$. Let us now show that every collection $\cI_\fs$ consists of closed sets. If $\fs\in Z$, then $\{\fs\}$ is the only element of $\cI_\fs$; thus, consider an arbitrary $\psi\in X{\setminus}\{\fs\}$. Of course, if $\psi\in Z$ then $\{\psi\}$ is a neighbourhood of $\psi$ disjoint from $\{\fs\}$; therefore, we may assume $\psi\in Y$, and consider an $\hn\in{\omega}$ with $\hn\ge\mathop{\mathrm{dom}}\psi$. Also, fix an element $\hg$ of $\Gamma$ such that $\hg(\fs\uph_\hn)>\fs(\hn)$; then it follows from $(\ast)$ that $\fs\notin V_{\psi,\hg}$, i.e. $\{\fs\}\cap V_{\psi,\hg}={\emptyset}$. Suppose now that $\fs\in Y$. Let $g$ be any element of $\Gamma$, and consider an arbitrary $\psi\in X{\setminus} V_{\fs,g}$---also, since the case $\psi\in Z$ is again trivial, we may assume that $\psi\in Y{\setminus} V_{\fs,g}$. By $(\ast)$, the fact that $\psi\notin V_{\fs,g}$ means that $\psi$ is not an extension of $\fs$, or that there exists $\hn\in\mathop{\mathrm{dom}}\psi{\setminus}\mathop{\mathrm{dom}}\fs$ with $\psi(\hn)<g(\psi\uph_\hn)$. If we are in this second case, then taking any $g^\ast\in\Gamma$ we see that every $\psi'\in V_{\psi,g^\ast}$ is in particular an extension of $\psi$, so that $\hn\in\mathop{\mathrm{dom}}\psi{\setminus}\mathop{\mathrm{dom}}\fs\subseteq\mathop{\mathrm{dom}}\psi'{\setminus}\mathop{\mathrm{dom}}\fs$ and $\psi'(\hn)=\psi(\hn)<g(\psi\uph_\hn)=g(\psi'\uph_\hn)$; as a consequence, $V_{\psi,g^\ast}\cap V_{\fs,g}={\emptyset}$. Suppose now that $\psi$ is not an extension of $\fs$.
If $\fs$ is not an extension of $\psi$ either, then there exists $\hn\in\mathop{\mathrm{dom}}\fs\cap\mathop{\mathrm{dom}}\psi$ with $\fs(\hn)\neq\psi(\hn)$; of course, this entails that $\fs$ and $\psi$ cannot have any common extension, hence in particular taking any $g^\ast\in\Gamma$ it will follow again that $V_{\psi,g^\ast}\cap V_{\fs,g}={\emptyset}$. If, on the contrary, $\fs$ is a (proper) extension of $\psi$, then let $\hn=\mathop{\mathrm{dom}}\psi$ and fix a $g^\ast\in\Gamma$ such that $g^\ast(\psi)>\fs(\hn)$; since every $\eta\in V_{\fs,g}$ is in particular an extension of $\fs$, we will have the inequality $\eta(\hn)=\fs(\hn)<g^\ast(\psi)=g^\ast(\fs\uph_\hn)=g^\ast(\eta\uph_\hn)$---so that, by $(\ast)$, $\eta\notin V_{\psi,g^\ast}$. Therefore, we still see that $V_{\psi,g^\ast}\cap V_{\fs,g}={\emptyset}$.\par Finally, we show that $(X,\tau)$ is not weakly Whyburn. First of all, notice that $Z$ is not closed in $(X,\tau)$; actually, $Z$ even turns out to be dense. Indeed, let $\fs$ be any element of $Y$, and $V_{\fs,g}$ (with $g\in\Gamma$) be an arbitrary basic neighbourhood of $\fs$. Then put $\hn=\mathop{\mathrm{dom}}\fs$, and define by induction an $\al_n\in{\omega}_1$ for every $n\ge\hn$ in the following way.\par\noi --- $\al_\hn=g(\fs)$;\par\noi --- $\al_{n+1}=g(\fs\,{}^{\smallfrown}\langle\al_\hn,\al_{\hn+1},\dots,\al_n\rangle)$.\par\noi Now, let $\psi\in Z$ be defined by \[ \psi(n)= \begin{cases} \fs(n)&\text{for $n<\hn$;}\\ \al_n&\text{for $n\ge\hn$;} \end{cases} \] clearly, $\psi$ extends $\fs$, and for every $n\ge\hn$ (i.e., for every $n\in\mathop{\mathrm{dom}}\psi{\setminus}\mathop{\mathrm{dom}}\fs$) we have that $\psi(n)=\al_n=g(\psi\uph_n)$.
Therefore, it follows from $(\ast)$ that $\psi\in V_{\fs,g}$, so that $V_{\fs,g}\cap Z\neq{\emptyset}$.\par Thus, we will conclude the proof that $(X,\tau)$ is not weakly Whyburn by showing that if $M\subseteq Z$ and $\hat\fs\in Y\cap\overline M$, then there exists an element of $Y{\setminus}\{\hat\fs\}$ which belongs to $\overline M$. Actually, we will show more precisely that there exists an $\al\in{\omega}_1$ such that $\hat\fs\,{}^{\smallfrown}\langle\al\rangle\in\overline M$. As a matter of fact, toward a contradiction, suppose that for every $\al\in{\omega}_1$ there exists $g_\al\in\Gamma$ such that \[ V_{\hat\fs\,{}^{\smallfrown}\langle\al\rangle,g_\al}\cap M={\emptyset}.\eqno(\sharp) \] Then put $\hn=\mathop{\mathrm{dom}}\hat\fs$ and consider a $\hg\in\Gamma$ such that \[ \fore\fs Y\big(\mathop{\mathrm{dom}}\fs\ge\hn+1\Longrightarrow\hg(\fs)=g_{\fs(\hn)}(\fs)\big).\eqno(\spadesuit) \] Since $\hat\fs\in\overline M$, there must exist an $\eta\in V_{\hat\fs,\hg}\cap M$, and this $\eta$ will be in particular an extension of $\hat\fs$---hence also of $\hat\fs\,{}^{\smallfrown}\langle\eta(\hn)\rangle$. However, since $V_{\hat\fs\,{}^{\smallfrown}\langle\eta(\hn)\rangle, g_{\eta(\hn)}}\cap M={\emptyset}$ (by $(\sharp)$), we have in particular that $\eta\notin V_{\hat\fs\,{}^{\smallfrown}\langle\eta(\hn)\rangle, g_{\eta(\hn)}}$. But $\eta$ is an extension of $\hat\fs\,{}^{\smallfrown}\langle\eta(\hn)\rangle$, thus by $(\ast)$ there must exist an $n^\ast\in{\omega}{\setminus}\mathop{\mathrm{dom}}\big( \hat\fs\,{}^{\smallfrown}\langle\eta(\hn)\rangle\big)={\omega}{\setminus}(\hn+1)$ such that $\eta(n^\ast)<g_{\eta(\hn)}\big(\eta\uph_{n^\ast}\big)$. Thus, letting $\fs^\ast=\eta\uph_{n^\ast}$, it follows from $(\spadesuit)$ (as $\mathop{\mathrm{dom}}\fs^\ast=n^\ast\ge\hn+1$) that $\eta(n^\ast)< g_{\eta(\hn)}\big(\eta\uph_{n^\ast}\big)=g_{\fs^\ast(\hn)}(\fs^\ast)=\hg(\fs^\ast)=\hg(\eta\uph_{n^\ast})$. 
Clearly, this contradicts the fact that $\eta\in V_{\hat\fs,\hg}$ (take again $(\ast)$ into account). \end{proof} Here is another way to construct a non-weakly Whyburn $P$-space of cardinality $\mathfrak{c}$. Murtinov\'a (\cite{M}) has independently come up with a similar example. We will exploit the following example of a Baire $P$-space and a trick that Gruenhage and Tamano (\cite{GT}) used to get a countable stratifiable non-weakly Whyburn space. \begin{Ex} \label{meager} There exists a $T_1$ regular dense-in-itself $P$-space of cardinality continuum in which every meager set is nowhere dense. \end{Ex} \begin{proof} Recall that the \emph{support} of a function is the set of points of its domain at which it does not vanish. Let $X=\{f\colon\omega_1\to\omega\ \colon\,f\mathrm{\ has \ \emph{countable}\ support}\}$. Set $B(f, \alpha)=\{g \in X\ \colon\ \forall\beta\leq\alpha\colon g(\beta)=f(\beta)\}$. We define a topology on $X$ by declaring $\{B(f,\alpha): \alpha \in \omega_1\}$ to be a fundamental system of neighbourhoods at $f$. It is easy to see that $X$ is a dense-in-itself $P$-space and that $X$ is $T_1$ and zero-dimensional. Let $\{N_n: n \in \omega \}$ be a countable family of nowhere dense subsets of $X$. Let $f \in X$ and $\alpha \in \omega_1$. Since $N_0$ is nowhere dense, there must be a function $g_0 \in B(f, \alpha)$ and an ordinal $\alpha_0$ such that $B(g_0, \alpha_0) \cap N_0=\emptyset$. Since $B(f, \alpha)$ is an open set we can suppose $B(g_0, \alpha_0) \subset B(f, \alpha)$, and since $\{B(g_0, \gamma) : \gamma \in \omega_1 \}$ is a decreasing family, we can suppose that $\alpha_0>\alpha$. Suppose we have found points $\{g_i : i<k \}$ and increasing ordinals $\{\alpha_i : i <k \}$ in such a way that $B(g_j, \alpha_j) \subset B(g_{j-1}, \alpha_{j-1})$ and $B(g_j, \alpha_j) \cap N_j=\emptyset$ for each $j<k$.
Since $N_k$ is nowhere dense there must be a point $g_k \in B(g_{k-1}, \alpha_{k-1})$ such that $g_k \notin \overline{N_k}$, and thus there must be $\alpha_k$ such that $B(g_k, \alpha_k) \cap N_k=\emptyset$. Again we can suppose $B(g_{k}, \alpha_k) \subset B(g_{k-1}, \alpha_{k-1})$ and $\alpha_k>\alpha_{k-1}$. At the end of the induction the set $\bigcap \{B(g_i, \alpha_i) : i \in \omega \}$ will be a nonempty open set contained in $B(f, \alpha)$, and disjoint from $\bigcup \{N_n : n \in \omega \}$ (to check that $\bigcap\fin{B(g_i,\al_i)}{i\in{\omega}}\neq{\emptyset}$, consider that such a set includes $B(\hat g,{\hat\al})$, where ${\hat\al}=\sup\fin{\al_i}{i\in{\omega}}$ and $\hat g$ is any element of $X$ such that $\hat g(\al')=g_i(\al')$ for every $\al'\in{\omega}_1$ and $i\in{\omega}$ with $\al'\le\al_i$). Picking any point $h \in \bigcap \{B(g_i, \alpha_i) \,\colon\, i \in \omega \}$ we see that $h$ is not in the closure of $\bigcup_{n \in \omega} N_n$, therefore $B(f, \alpha)$ cannot be contained in $\overline{\bigcup_{n \in \omega} N_n}$. Since the choice of $f$ and $\alpha$ was arbitrary it turns out that $\bigcup_{n \in \omega} N_n$ is nowhere dense. \end{proof} Recall that a \emph{weak $P$-space} is a space in which every countable set is closed. \begin{Ex} \label{Ex: nonwwh2} There are a non-weakly Whyburn $P$-space of cardinality $\mathfrak{c}$ and a non-weakly Whyburn weak $P$-space of cardinality $\omega_1$ in ZFC. \end{Ex} \begin{proof} Let $X_1$ and $X_2$ be copies of the space $X$ in Example $\ref{meager}$, $Y_1$ and $Y_2$ be copies of the space $Y$ described in the above remark and $f$ be a bijection between either $X_1$ and $X_2$ or $Y_1$ and $Y_2$ (the context will clarify which spaces we mean). Put $Z=X_1 \cup X_2$ and $W=Y_1 \cup Y_2$.
Take each point of $X_2$ ($Y_2$) to be isolated and declare a neighbourhood of $x$ in $X_1$ ($Y_1$) to be of the form $U \cup (f(U)\setminus f(N))$, where $U$ is a neighbourhood of $x$ in $X$ ($Y$), and $N$ is a nowhere dense set in $X_1$ ($Y_1$). \emph{$Z$ is a $P$-space:} Let $\mathcal{U}=\{U_n \cup (f(U_n) \setminus f(N_n)) : n \in \omega\}$ be a countable family of neighbourhoods of a point $z \in X_1$. Then $\bigcap \mathcal{U}= \bigcap_{n \in \omega} U_n \cup \big(\bigcap_{i \in \omega} f(U_i) \setminus \bigcup_{k \in \omega} f(N_k)\big)$. The fact that $X$ is a $P$-space where every meager set is nowhere dense implies that the last set is a neighbourhood of $z$ in $Z$. Thus every point of $Z$ is a $P$-point, i.e., $Z$ is a $P$-space. \emph{$W$ is a weak $P$-space:} Indeed, let $C \subset W$ be a countable set and $x \in Y_1{\setminus} C$. Let $C_1=C \cap Y_1$ and $C_2=C \cap Y_2$. If $x \notin f^{-1}(C_2)$ then let $U$ be a neighbourhood of $x$ in $Y_1$ disjoint from $C_1 \cup f^{-1}(C_2)$. Then $U \cup f(U)$ will be a neighbourhood of $x$ in $W$ disjoint from $C$. If $x \in f^{-1}(C_2)$ let $U$ be a neighbourhood of $x$ disjoint from $C_1$, and as $Y$ is a $P$-space, the countable set $f^{-1}(C_2)$ will be nowhere dense in it (in fact discrete) so $U \cup (f(U) \setminus C_2)$ is a neighbourhood of $x$ in $W$ disjoint from $C$. \emph{$Z$ and $W$ are not weakly Whyburn:} The set $X_2$ is not closed, however there is no almost closed set converging outside $X_2$. In fact if $A \subset X_2$ is a non-closed set in $Z$ then $f^{-1}(A)$ cannot be nowhere dense, hence $Int(\overline{f^{-1}(A)}) \neq \emptyset$. Moreover, this last set must be uncountable and dense-in-itself, or otherwise $X$ would have isolated points. By the definition of the topology on $Z$, every point of $Int(\overline{f^{-1}(A)})$ will be in the closure of $A$. This proof also works for the space $W=Y_1 \cup Y_2$.
\end{proof} Thus, under CH we get a consistent negative answer to Problem 3.6 in \cite{PTTW}. \begin{Quest} \label{quest} Does there exist a model of ZFC + not CH in which there is a Baire $P$-space of cardinality $\omega_1$? \end{Quest} It is easy to realize that, in the realm of $P$-spaces, the fact that every countable union of nowhere dense sets is nowhere dense is equivalent to being a Baire space. There is no hope of getting a ZFC example like in Question $\ref{quest}$ because the non-existence of a Baire space of cardinality $\omega_1$ is known to be consistent (see \cite{ST}). By means of the construction of Example $\ref{Ex: nonwwh2}$, a positive answer to the previous question would guarantee the existence of a non-weakly Whyburn $P$-space of cardinality $\omega_1$ in a model where the continuum hypothesis fails. Yet, the following problem would still be open. \begin{Quest} Is there a ZFC example of a non-weakly Whyburn $P$-space of cardinality $\aleph_1$? \end{Quest} In \cite{PTTW} it was remarked that every Lindel\"of $P$-space of cardinality $\aleph_1$ is Whyburn. One may wonder if the same applies to weak $P$-spaces. \begin{Th}[{[CH]}] There exists a regular Lindel\"of non-weakly Whyburn weak $P$-space of cardinality and character $\aleph_1$. \end{Th} \begin{proof} Let $Y$ be the real line with the density topology (see for example \cite{T}). This space is known to be a Tychonoff weak $P$-space. Also, since it is a dense-in-itself ccc Baire space having $\pi$-weight $\aleph_1$, by a well-known argument (see for example \cite{R}) it has a Luzin subspace $Z \subset Y$, i.e. a subspace in which every nowhere dense set is countable. Let $X$ be the Alexandroff duplicate of $Z$. It is clear that $X$ is a weak $P$-space, it is Lindel\"of because $Z$ is, and an argument similar to the one in Example~\ref{Ex: nonwwh2} shows that $X$ is not weakly Whyburn. \end{proof} \begin{Quest} Can CH be removed from the previous theorem?
\end{Quest} Going back to non-weakly Whyburn $P$-spaces, we are now going to produce an example of cardinality $\aleph_2$ by using a set-theoretic axiom weaker than CH. A tree $T$ of cardinality $\omega_1$ with $\omega_2$-many uncountable branches is called a {\it weak Kurepa tree}. Using a weak Kurepa tree $T$, it is not difficult to construct a regular $P$-space $X$ of weight $\omega_1$ and cardinality $\omega_2$; take as points of $X$ the uncountable branches and as a base the sets of the form $\hat t=\{B\in X :t\in B\}$ for each $t\in T$. Unfortunately, the existence of a weak Kurepa tree is not provable in ZFC (see Section 8 in \cite{We}). The {\it weak Kurepa Hypothesis}, briefly wKH, is the assertion that there exists a weak Kurepa tree. In \cite{We} it is pointed out that CH implies wKH, but wKH is much weaker than CH. \begin{Ex}[{[{\sf wKH}]}]\label{Ex: nonwwh2'} There is a non-weakly Whyburn regular $P$-space of cardinality $\omega_2$. \end{Ex} \begin{proof} Let $X$ be a regular $P$-space of weight $\omega_1$ and cardinality $\omega_2$ and let $Y=\omega_2\cup\{p\}$, topologized in such a way that every subset of $\omega_2$ is open and a set $U\ni p$ is open whenever $|Y\setminus U|\le \omega_1$. Fix a one-to-one mapping $f:Y\rightarrow X$ and let $Z=Y\times X$. Obviously $Z$ is a regular $P$-space of cardinality $\omega_2$. We claim that $Z$ does not have the weak Whyburn property. Let $A=\{(\alpha ,f(\alpha )) : \alpha \in \omega_2\}\subseteq Z$. Since the space $X$ has weight $\omega_1$, the set $T$ of all complete accumulation points of the set $\{f(\alpha ) :\alpha \in \omega_2\}$ has cardinality $\omega_2$. It is easy to check that $\{p\}\times T\subseteq \overline A$ and so $A$ is not closed in $Z$. Let $B=\{(\alpha ,f(\alpha )) :\alpha \in C\}$ be a subset of $A$. A point can be in $\overline B\setminus A$ only if it is of the form $(p,x)$ and this may happen only if $|B|=\omega_2$.
As before, the set $T'$ of all complete accumulation points of the set $f(C)$ has cardinality $\omega_2$ and $\{p\}\times T'\subseteq \overline B\setminus A$. Therefore, no almost closed subset of $A$ can converge to a point outside $A$, and we conclude that $Z$ does not have the weak Whyburn property. \end{proof} \begin{Quest} Is there a ZFC example of a non-weakly Whyburn $P$-space of cardinality $\omega_2$? \end{Quest} The next result, which is of independent interest, shows that a positive answer to the previous question would require a different approach than that of our Example~\ref{Ex: nonwwh2'}. We are going to use the following simple lemma. \begin{Lemma}\label{1wkt} Every regular $P$-space $X$ with $w(X)\le\omega_1$ has a base of cardinality not greater than $\omega_1$, consisting of clopen sets. \end{Lemma} \begin{proof} By \cite[Theorem~1.1.15]{E}, it suffices to show that the clopen sets form a base for $X$, and this holds because every regular $P$-space is $0$-dimensional. \end{proof} \begin{Prop} If there exists a regular $P$-space $X$ with $\vert X\vert\ge{\omega}_2$ and $w(X)\le{\omega}_1$, then there exists a weak Kurepa tree. \end{Prop} \begin{proof} By the above lemma, there exists a base $\cL=\fin{L_\al}{\al\in{\omega}_1}$ for $X$, consisting of clopen sets. Using transfinite induction, we will construct an ${\omega}_1$-sequence $\big\{\cA_\al\big\}_{\al\in{\omega}_1}$ of collections of nonempty subsets of $X$, with the following properties: \begin{enumerate} \item $\fore\al{{\omega}_1}\cA_\al\text{\ is\ a\ (cl)open\ partition\ of\ }X$ not containing the empty set; \item $\fore\al{{\omega}_1}\forall\al'<\al\colon\cA_{\al}\text{\ is\ a\ refinement\ of\ }\cA_{\al'}$. \end{enumerate} \par Let $\cA_0=\{X\}$. If $\cA_\al$ has been defined for a given $\al\in{\omega}_1$, then let \[ \cA_{\al+1}=(\fin{A\cap L_\al}{A\in\cA_\al}\cup\fin{A{\setminus} L_\al}{A\in\cA_\al}){\setminus}\{{\emptyset}\}.
\] Finally, if $\lambda\in{\omega}_1$ is a limit ordinal and $\cA_\al$ has been defined for $\al<\lambda$, then put \[ \cA_\lambda=\Bfin{\bigcap_{\al\in\lambda}A_\al}{\fore\al\lambda A_\al\in\cA_\al\ \ \wedge\ \ \fore{\al',\al''}\lambda(\al'<\al''\Rightarrow A_{\al'}\supseteq A_{\al''})}{\setminus}\{{\emptyset}\}. \] We prove properties (1) and (2) by transfinite induction on $\al\in{\omega}_1$. As for (1), it is trivial when $\al=0$, and if it holds for a given ${\hat\al}\in{\omega}_1$ then it is easily seen to hold also for ${\hat\al}+1$. Consider now a nonzero limit $\lambda\in{\omega}_1$. Given an arbitrary $x\in X$, take for every $\al<\lambda$ an $A_\al\in\cA_\al$ such that $x\in A_\al$; since (by the inductive hypothesis) (2) holds for every $\al<\lambda$, we have that for every $\al'<\al<\lambda$ there exists a $\hat A\in\cA_{\al'}$ such that $A_\al\subseteq\hat A$. This implies (as $\cA_{\al'}$, still by the inductive hypothesis, is a partition of $X$) that $A_\al\cap A={\emptyset}$ for every $A\in\cA_{\al'}{\setminus}\big\{\hat A\big\}$; thus, since $A_{\al'}\in\cA_{\al'}$ and $A_\al\cap A_{\al'}\neq{\emptyset}$ (as $x$ belongs to both sets), we must have that $A_{\al'}=\hat A$, hence $A_\al\subseteq A_{\al'}$. Therefore, since of course $\bigcap_{\al\in\lambda}A_\al\neq{\emptyset}$ (as such a set contains $x$), we have by the definition of $\cA_\lambda$ that $\bigcap_{\al\in\lambda}A_\al\in\cA_\lambda$, so that we have found an element of $\cA_\lambda$ which contains $x$. Therefore, $\cA_\lambda$ covers $X$. Since ${\emptyset}\notin\cA_\lambda$ by definition, and every element of $\cA_\lambda$ is open as a countable intersection of open sets in a $P$-space, it only remains to show that $A'\cap A''={\emptyset}$ for distinct $A',A''\in\cA_\lambda$.
Thus, suppose $A'=\bigcap_{\al\in\lambda}A'_\al$ and $A''=\bigcap_{\al\in\lambda}A''_\al$, with $A'_\al,A''_\al\in\cA_\al$ for $\al\in\lambda$, and $\al\mapsto A'_\al$ and $\al\mapsto A''_\al$ decreasing; of course, the fact that $A'\neq A''$ implies that for at least one ${\hat\al}\in\lambda$ we have that $A'_{\hat\al}\neq A''_{\hat\al}$. Since $\cA_{\hat\al}$, by the inductive hypothesis, is a partition, this implies in turn that $A'_{\hat\al}\cap A''_{\hat\al}={\emptyset}$; hence $A'\cap A''=\big(\bigcap_{\al\in\lambda}A'_\al\big)\cap\big(\bigcap_{\al\in\lambda}A''_\al \big)={\emptyset}$, too.\par Now we prove (2). Of course, for $\al=0$ there is nothing to prove. If (2) holds for every $\al$ less than a successor ordinal ${\hat\al}+1\in{\omega}_1$, and $\al^\ast<{\hat\al}+1$, then either $\al^\ast={\hat\al}$, in which case it is straightforward to realize from the general definition of $\cA_{\al+1}$ that $\cA_{{\hat\al}+1}$ is a refinement of $\cA_{\hat\al}=\cA_{\al^\ast}$, or $\al^\ast<{\hat\al}$, in which case since $\cA_{{\hat\al}+1}$ is a refinement of $\cA_{\hat\al}$ and $\cA_{\hat\al}$ (by the inductive hypothesis) is a refinement of $\cA_{\al^\ast}$ we also have that $\cA_{{\hat\al}+1}$ is a refinement of $\cA_{\al^\ast}$. 
Finally, if $\lambda\in{\omega}_1$ is a limit ordinal, and $A$ is an arbitrary element of $\cA_\lambda$, then $A=\bigcap_{\al\in\lambda}A_\al$ with $A_\al\in\cA_\al$ for every $\al<\lambda$; therefore, for every $\al\in\lambda$, $A_\al$ is an element of $\cA_\al$ which includes $A$.\par Let us also point out a further property of the partitions $\cA_\al$ which immediately follows from the general definition of $\cA_{\al+1}$, and which will play a momentous r\^ole below: \[ \fore\al{{\omega}_1}\fore A{\cA_{\al+1}}(A\cap L_\al={\emptyset}\ \vee\ A\subseteq L_\al).\eqno(\ast) \]\par Now, set $T=\fin{(A,\al)}{\al\in{\omega}_1\ \wedge\ A\in\cA_\al}$, and let $\sqsubseteq$ be the binary relation on $T$ defined by \[ (A',\al')\sqsubseteq(A'',\al'')\Longleftrightarrow(\al'\le\al''\ \wedge\ A'\supseteq A''). \] Observe that, since $\sqsubseteq$ is the restriction to $T$ of the product order of the two ordered sets $\big(\wp(X),\supseteq\big)$ and $({\omega}_1,\le)$, we have for general reasons that $\sqsubseteq$ is a (partial) order on $T$. We will further prove the following: \vskip0.2cm\noi {\bf Fact.} $(T,\sqsubseteq)$ is a tree of height ${\omega}_1$, and for every $\al\in{\omega}_1$ we have that $\text{Lev}_\al(T)=\bfin{(A,\al)}{A\in\cA_\al}$.\par\noi {\bf Proof.} Let $(\hat A,{\hat\al})$ be an arbitrary element of $T$ (so that $\hat A\in \cA_{\hat\al}$). By property (2), we have that for every $\al<{\hat\al}$ there exists an $A_\al\in\cA_\al$ with $\hat A\subseteq A_\al$; since each $\cA_\al$ is a partition and $\hat A$, as an element of $\cA_{\hat\al}$, is nonempty, it is apparent that for every $\al<{\hat\al}$ the set $A_\al$ is the only element of $\cA_\al$ including $\hat A$.
Then it easily follows that $\bfin{t\in T}{t\sqsubset(\hat A,{\hat\al})}=\fin{(A_\al,\al)}{\al<{\hat\al}}$; moreover, by an argument similar to one of those used to prove property (1) when $\al$ is a limit ordinal $\lambda$, we may show that for every $\al',\al''$ with $\al'<\al''<{\hat\al}$ we have that $A_{\al''}\subseteq A_{\al'}$ (actually, we know that $A_{\al''}$ must be included in some element $A'$ of $\cA_{\al'}$, and since $\cA_{\al'}$ is a partition and $A_{\al''}\cap A_{\al'}\supseteq\hat A\neq{\emptyset}$, we conclude that $A'$ necessarily coincides with $A_{\al'}$). Therefore, the above-considered set $\bfin{t\in T}{t\sqsubset(\hat A,{\hat\al})}=\fin{(A_\al,\al)}{\al<{\hat\al}}$, endowed with the order induced by $\sqsubseteq$, is similar to the ordinal ${\hat\al}$, and this proves at the same time that $(T,\sqsubseteq)$ is a tree and that $(\hat A,{\hat\al})\in\text{Lev}_{\hat\al}(T)$. On the other hand, every element of $\text{Lev}_{\hat\al}(T)$ must have the second component equal to ${\hat\al}$---as for each $(A,\al)\in T$ with $\al\neq{\hat\al}$ we may prove as above that it belongs to the set $\text{Lev}_\al(T)$, which is disjoint from $\text{Lev}_{\hat\al}(T)$. Therefore $\text{Lev}_{\hat\al}(T)=\bfin{(A,{\hat\al})}{A\in\cA_{\hat\al}}$, and this holds for each ${\hat\al}\in{\omega}_1$. Of course, ${\omega}_1$ turns out to be the height of $(T,\sqsubseteq)$. \vskip0.2cm Now we finish the proof of the proposition. First of all, observe that $\big\vert\cA_\al\big\vert\le{\omega}_1$ for every $\al\in{\omega}_1$; this is an elementary consequence of the fact that each $\cA_\al$ is an open partition of $X$ consisting of nonempty sets, and that $d(X)\le w(X)\le{\omega}_1$. 
Since $\big\vert\text{Lev}_\al(T)\big\vert=\big\vert\bfin{(A,\al)}{A\in\cA_\al}\big\vert=\big\vert\cA_\al\big\vert\le{\omega}_1$ for every $\al\in{\omega}_1$, to show that $(T,\sqsubseteq)$ is a weak Kurepa tree we only have to prove that it has at least ${\omega}_2$ branches.\par For every $x\in X$ and $\al\in{\omega}_1$, denote by $A_{x,\al}$ the only element of $\cA_\al$ containing $x$; notice that, since $x\in A_{x,\al'}\cap A_{x,\al''}\neq{\emptyset}$ for every $\al',\al''\in{\omega}_1$, we may prove as before that $A_{x,\al''}\subseteq A_{x,\al'}$ for $\al'<\al''<{\omega}_1$. Let, for every $x\in X$, $\Pi_x=\bfin{(A_{x,\al},\al)}{\al\in{\omega}_1}$. Then $(A_{x,\al'},\al')\sqsubset(A_{x,\al''},\al'')$ for $\al'<\al''<{\omega}_1$, so that $\Pi_x$ is a chain in $T$ intersecting all levels, i.e., $\Pi_x$ is a branch. Therefore, if we can prove that the association $x\mapsto \Pi_x$ is one-to-one, it will follow that in $T$ there are at least ${\omega}_2$ branches. Indeed, suppose that ${\bar x},{\bar y}$ are arbitrary distinct elements in $X$; since $\cL$ is a base for $X$, which is T$_1$, we may consider an ${\hat\al}\in{\omega}_1$ such that ${\bar x}\in L_{\hat\al}$ and ${\bar y}\notin L_{\hat\al}$. Observe that, from ${\bar x}\in A_{{\bar x},{\hat\al}+1}$, we have the inequality $L_{{\hat\al}}\cap A_{{\bar x},{\hat\al}+1}\neq{\emptyset}$, and this implies by $(\ast)$ that $A_{{\bar x},{\hat\al}+1}\subseteq L_{\hat\al}$; thus ${\bar y}\notin A_{{\bar x},{\hat\al}+1}$ (as ${\bar y}\notin L_{\hat\al}$), while of course ${\bar y}\in A_{{\bar y},{\hat\al}+1}$. Then it follows that $A_{{\bar x},{\hat\al}+1}\neq A_{{\bar y},{\hat\al}+1}$, and since $(A_{{\bar y},{\hat\al}+1},{\hat\al}+1)$ is the only element of $\Pi_{\bar y}$ having the second component equal to ${\hat\al}+1$, we conclude that $(A_{{\bar x},{\hat\al}+1},{\hat\al}+1)\notin \Pi_{\bar y}$. 
Therefore $\Pi_{\bar x}\neq\Pi_{\bar y}$ (as the former set contains $(A_{{\bar x},{\hat\al}+1},{\hat\al}+1)$ while the latter does not). \end{proof} We have looked hard for \emph{small} non weakly Whyburn $P$-spaces in ZFC, but all we have are partial results. Here is another way to construct such an example, provided that the following innocent-looking question can be answered in the positive. \begin{Quest} Does there exist in ZFC a dense-in-itself $P$-space of cardinality $\aleph_1$ every relatively discrete subset of which is closed? \end{Quest} Let $X$ be such a space and $Y=\omega_1\cup\{p\}$ be the one-point Lindel\"ofication of the discrete space $\omega_1$ and put $Z=X\times Y$. Fix an injective mapping $f:X\to \omega_1$ and let $A=\{(x,f(x)): x \in X\}\subseteq Z$. It is easy to realize that $A$ is not closed in $Z$. If for some $B \subseteq A$ we have $\overline B\setminus A\neq{\emptyset}$, then $(x,p)\in \overline B$ for some $x\in X$. Without any loss of generality, we may assume $x \notin \pi_X(B)$. Since the set $\pi_X(B)$ is not closed, it cannot be discrete so we may fix some $z\in\pi_X(B)$ which is not isolated in $\pi_X(B)$. It is not difficult to see that $(z,p) \in \overline{B}$, so $Z$ is not weakly Whyburn. Before going on to discuss Lindel\"of spaces we would like to recall another problem from \cite{PTTW} about which we know very little. Consider the space $\beta \omega$; under CH this space has character $\omega_1$ so its $\omega$-modification is even Whyburn, but what happens in a model where CH fails? \begin{Quest} (\cite{PTTW}) Is the $\omega$-modification of $\beta \omega$ weakly Whyburn? \end{Quest} Problem 3.5 in \cite{PTTW} asks whether every (regular Lindel\"of) $P$-space is weakly Whyburn. Koszmider and Tall constructed (\cite{KT}) a model of ZFC+CH where there exists a regular Lindel\"of $P$-space of cardinality $\aleph_2$ without Lindel\"of subspaces of size $\omega_1$.
Such a space cannot be weakly Whyburn, because of the following easy fact. \begin{Prop} Every Lindel\"of weakly Whyburn $P$-space $X$ of uncountable cardinality has a Lindel\"of subspace of cardinality $\omega_1$. \end{Prop} \begin{proof} Pick a set $A \subseteq X$ with $\vert A\vert=\omega_1$. If $A$ is closed, then we are done. Otherwise, there is some almost closed $B\subseteq A$ converging outside it. Since in a $P$-space countable sets are closed, the set $B$ must have cardinality $\omega_1$. Thus $\overline{B}$ is a Lindel\"of subspace of $X$ of cardinality $\omega_1$. \end{proof} At least consistently, the assumption that $X$ is a $P$-space can be dropped in the above proposition. \begin{Prop}[{[{\sf CH}]}] Every Lindel\"of weakly Whyburn space $X$ of uncountable cardinality has a Lindel\"of subspace of cardinality $\omega_1$. \end{Prop} \begin{proof} We will show that $X$ has a closed subset of size $\omega_1$. Suppose that it is not so and let $A \subseteq X$ be a set of size $\omega_1$. Then every almost closed set converging outside $A$ is countable, so that the Whyburn-closure of $A$ has cardinality at most $\omega_1^\omega=\omega_1$. Let $A^\alpha$ be the $\alpha$-iterate of the Whyburn closure of $A$. Then $B=\bigcup_{\alpha \in \omega_1} A^\alpha$ has cardinality $\omega_1$; if this last set were non-closed then we could find some almost closed $P$ converging outside it. Thus, $P$ would be countable and, by the regularity of $\omega_1$, there would be some $\beta \in \omega_1$ such that $P \subseteq \bigcup_{\alpha\in\beta}A^\alpha$. Hence the limit of $P$ would be inside $B$, which is a contradiction. \end{proof} As a corollary we get a proposition established in \cite{BT} with the aid of elementary submodels: \begin{Th}[{[{\sf CH}]}] (\cite{BT}) Every Hausdorff Lindel\"of sequential space of uncountable cardinality has a Lindel\"of subspace of cardinality $\omega_1$.
\end{Th} \begin{Quest} Is it true in ZFC that every Lindel\"of weakly Whyburn space of uncountable cardinality has a Lindel\"of subspace of cardinality $\omega_1$? \end{Quest} This is known to be true in the special case of regular Lindel\"of scattered spaces by a result of Ofelia T. Alas (see \cite[Theorem~10]{BT}). It is known (see \cite{B}) that compact weakly Whyburn spaces are pseudoradial. We don't know whether the same holds for Lindel\"of weakly Whyburn $P$-spaces, but we prove the following partial result. \begin{Th} Every regular Lindel\"of weakly Whyburn $P$-space of pseudocharacter less than $\aleph_\omega$ is pseudoradial. \end{Th} \begin{proof} Let $A \subseteq X$ be a non-closed set and $B \subseteq A$ such that $\overline{B} \setminus A=\{x\}$ for some $x \in X$. Every regular $P$-space is zero-dimensional, so let $\mathcal{U}=\{U_\alpha: \alpha < \kappa \}$ be a family of clopen sets in $\overline{B}$, such that $\bigcap_{\alpha \in \kappa} U_\alpha=\{x\}$, and having the minimal cardinality among all families with the previous properties. Since we are in a $P$-space and $x$ is an accumulation point of $B$, the cardinal $\kappa$ must be uncountable. Note that, by minimality, $\kappa$ does not exceed the pseudocharacter of $x$ in $X$; hence $\kappa<\aleph_\omega$ and, being uncountable, $\kappa$ is regular. Use minimality of $\mathcal{U}$ to pick $x_\gamma \in \bigcap_{\alpha <\gamma} U_\alpha \setminus \{x\}$ for each $\gamma \in \kappa$. Since $\overline{B}$ is Lindel\"of and $\kappa$ is a regular uncountable cardinal, the sequence $\{x_\gamma : \gamma<\kappa\} \subseteq A$ converges to $x$. \end{proof} \begin{Quest} Does there exist a Lindel\"of weakly Whyburn non-pseudoradial $P$-space? \end{Quest} A negative answer to the above question would exhibit an interesting pathology in the class of Lindel\"of $P$-spaces. We now suggest a possible way to look for a counterexample.
Kunen has constructed a compact space $X$ and a point $p \in X$ with the following properties: \begin{enumerate} \item{$\chi(p, X)>\omega$.} \item{No sequence of uncountable regular length converges to $p$ in $X$.} \end{enumerate} If a compact \emph{scattered} space $Y$ having the above properties exists, then it is easy to construct a weakly Whyburn non-pseudoradial Lindel\"of $P$-space. Indeed, since $Y$ is compact scattered, its $\omega$-modification $Y_\delta$ is a Lindel\"of $P$-space (see \cite[Lemma~II.7.14]{A}). Since $Y_\delta$ is regular and scattered, it is weakly Whyburn. By (2), no sequence will converge to $p$ in $Y_\delta$. (Since character equals pseudocharacter in compact spaces, (1) guarantees that $p$ is not isolated in $Y_\delta$.) \section{Product of weakly Whyburn $P$-spaces} The behaviour of the weakly Whyburn property under the product operation is not very clear. The first listed author has proved that the product of a compact semiradial space and a compact weakly Whyburn space is weakly Whyburn, but we are not aware of a non-weakly Whyburn product of compact weakly Whyburn spaces. We don't know whether the same holds replacing compact spaces with Lindel\"of $P$-spaces, but at least we prove that the product of a Lindel\"of weakly Whyburn $P$-space and a Lindel\"of Whyburn $P$-space is weakly Whyburn (see Corollary~\ref{uno}). Recall that a space has countable extent if each of its closed discrete subsets is countable. Every Lindel\"of space has countable extent. So the next lemma is a strengthening of Theorem 2.5 in \cite{PTTW}. \begin{Lemma} \label{lem: extent} If $X$ is a Whyburn $P$-space of countable extent, then $X$ is a radial space of radial character $\aleph_1$. \end{Lemma} \begin{proof}Assume that $A\subseteq X$ and $x \in \overline{A} \setminus A$. Fix an almost closed $P\subseteq A$ such that $x \in \overline{P}$. Take a maximal disjoint family $\gamma$ of open subsets of $P$ whose closures do not contain $x$.
Then $x \in \overline {\bigcup \gamma}$ and hence there is an almost closed $Q \subseteq \bigcup \gamma$ such that $x \in \overline{Q}$. Let $\gamma'=\{U \in \gamma: U \cap Q \neq \emptyset \}$. Since $X$ is a $P$-space, it is easy to see that $\gamma'$ is uncountable. For each $U \in \gamma'$ take an $x_U \in U \cap Q$. The set $B=\{x_U: U \in \gamma'\}$ is discrete and $(\overline{B} \setminus B) \cap (\bigcup \gamma)=\emptyset$. This means $B$ is closed in $Q$, and therefore $x$ has to be the only accumulation point of $B$ in the space $Q \cup \{x\}$. Thus $B \cup\{x\}$ is an uncountable space whose unique non-isolated point is $x$. Being of countable extent, such a space must be Lindel\"of and of cardinality $\aleph_1$. Hence $B$ can be enumerated as a sequence of length $\aleph_1$ which converges to $x$. \end{proof} \begin{Prop} A $P$-space $X$ of cardinality $\aleph_1$ and countable extent is Lindel\"of and radial. \end{Prop} \begin{proof} We first prove that $X$ is Lindel\"of. Toward a contradiction, suppose there exists an open cover $\cA$ of $X$ with no countable subcover. Of course, since $\vert X\vert={\omega}_1$, we may assume that $\vert\cA\vert={\omega}_1$. Also, since $\cA$ has no countable subcover, we can further assume (up to passing to a suitable subcover) that $\cA$ is indexed as $\fin{A_\al}{\al\in{\omega}_1}$, in such a way that \[ \forall\al\in{\omega}_1\colon A_\al{\setminus}\bigcup_{\al'<\al}A_{\al'}\neq{\emptyset}.
\] For every $\al\in{\omega}_1$, pick an $x_\al\in A_\al{\setminus}\bigcup_{\al'<\al}A_{\al'}$; we claim that the set $D=\fin{x_\al}{\al\in{\omega}_1}$ is closed and discrete, and this will contradict the fact that $X$ has countable extent---as clearly $\al\mapsto x_\al$ is one-to-one.\par To prove the discreteness of $D$, note that for every $\al\in{\omega}_1$ the set $A_\al$ is a neighbourhood of $x_\al$ missing all points $x_{\al'}$ with $\al'>\al$; also, since the set $C_\al=\fin{x_{\al'}}{\al'<\al}$ is closed (because $X$ is a $P$-space), we see that $A_\al{\setminus} C_\al$ is an open neighbourhood of $x_\al$ whose intersection with $D$ gives $\{x_\al\}$. Suppose now ${\hat x}\in X{\setminus} D$; then ${\hat x}\in A_{\hat\al}$ for some ${\hat\al}\in{\omega}_1$. Arguing essentially as before, we see that $A_{\hat\al}{\setminus}\fin{x_{\al'}}{\al'\le{\hat\al}}$ is an open neighbourhood of ${\hat x}$ missing $D$; thus $D$ is closed.\par Notice that $X$ is Hausdorff and, being a Lindel\"of $P$-space, it is also regular. To prove that $X$ is radial, due to Lemma~\ref{lem: extent} it will suffice to show that $X$ has the Whyburn property. Let $a\in X$ and $M\subseteq X$ be such that $a\in\overline M{\setminus} M$; index $X{\setminus}\{a\}$ as $\fin{x_\al}{\al\in{\omega}_1}$ and consider a decreasing ${\omega}_1$-sequence $\{V_\al\}_{\al\in{\omega}_1}$ of open neighbourhoods of $a$ such that \begin{equation}\label{val} \fore\al{{\omega}_1}\ \overline{V_\al}\cap\fin{x_{\al'}}{\al'\le\al}={\emptyset}; \end{equation} let also $y_\al$ be an element of $V_\al\cap M$ for every $\al\in{\omega}_1$, and set $L=\fin{y_\al}{\al\in{\omega}_1}$. We claim that $L$ is an almost-closed set converging to $a$.
Indeed, on the one hand if $z$ is an arbitrary element of $X{\setminus}(L\cup\{a\})$ then $z=x_{\hat\al}$ for some ${\hat\al}\in{\omega}_1$, so that $z\in X{\setminus}\overline{V_{\hat\al}}$; since $z\notin L$, it also follows that $U=X{\setminus}(\fin{y_\al}{\al<{\hat\al}}\cup\overline{V_{\hat\al}})$ is an open neighbourhood of $z$, and $U$ is disjoint from $L$ as $L=\fin{y_\al}{\al\in{\omega}_1}\subseteq\fin{y_\al}{\al<{\hat\al}}\cup\overline{V_{\hat\al}}$. On the other hand, to show that $a\in\overline L$, consider an arbitrary neighbourhood $W$ of $a$; clearly, if we can prove that $V_\al\subseteq W$ for some $\al\in{\omega}_1$, then the inequality $W\cap L\neq{\emptyset}$ will follow. Actually, consider the closed subset $T=X{\setminus} W$ of $X$, and let $\cA=\fin{X{\setminus}\overline{V_\al}}{\al\in{\omega}_1}$. We see that $\cA$ covers $T$, as every element of $T$ must be $x_\al$ for some $\al\in{\omega}_1$, and $x_\al\in X{\setminus}\overline{V_\al}$ by \eqref{val}. Since $T$ is Lindel\"of, there must exist a countable subset $A$ of ${\omega}_1$ such that $X{\setminus} W\subseteq\bigcup_{\al\in A}(X{\setminus}\overline{V_\al})=X{\setminus}\bigcap_{\al\in A}\overline{V_\al}$; using the fact that the neighbourhoods $V_\al$ are decreasing, it also follows that $X{\setminus} W\subseteq X{\setminus}\overline{V_{\hat\al}}$---where ${\hat\al}=\sup A$---whence $V_{\hat\al}\subseteq\overline{V_{\hat\al}}\subseteq W$. \end{proof} \begin{Th} \label{th: prod} The product of a pseudoradial $P$-space $X$ of radial character $\omega_1$ with a weakly Whyburn $P$-space $Y$ of countable extent is weakly Whyburn. \end{Th} \begin{proof} Let $A \subseteq X\times Y$ be a non-closed subset. Take $(x,y) \in \overline {A} \setminus A$. If $A \cap(\{x\}\times Y)$ were non-closed, we could immediately apply the weak Whyburn property of $Y$ to find a set $B\subseteq A$ satisfying $\overline{B}\setminus A=\{(x,y)\}$. So, we may assume that $A \cap(\{x\}\times Y)$ is closed.
By passing to a suitable subset, we may further assume that $A \cap(\{x\}\times Y)= \emptyset$. Since $x \in \overline {\pi_X (A)}$, it follows that $\pi_X (A)$ is not closed in $X$. So, there is a convergent sequence $\{x_\alpha : \alpha \in \omega_1\} \subseteq \pi_X (A)$ with limit outside $\pi_X (A)$. For each $\alpha \in \omega_1$ take a point $y_\alpha \in Y$ in such a way that $(x_\alpha , y_\alpha ) \in A$. \noindent Case 1: The set $\{y_\alpha :\alpha \in \omega_1\}$ is countable. There exists an uncountable subset $S\subseteq \omega_1$ such that $y_\alpha=y_\beta$ for any $\alpha ,\beta \in S$. Then, we put $B=\{(x_\alpha ,y_\alpha ) :\alpha \in S\}$. \noindent Case 2: The set $\{y_\alpha :\alpha \in \omega_1\}$ is uncountable. Since $Y$ has countable extent, we may assume---up to removing one point---that the set $\{y_\alpha :\alpha \in \omega_1\}$ is not closed. Therefore, there exists an uncountable set $S\subseteq \omega_1$ such that the set $C=\{y_\alpha :\alpha \in S\}$ is almost closed. As the set $\mathrm{cl}_Y(C)$ still has countable extent, by Lemma~\ref{lem: extent} there exists an (uncountable) set $T\subseteq S$ such that the set $\{y_\alpha :\alpha \in T\}$ is a convergent sequence. Then put $B=\{(x_\alpha, y_\alpha ) :\alpha \in T\}$. In both of the previous cases, the set $B$ is a convergent sequence which is also an almost closed subset of $A$. \end{proof} \begin{Cor}\label{uno} The product of a Whyburn Lindel\"of $P$-space with a weakly Whyburn Lindel\"of $P$-space is weakly Whyburn. \end{Cor} \begin{Quest} Is the product of two Lindel\"of weakly Whyburn $P$-spaces always weakly Whyburn? \end{Quest} Constructing a non-Whyburn product of two Whyburn Lindel\"of $P$-spaces turned out not to be so easy. Since every compact Whyburn space is Fr\'echet, the analogous problem for compact spaces is the construction of two compact Fr\'echet spaces whose product is not Fr\'echet.
This is usually tackled using the one-point compactification of Mr\'owka spaces on suitable almost disjoint families on $\omega$ (see \cite{BR} and \cite{S2}), so it was natural to try and extend that approach. We first recall that a family $\mathcal{A}$ of uncountable subsets of $\omega_1$ is said to be \emph{almost disjoint} if $|a \cap b|<\omega_1$ for all distinct $a,b \in \mathcal{A}$; it is said to be a MAD family if it is maximal with respect to this property. Define now a topology on $X=\omega_1 \cup \mathcal{A}$ in the following way. Each point of $\omega_1$ is isolated, and a basic neighbourhood of $p \in \mathcal{A}$ is $\{p\} \cup p$ minus a countable subset of $p$. The space $X$ is locally Lindel\"of, so we may take its one-point Lindel\"ofication $\mathcal{L}(\mathcal{A})$; in concrete terms, $\mathcal{L}(\mathcal{A})$ is obtained by adding a point $\infty$ to $X$, and is endowed with a topology whose restriction to $X$ coincides with the original topology, while the basic neighbourhoods of $\infty$ are all sets of the form $\{\infty\}\cup(\cA{\setminus}\cF)\cup\big(\bigcup(\cA{\setminus}\cF){\setminus} F\big)$, where $\cF\in[\cA]^{<\omega_1}$ and $F\in[\omega_1]^{<\omega_1}$. Notice that, this way, $\cL(\cA)$ turns out to be a Lindel\"of (regular) $P$-space.\par Let $\mathcal{A}$ be an almost disjoint family on $\omega_1$ and let $\mathcal{I}(\mathcal{A})$ be the ideal of those uncountable subsets of $\omega_1$ which can be {\it almost-covered} by a countable subfamily of $\mathcal{A}$ (which means that the set-theoretic difference between the former set and the union of the latter family is countable); let also $\cI^+(\cA)=[{\omega}_1]^{{\omega}_1}{\setminus}\cI(\cA)$. Furthermore, let $\mathcal{M}(\mathcal{A})$ be the set of those $X \in [\omega_1]^{\omega_1}$ such that $\mathcal{A}_X=\{X \cap p : p \in \mathcal{A},\,\text{$X\cap p$ is uncountable} \}$ is a MAD family on $X$.
We introduce the following: \begin{Def} An almost disjoint family $\mathcal{A}$ on $\omega_1$ is said to be \emph{nowhere MAD} if $\mathcal{M}(\mathcal{A}) \subseteq \mathcal{I}(\mathcal{A})$. \end{Def} Now we will link some topological features of $\mathcal{L}(\mathcal{A})$ to the combinatorial structure of $\mathcal{A}$ via the previous definition. \begin{Th} Let $\mathcal{A}$ be an almost disjoint family on $\omega_1$. The space $\mathcal{L}(\mathcal{A})$ is Whyburn if and only if $\mathcal{A}$ is nowhere MAD. \end{Th} \begin{proof} Suppose first that there is some $X \in \mathcal{M}(\mathcal{A}) \cap{\mathcal I}^+(\mathcal{A})$; then $\infty \in \overline{X}$ (where $\infty$ is the point added to $\omega_1\cup\cA$ to obtain $\cL(\cA)$). We will show that $X$ contains no almost closed set converging to $\infty$. Indeed, let $Y \subseteq X$ be such that $\infty \in \overline{Y}$; then $Y$ is clearly uncountable, hence---since the family $\mathcal{A}_X$ is MAD on $X$---there must be some $p \in \mathcal{A}$ such that $p \cap Y$ is uncountable. So $p \in \overline{Y}$ and we are done. To prove the converse, note that at each $q \in \mathcal{A}$ the Whyburn property is clearly satisfied; therefore we only need to check it at $\infty$. Suppose that $\mathcal{A}$ is nowhere MAD and pick some $X\subseteq{\omega}_1\cup\cA$ such that $\infty \in \overline{X}$. If $\infty\in\overline{X\cap\cA}$ then clearly $\overline{X\cap\cA}= (X\cap\cA)\cup\{\infty\}$, so we may restrict ourselves to the case where $X\subseteq{\omega}_1$. Now, $\infty\in\overline{X}$ implies that $X\notin\cI(\cA)$ (in particular, of course, $X$ is uncountable), so by nowhere MADness of $\mathcal{A}$ we have $X\notin\mathcal{M}(\mathcal{A})$, i.e., $\mathcal{A}_X$ is not MAD on $X$; hence we can find some uncountable $Y \subseteq X$ such that $Y \cap p$ is countable for each $p \in \mathcal{A}$. Thus $Y$ is an almost closed set such that $Y \to \infty$.
\end{proof} \begin{Lemma}\label{lem} Let $\cA$ be a MAD family on a set $X$ of cardinality $\omega_1$, and $\{\cA_1,\cA_2\}$ be a partition of $\cA$, with $\cA_1$ nowhere MAD on $X$ and $\vert\cA_1\vert\ge\omega_1$. Suppose also that to every $A\in\cA_1$ we have associated an element $M(A)$ of $[A]^{\omega_1}$. Then there exists $\tilde A\in\cA_2$ such that $\tilde A\cap\big(\bigcup_{A\in\cA_1}M(A)\big)\neq{\emptyset}$. \end{Lemma} \begin{proof} Let, for the sake of simplicity, $M^\ast=\bigcup_{A\in\cA_1}M(A)$, and notice that $M^\ast\in\cI^+(\cA_1)$. Indeed, if $\cF$ is a countable subfamily of $\cA_1$, then there exists an $\hat A\in\cA_1{\setminus}\cF$. Since $\cA_1\subseteq\cA$ and $\cA$ is almost disjoint, it follows that $\big\vert\hat A\cap\big(\bigcup\cF\big)\big\vert<\omega_1$. Therefore $M\big(\hat A\big){\setminus}\bigcup\cF$ is uncountable, whence so is $M^\ast{\setminus}\bigcup\cF$.\par Now, since $\cA_1$ is nowhere MAD, there must exist $S\in[M^\ast]^{\omega_1}$ such that the set $S\cap(A\cap M^\ast)=S\cap A$ is countable for every $A\in\cA_1$. On the other hand, since $S\in[M^\ast]^{\omega_1}\subseteq[X]^{\omega_1}$ and $\cA$ is MAD on $X$, there will be $\tA\in\cA$ with $\big\vert\tA\cap S\big\vert=\omega_1$; therefore, such $\tA$ must belong to $\cA_2$, and of course we have in particular $\tA\cap M^\ast\supseteq\tA\cap S\neq{\emptyset}$. \end{proof} \begin{Cor}\label{cor} Let, as in the previous lemma, $\cA$ be a MAD family on a set $X$ of cardinality $\omega_1$, and $\{\cA_1,\cA_2\}$ be a partition of $\cA$, with $\cA_1$ nowhere MAD on $X$ and of cardinality $\omega_1$. Then there exists $A\in\cA_1$ such that $\big\vert A\cap\big(\bigcup\cA_2\big)\big\vert=\omega_1$.
\end{Cor} \begin{proof} Toward a contradiction, assume $\big\vert A\cap\big(\bigcup\cA_2\big)\big\vert<\omega_1$ for every $A\in\cA_1$; letting, for every $A\in\cA_1$, $M(A)=A{\setminus}\big(\bigcup\cA_2\big)$, we see that each $M(A)$ is uncountable, hence by Lemma~\ref{lem} it follows that $\tA\cap\big(\bigcup_{A\in\cA_1}M(A)\big)\neq{\emptyset}$ for some $\tA\in\cA_2$. Clearly, this is a contradiction, as each $M(A)$ is disjoint from $\bigcup\cA_2$. \end{proof} \begin{Th} Let $\mathcal{A}$ be an almost disjoint family on $\omega_1$, and $\{\cA_1,\cA_2\}$ be a partition of $\cA$ into two uncountable subfamilies. If $\mathcal{A}$ is not nowhere MAD while $\cA_1$ is nowhere MAD, then $\mathcal{L}(\mathcal{A}_1) \times\mathcal{L}(\mathcal{A}_2)$ is not a Whyburn space. \end{Th} \begin{proof} Let $X \in \mathcal{M}(\mathcal{A}) \cap \mathcal{I}^+(\mathcal{A})$. Consider the diagonal $\Delta=\{(\alpha, \alpha) : \alpha \in X \}$; then $(\infty, \infty) \in \overline{\Delta}$. To verify this fact, consider a basic neighbourhood $W$ of $(\infty,\infty)$ in $\mathcal{L}(\mathcal{A}_1)\times\mathcal{L}(\mathcal{A}_2)$ of the form \[ W=\Big(\{\infty\}\cup(\cA_1{\setminus}\cF_1)\cup\big(\textstyle{\bigcup}(\cA_1{\setminus}\cF_1){\setminus} F_1\big)\Big)\times\Big(\{\infty\}\cup(\cA_2{\setminus}\cF_2)\cup\big(\textstyle{\bigcup}(\cA_2{\setminus}\cF_2){\setminus} F_2\big)\Big), \] where $\cF_i\in[\cA_i]^{<{\omega}_1}$ and $F_i\in[{\omega}_1]^{<{\omega}_1}$ for $i=1,2$; since $X\notin\cI(\cA)$, the set $X{\setminus}\big(\big(\bigcup\cF_1\big)\cup\big(\bigcup\cF_2\big)\cup F_1\cup F_2\big)$ must be nonempty, and taking any $\al$ in it we easily see that $(\al,\al)\in\Delta\cap W$.\par Now, let $B=\{(\alpha, \alpha) : \alpha \in Y \} \subseteq \Delta$ be such that $(\infty, \infty) \in \overline{B}$, with $Y \subseteq X$. 
Then $Y$ must be uncountable, and the family $\cA_Y=\fin{A\cap Y}{A\in\cA,\,\vert A\cap Y\vert=\omega_1}$ is MAD on $Y$ (as the restriction of $\cA$ to $X$ is MAD on $X$), while the family $\cA_1'=\fin{A\cap Y}{A\in\cA_1,\,\vert A\cap Y\vert=\omega_1}$ is nowhere MAD on $Y$ (as $\cA_1$ is nowhere MAD on $\omega_1$). Letting also $\cA_2'=\fin{A\cap Y}{A\in\cA_2,\,\vert A\cap Y\vert=\omega_1}$, we see that $\{\cA_1',\cA_2'\}$ is a partition of $\cA_Y$, so that by Corollary~\ref{cor} there exists $A^\ast\in\cA_1'$ such that $\big\vert A^\ast\cap\big(\bigcup\cA_2'\big)\big\vert=\omega_1$. Let $p\in\cA$ be such that $A^\ast=p\cap Y$; we claim that $(p,\infty)\in\overline B$.\par Indeed, consider a basic neighbourhood of $(p,\infty)$ of the form \[ \big(\{p\}\cup(p{\setminus} F_1)\big)\times\Big(\{\infty\}\cup(\cA_2{\setminus}\cF_2)\cup\big(\textstyle{\bigcup}(\cA_2{\setminus}\cF_2){\setminus} F_2\big)\Big), \] with $\cF_2\in[\cA_2]^{<\omega_1}$ and $F_1,F_2\in[\omega_1]^{<\omega_1}$. Then from $\big\vert A^\ast\cap\big(\bigcup\cA_2'\big)\big\vert=\omega_1$ it follows that $\big\vert p\cap\big(\bigcup\cA_2\big)\cap Y\big\vert=\omega_1$; moreover, since $\vert A\cap p\vert<\omega_1$ for every $A\in\cF_2$, it also follows that $\big\vert p\cap\big(\bigcup(\cA_2{\setminus}\cF_2)\big)\cap Y\big\vert=\omega_1$, and finally that $\big\vert (p{\setminus} F_1)\cap\big(\bigcup(\cA_2{\setminus}\cF_2){\setminus} F_2\big)\cap Y\big\vert=\omega_1$. Then taking ${\hat\al}\in(p{\setminus} F_1)\cap\big(\bigcup(\cA_2{\setminus}\cF_2){\setminus} F_2\big)\cap Y$, we see that \[ ({\hat\al},{\hat\al})\in\Big(\big(\{p\}\cup(p{\setminus} F_1)\big)\times\Big(\{\infty\}\cup(\cA_2{\setminus}\cF_2)\cup\big(\textstyle{\bigcup}(\cA_2{\setminus}\cF_2){\setminus} F_2\big)\Big)\Big)\cap B. \] Hence $(p,\infty)\in\overline B{\setminus}\big(B\cup\{(\infty,\infty)\}\big)$; since $B$ was an arbitrary subset of $\Delta$ with $(\infty,\infty)\in\overline B$, no such $B$ is almost closed, and the product is not Whyburn. \end{proof} Thus to find a non-Whyburn product of Lindel\"of Whyburn $P$-spaces it suffices to construct a MAD family on $\omega_1$ which can be split into two nowhere MAD families.
This can be done under a proper set-theoretic assumption, by a well-known line of reasoning (see \cite{BR}). \begin{Th}[{[$2^{\aleph_1}=\aleph_2$]}] There is a MAD family on $\omega_1$ which can be split into two nowhere MAD subfamilies. \end{Th} \begin{proof} Let $\mathcal{A}=\{A_\alpha : \alpha<\omega_2\}$ be an enumeration of $[{\omega}_1]^{{\omega}_1}$. We will construct almost disjoint families $\{\mathcal{S}_\alpha: \alpha < \omega_2\}$ of uncountable subsets of $\omega_1$ with the property that for each $\beta \leq \alpha$ either $A_\beta$ is covered by a countable subcollection of $\mathcal{S}_\alpha$ or there is some $S \in\mathcal{S}_\alpha$ such that $S \subseteq A_\beta$. Let $\mathcal{S}_0'$ be any partition of $\omega_1$ into $\omega_1$ many uncountable sets. If $A_0$ is covered by a countable subfamily of $\mathcal{S}_0'$, let $\mathcal{S}_0=\mathcal{S}_0'$; else let $\mathcal{S}_0'=\{T_\alpha: \alpha \in \omega_1\}$ be an enumeration; then for each $\beta \in \omega_1$ the set $A_0 \setminus \bigcup_{\alpha \leq \beta} T_\alpha$ is uncountable. Pick $a_0 \in A_0 \setminus T_0$ and for each $\beta \neq 0$ pick $$a_\beta \in A_0 \setminus\Big(\bigcup_{\alpha \leq \beta} T_\alpha \cup \{a_\alpha : \alpha < \beta\}\Big).$$ Let $S_0=\{a_\alpha : \alpha \in \omega_1\}$ and set $\mathcal{S}_0=\mathcal{S}_0' \cup \{S_0\}$. Clearly, $S_0\subseteq A_0$ and $\cS_0$ is almost disjoint.\par If $\mathcal{S}_\beta$ is chosen for each $\beta<\alpha$, let $\mathcal{S}_\alpha'=\bigcup_{\beta<\alpha} \mathcal{S}_\beta$. If $A_\alpha$ is covered by a countable subfamily of $\mathcal{S}_\alpha'$, let $\mathcal{S}_\alpha=\mathcal{S}_\alpha'$; otherwise let $\mathcal{S}_\alpha'=\{T^{\alpha}_\gamma : \gamma \in \omega_1\}$ be an enumeration; then, for each $\beta \in \omega_1$ the set $A_\alpha \setminus \bigcup_{\gamma \leq \beta} T^{\alpha}_\gamma$ is uncountable.
Pick $a_0^\alpha \in A_\alpha \setminus T_0^\alpha$ and for each $\beta \neq 0$ pick $$a_\beta^\alpha \in A_\alpha \setminus\Big(\bigcup_{\gamma \leq \beta} T_\gamma^\alpha \cup \{a_\gamma^\alpha : \gamma<\beta\}\Big).$$ Let $S_\alpha=\{a_\gamma^\alpha : \gamma \in \omega_1\}$ and let $\mathcal{S}_\al=\mathcal{S}_\alpha' \cup \{S_\alpha\}$. It is clear that each $\mathcal{S}_\alpha$ has the property stated at the beginning and is almost disjoint; note also that $\mathcal{S}_\beta \subseteq\mathcal{S}_\alpha$ whenever $\beta<\alpha$. Put $\mathcal{S}'=\bigcup_{\alpha \in \omega_2} \mathcal{S}_\alpha$; since $\mathcal{S}'$ is almost disjoint there is some MAD family $\mathcal{S}$ such that $\mathcal{S}' \subseteq \mathcal{S}$. The family $\mathcal{S}$ inherits from $\mathcal{S}'$ the property that if $X \subseteq \omega_1$ then either $X$ is covered by a countable subfamily of $\mathcal{S}$ or there is some $S \in\cS$ such that $S \subseteq X$. For each $S \in \mathcal{S}$ let $\{S^+,S^\ast\}$ be a partition of $S$ into two sets of cardinality ${\omega}_1$; then set $\mathcal{S}^\ast=\{S^\ast : S \in \mathcal{S}\}$ and let $\mathcal{S}^+=\{S^+ : S \in \mathcal{S}\}$. We see that $\mathcal{T}=\mathcal{S}^\ast \cup \mathcal{S}^+$ is a MAD family on $\omega_1$ with the property that if $X \subseteq \omega_1$ is not covered by a countable subfamily of $\mathcal{T}$ then there are disjoint $S_1 \in \mathcal{S}^\ast$ and $S_2 \in \mathcal{S}^+$ with $S_1 \cup S_2 \subseteq X$. This clearly implies that $\mathcal{S}^\ast$ and $\mathcal{S}^+$ are nowhere MAD. \end{proof} \begin{Cor}[{[$2^{\aleph_1}=\aleph_2$]}] There exist two Lindel\"of Whyburn $P$-spaces whose product is not Whyburn. \end{Cor} \begin{Quest} Does there exist in ZFC a non-Whyburn product of Lindel\"of Whyburn $P$-spaces? \end{Quest} \end{document}
\begin{document} \title{Semantical conditions for the definability of functions and relations} \author{M. Campercholi, D. Vaggione} \maketitle \begin{abstract} Let $\mathcal{L}\subseteq \mathcal{L}^{\prime }$ be first order languages, let $R\in \mathcal{L}^{\prime }-\mathcal{L}$ be a relation symbol, and let $ \mathcal{K}$ be a class of $\mathcal{L}^{\prime }$-structures. In this paper we present semantical conditions equivalent to the existence of an $\mathcal{ L}$-formula $\varphi \left( \vec{x}\right) $ such that $\mathcal{K}\vDash \varphi (\vec{x})\leftrightarrow R(\vec{x})$, and $\varphi $ has a specific syntactical form (e.g., quantifier free, positive and quantifier free, existential Horn, etc.). For each of these definability results for relations we also present an analogous version for the definability of functions. Several applications to natural definability questions in universal algebra have been included; most notably definability of principal congruences. The paper concludes with a look at term-interpolation in classes of structures with the same techniques used for definability. Here we obtain generalizations of two classical term-interpolation results: Pixley's theorem \cite{Pixley} for quasiprimal algebras, and the Baker-Pixley Theorem \cite{ba-pi} for finite algebras with a majority term. \end{abstract} \section*{Introduction} Let $\mathcal{L}$ be a first order language and $\mathcal{K}$ a class of $ \mathcal{L}$-structures. If $R\in \mathcal{L}$ is an $n$-ary relation symbol, we say that a formula $\varphi (\vec{x})$ \emph{defines} $R$ \emph{in } $\mathcal{K}$ if \begin{equation*} \mathcal{K}\vDash \varphi (\vec{x})\leftrightarrow R(\vec{x})\text{.} \end{equation*} Let $f_{1},\ldots ,f_{m}\in \mathcal{L}$ be $n$-ary function symbols.
Given an $\mathcal{L}$-structure $\mathbf{A}$, let $\vec{f}^{\mathbf{A} }:A^{n}\rightarrow A^{m}$ be the function defined by \begin{equation*} \vec{f}^{\mathbf{A}}\left( \vec{a}\right) =(f_{1}^{\mathbf{A}}(\vec{a} ),\ldots ,f_{m}^{\mathbf{A}}(\vec{a}))\text{.} \end{equation*} We say that a formula $\varphi (\vec{x},\vec{z})$ \emph{defines} $\vec{f}$ \emph{in} $\mathcal{K}$ if \begin{equation*} \mathcal{K}\vDash \varphi (\vec{x},\vec{z})\leftrightarrow \vec{f}(\vec{x})= \vec{z}\text{.} \end{equation*} Let $\mathcal{L}\subseteq \mathcal{L}^{\prime }$ be first order languages, let $R\in \mathcal{L}^{\prime }-\mathcal{L}$ be an $n$-ary relation symbol (resp. $f_{1},\ldots ,f_{m}\in \mathcal{L}^{\prime }-\mathcal{L}$ be $n$-ary function symbols), and let $\mathcal{K}$ be a class of $\mathcal{L}^{\prime } $-structures. Let $S$ be any of the following sets: \begin{itemize} \item $\{$finite conjunctions of atomic $\mathcal{L}$-formulas$\}$, \item $\{$positive open $\mathcal{L}$-formulas$\}$, \item $\{$open Horn $\mathcal{L}$-formulas$\}$, \item $\{$open $\mathcal{L}$-formulas$\}$, \item $\{$primitive positive $\mathcal{L}$-formulas$\}$, \item $\{$existential positive $\mathcal{L}$-formulas$\}$, \item $\{$existential Horn $\mathcal{L}$-formulas$\}$, \item $\{$existential $\mathcal{L}$-formulas$\}$. \end{itemize} In this paper we give semantical conditions characterizing when $R$ (resp. $ \vec{f}$) is definable in $\mathcal{K}$ by a formula of $S$. The results obtained provide a natural and unified way to handle familiar questions on definability of functions and relations in classes of structures. Being able to look at a great variety of definability questions within the same framework allows for a deeper understanding of the definability phenomena in general. Evidence of this is our finding of several new results on definability of principal congruences, and the generalizations of the Baker-Pixley Theorem \cite{ba-pi} and of the Pixley Theorem \cite{Pixley}. 
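For a concrete instance of the definability notion just introduced (an illustrative example, not taken from the paper): let $\mathcal{L}=\{\cdot\}$, let $\mathcal{L}^{\prime }=\mathcal{L}\cup \{R\}$ with $R$ a binary relation symbol, and let $\mathcal{K}=\{(\mathbb{N},\cdot ,\mid )\}$, where $R$ is interpreted as divisibility. Then the existential $\mathcal{L}$-formula

```latex
\begin{equation*}
\varphi (x,y)\;=\;\exists z\,(x\cdot z=y)
\end{equation*}
```

defines $R$ in $\mathcal{K}$, since $x\mid y$ holds in $\mathbb{N}$ exactly when $y=x\cdot z$ for some $z$.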
The applications throughout the paper provide a good sample of how the results are put to work, in some cases providing direct proofs of known facts and in others discovering new theorems. In Section 2 we study the definability of functions and relations by (positive) open formulas. We give some immediate applications to definability of relative principal congruences in quasivarieties (Proposition \ref{aplicacion a dpc} and Corollary \ref{aplicacion a dpc 1}). In Section 3 we study the definability by open Horn formulas. In Section 4 we focus on the definability by conjunctions of atomic formulas. We give some consequences on definability of principal congruences and the Fraser-Horn property (Proposition \ref{aplicacion a dpc 2} and Corollary \ref {aplicacion a dpc 3}). We also apply the characterizations to give new natural proofs of two results on the translation of (positive) open formulas to conjunctions of equations (Proposition \ref{traduccion} and Corollary \ref {traduccion para simples}). In Section 5 we address definability by existential formulas. Subsection 5.1 is concerned with primitive positive functions. As an application we characterize primitive positive functions in Stone algebras and De Morgan algebras. Section 6 is devoted to term interpolation of functions in classes. First we apply the previous results to characterize when a function is term valued by cases in a class (Theorems \ref{term valued with open cases} and \ref{term valued with positive cases} ). We use this to give generalizations of Pixley's theorem \cite{Pixley} characterizing quasiprimal algebras as those finite algebras in which every function preserving the inner isomorphisms is a term function (Theorems \ref {generalizacion imp no trivial de pixley} and \ref{Pixley}). 
We conclude the section giving two generalizations of the Baker-Pixley Theorem \cite{ba-pi} on the existence of terms representing functions in finite algebras with a majority term (Theorems \ref{Baker-Pixley para clases caso localmente finitas} and \ref{Baker-Pixley para aritmeticas}). Even though most results in the paper are true in more general contexts (via the same ideas), we have preferred to write the results in a more concise manner. The intention is to provide the non-specialist a more accessible presentation, with the hope that he/she can find further natural applications in universal algebra. \section{Notation} As usual, $\mathbb{I}(\mathcal{K})$, $\mathbb{S}(\mathcal{K})$, $\mathbb{P}( \mathcal{K})$ and $\mathbb{P}_{u}(\mathcal{K})$ denote the classes of isomorphic images, substructures, direct products and ultraproducts of elements of $\mathcal{K}$. We write $\mathbb{P}_{\mathrm{fin}}(\mathcal{K})$ to denote the class $\{\mathbf{A}_{1}\times \ldots \times \mathbf{A} _{n}:n\geq 1$ and each $\mathbf{A}_{i}\in \mathcal{K}\}$. For a class of algebras $\mathcal{K}$ let $\mathbb{Q}(\mathcal{K})$ (resp. $\mathbb{V}( \mathcal{K})$) denote the \textit{quasivariety }(resp. \textit{variety}) \textit{generated by }$\mathcal{K}$. If $\mathcal{L}\subseteq \mathcal{L} ^{\prime }$ are first order languages, for an $\mathcal{L}^{\prime }$ -structure $\mathbf{A}$ we use $\mathbf{A}_{\mathcal{L}}$ to denote the reduct of $\mathbf{A}$ to the language $\mathcal{L}$. If $\mathbf{A},\mathbf{ B}$ are $\mathcal{L}$-structures, we write $\mathbf{A}\leq \mathbf{B}$ to express that $\mathbf{A}$ is a substructure of $\mathbf{B}$. Let $S_{1},\ldots ,S_{k}$ be non-empty sets and let $n\in \mathbb{N}$. For $i=1,\ldots ,k $, let $f_{i}:S_{i}^{n}\rightarrow S_{i}$ be an $n$-ary operation on $ S_{i}$.
We use $f_{1}\times \ldots \times f_{k}$ to denote the function $ f_{1}\times \ldots \times f_{k}:(S_{1}\times \ldots \times S_{k})^{n}\rightarrow S_{1}\times \ldots \times S_{k}$ given by \begin{equation*} f_{1}\times \ldots \times f_{k}((a_{1}^{1},\ldots ,a_{k}^{1}),\ldots ,(a_{1}^{n},\ldots ,a_{k}^{n}))=(f_{1}(a_{1}^{1},\ldots ,a_{1}^{n}),\ldots ,f_{k}(a_{k}^{1},\ldots ,a_{k}^{n}))\text{.} \end{equation*} Also, if $R_{i}\subseteq S_{i}^{n}$ are $n$-ary relations on $S_{i}$, then we write $R_{1}\times \ldots \times R_{k}$ to denote the $n$-ary relation given by \begin{equation*} R_{1}\times \ldots \times R_{k}=\{((a_{1}^{1},\ldots ,a_{k}^{1}),\ldots ,(a_{1}^{n},\ldots ,a_{k}^{n})):(a_{i}^{1},\ldots ,a_{i}^{n})\in R_{i},\ \ i=1,\ldots ,k\}\text{.} \end{equation*} We observe that if $\mathbf{S}_{1},\ldots ,\mathbf{S}_{k}$ are $\mathcal{L}$ -structures and $f\in \mathcal{L}$ is an $n$-ary operation symbol, then $f^{ \mathbf{S}_{1}}\times \ldots \times f^{\mathbf{S}_{k}}=f^{\mathbf{S} _{1}\times \ldots \times \mathbf{S}_{k}}$. Also, if $R\in \mathcal{L}$ is an $n$-ary relation symbol, then $R^{\mathbf{S}_{1}}\times \ldots \times R^{ \mathbf{S}_{k}}=R^{\mathbf{S}_{1}\times \ldots \times \mathbf{S}_{k}}$. For a quasivariety $\mathcal{Q}$ and $\mathbf{A}\in \mathcal{Q}$, we use $ \mathrm{Con}_{\mathcal{Q}}(\mathbf{A})$ to denote the lattice of relative congruences of $\mathbf{A}$. If $a,b\in A$, with $\mathbf{A}\in \mathcal{Q}$, let $\theta _{\mathcal{Q}}^{\mathbf{A}}(a,b)$ denote the \emph{relative principal congruence} generated by $(a,b)$. When $\mathcal{Q}$ is a variety we drop the subscript and just write $\theta ^{\mathbf{A}}(a,b)$.
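The identity $f^{\mathbf{S}_{1}}\times \ldots \times f^{\mathbf{S}_{k}}=f^{\mathbf{S}_{1}\times \ldots \times \mathbf{S}_{k}}$ noted above can be checked on a small example. The following sketch (an illustration only, not part of the development) verifies it for the join operation on the product of two finite chains, computing the product's join independently as the least upper bound in the coordinatewise order.

```python
from itertools import product

# Product S1 x S2 of the chains S1 = {0,1} and S2 = {0,1,2},
# ordered coordinatewise.
S1, S2 = (0, 1), (0, 1, 2)
P = list(product(S1, S2))  # universe of the product structure

def leq(a, b):
    """Coordinatewise order on the product."""
    return a[0] <= b[0] and a[1] <= b[1]

def join(a, b):
    """Least upper bound of a, b in the product order, found by search."""
    ubs = [c for c in P if leq(a, c) and leq(b, c)]
    lubs = [c for c in ubs if all(leq(c, d) for d in ubs)]
    assert len(lubs) == 1  # the product of two lattices is a lattice
    return lubs[0]

# max^{S1} x max^{S2} = max^{S1 x S2}: the join of the product structure
# coincides with the componentwise joins on every pair of elements.
for a, b in product(P, repeat=2):
    assert join(a, b) == (max(a[0], b[0]), max(a[1], b[1]))
```
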
The quasivariety $\mathcal{Q}$ has \emph{definable relative principal congruences }if there exists a first order formula $\varphi (x,y,z,w)$ in the language of $\mathcal{Q}$ such that \begin{equation*} \theta _{\mathcal{Q}}^{\mathbf{A}}(a,b)=\{(c,d):\mathbf{A}\vDash \varphi (a,b,c,d)\} \end{equation*} for any $a,b\in A$, $\mathbf{A}\in \mathcal{Q}$. The quasivariety $\mathcal{Q }$ has the \emph{relative congruence extension property }if for every $ \mathbf{A}\leq \mathbf{B}\in \mathcal{Q}$ and $\theta \in \mathrm{Con}_{ \mathcal{Q}}(\mathbf{A})$ there is a $\delta \in \mathrm{Con}_{\mathcal{Q}}( \mathbf{B})$ such that $\theta =\delta \cap A^{2}$. Let $\mathcal{Q}_{RFSI}$ (resp. $\mathcal{Q}_{RS}$) denote the class of relative finitely subdirectly irreducible (resp. simple) members of $ \mathcal{Q}$. When $\mathcal{Q}$ is a variety, we write $\mathcal{Q}_{FSI}$ in place of $\mathcal{Q}_{RFSI}$, $\mathrm{Con}(\mathbf{A})$ in place of $ \mathrm{Con}_{\mathcal{Q}}(\mathbf{A})$, etc. Let \begin{equation*} \begin{array}{l} \mathrm{At}(\mathcal{L})=\{\text{atomic}\mathrm{\ }\mathcal{L}\text{-formulas }\}\text{,} \\ \mathrm{\pm At}(\mathcal{L})=\mathrm{At}(\mathcal{L})\cup \{\lnot \alpha :\alpha \in \mathrm{At}(\mathcal{L})\}\text{,} \\ \mathrm{Op}(\mathcal{L})=\{\varphi :\varphi \ \text{is\ an\ open}\mathrm{\ } \mathcal{L}\text{-formula}\}\text{,} \\ \mathrm{OpHorn}(\mathcal{L})=\{\varphi :\varphi \ \text{is\ an\ open\ Horn} \mathrm{\ }\mathcal{L}\text{-formula}\}\text{.} \end{array} \end{equation*} If $S$ is a set of formulas, we define \begin{equation*} \begin{array}{l} \left[ \bigwedge S\right] =\{\varphi _{1}\wedge \ldots \wedge \varphi _{n}:\varphi _{1},\ldots ,\varphi _{n}\in S,\ n\geq 1\}\text{,} \\ \left[ \bigvee S\right] =\{\varphi _{1}\vee \ldots \vee \varphi _{n}:\varphi _{1},\ldots ,\varphi _{n}\in S,\ n\geq 1\}\text{,} \\ \left[ \forall S\right] =\{\forall x_{1}\ldots \forall x_{n}\varphi :\varphi \in S,\ n\geq 0\}\text{,} \\ \left[ \exists S\right]
=\{\exists x_{1}\ldots \exists x_{n}\varphi :\varphi \in S,\ n\geq 0\}\text{.} \end{array} \end{equation*} \section{Definability by (positive) open formulas} \begin{theorem} \label{(positive) open para relaciones}Let $\mathcal{L}\subseteq \mathcal{L} ^{\prime }$ be first order languages and let $R\in \mathcal{L}^{\prime }- \mathcal{L}$ be an $n$-ary relation symbol. For a class $\mathcal{K}$ of $ \mathcal{L}^{\prime }$-structures, the following are equivalent: \begin{enumerate} \item[(1)] There is a formula in $\mathrm{Op}(\mathcal{L})$ (resp. $\left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $) which defines $R$ in $ \mathcal{K}$. \item[(2)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$, all $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{L}}$, all isomorphisms (resp. homomorphisms) $\sigma : \mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$, and all $a_{1},\ldots ,a_{n}\in A_{0}$, we have that $(a_{1},\ldots ,a_{n})\in R^{\mathbf{A}}$ implies $ (\sigma (a_{1}),\ldots ,\sigma (a_{n}))\in R^{\mathbf{B}}$. \end{enumerate} \noindent Moreover, if $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $n$-generated substructures and each one is finite, then we can remove the operator $\mathbb{P}_{u}$ from (2). \end{theorem} \begin{proof} (1)$\Rightarrow $(2). Suppose that $\varphi (\vec{x})\in \mathrm{Op}( \mathcal{L})$ (resp. $\varphi (\vec{x})\in \left[ \bigvee \bigwedge \mathrm{ At}(\mathcal{L})\right] $) defines $R$ in $\mathcal{K}$. Note that $\varphi ( \vec{x})$ defines $R$ in $\mathbb{P}_{u}(\mathcal{K})$. Suppose that $ \mathbf{A},\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$, $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{L}}$, $ \sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$ is an isomorphism (resp. a homomorphism), and fix $a_{1},\ldots ,a_{n}\in A_{0}$ such that $ (a_{1},\ldots ,a_{n})\in R^{\mathbf{A}}$. 
Since \begin{equation*} \mathbf{A}\vDash \varphi (\vec{a}) \end{equation*} and $\varphi (\vec{x})\in \mathrm{Op}(\mathcal{L})$, we have that \begin{equation*} \mathbf{A}_{0}\vDash \varphi (\vec{a})\text{.} \end{equation*} Since $\sigma $ is an isomorphism (resp. $\sigma $ is a homomorphism and $ \varphi (\vec{x})\in \left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $), we have that \begin{equation*} \mathbf{B}_{0}\vDash \varphi (\sigma (a_{1}),\ldots ,\sigma (a_{n}))\text{.} \end{equation*} As $\varphi (\vec{x})\in \mathrm{Op}(\mathcal{L})$, it follows that \begin{equation*} \mathbf{B}\vDash \varphi (\sigma (a_{1}),\ldots ,\sigma (a_{n}))\text{,} \end{equation*} and thus $(\sigma (a_{1}),\ldots ,\sigma (a_{n}))\in R^{\mathbf{B}}$. (2)$\Rightarrow $(1). Let $\mathbf{A}\in \mathbb{P}_{u}(\mathcal{K})$ and $ \vec{a}\in R^{\mathbf{A}}$. Define \begin{equation*} \Delta ^{\vec{a},\mathbf{A}}=\{\alpha (\vec{x}):\alpha \in \mathrm{\pm At}( \mathcal{L})\text{, }\mathbf{A}\vDash \alpha (\vec{a})\} \end{equation*} (resp. $\Delta ^{\vec{a},\mathbf{A}}=\{\alpha (\vec{x}):\alpha \in \mathrm{At }(\mathcal{L})$ and $\mathbf{A}\vDash \alpha (\vec{a})\}$). Take $\mathbf{B} \in \mathbb{P}_{u}(\mathcal{K})$ and $\vec{b}\in B^{n}$ such that $\mathbf{B} \vDash \Delta ^{\vec{a},\mathbf{A}}(\vec{b})$. Let \begin{equation*} \mathbf{A}_{0}=\text{the substructure of }\mathbf{A}_{\mathcal{L}}\text{ generated by }a_{1},\ldots ,a_{n}\text{,} \end{equation*} \begin{equation*} \mathbf{B}_{0}=\text{the substructure of }\mathbf{B}_{\mathcal{L}}\text{ generated by }b_{1},\ldots ,b_{n}\text{.} \end{equation*} Since $\mathbf{B}\vDash \Delta ^{\vec{a},\mathbf{A}}(\vec{b})$, we have that \begin{equation*} a_{1}\mapsto b_{1},\dots ,a_{n}\mapsto b_{n} \end{equation*} extends to an isomorphism (resp.
homomorphism) from $\mathbf{A}_{0}$ onto $ \mathbf{B}_{0}$, which by (2) says that $\vec{b}\in R^{\mathbf{B}}$. So we have proved that \begin{equation*} \mathbb{P}_{u}(\mathcal{K})\vDash \left( \dbigwedge\limits_{\alpha \in \Delta ^{\vec{a},\mathbf{A}}}\alpha (\vec{x})\right) \rightarrow R\left( \vec{x}\right) \text{.} \end{equation*} By compactness, there is a finite subset $\Delta _{0}^{\vec{a},\mathbf{A} }\subseteq \Delta ^{\vec{a},\mathbf{A}}$ such that \begin{equation*} \mathbb{P}_{u}(\mathcal{K})\vDash \left( \dbigwedge\limits_{\alpha \in \Delta _{0}^{\vec{a},\mathbf{A}}}\alpha (\vec{x})\right) \rightarrow R(\vec{x })\text{.} \end{equation*} Next note that \begin{equation*} \mathbb{P}_{u}(\mathcal{K})\vDash \left( \dbigvee\limits_{\mathbf{A}\in \mathbb{P}_{u}(\mathcal{K})\text{, }\vec{a}\in R^{\mathbf{A} }}\dbigwedge\limits_{\alpha \in \Delta _{0}^{\vec{a},\mathbf{A}}}\alpha ( \vec{x})\right) \leftrightarrow R(\vec{x})\text{,} \end{equation*} which by compactness says that \begin{equation*} \mathbb{P}_{u}(\mathcal{K})\vDash \left( \dbigvee\limits_{i=1}^{k}\dbigwedge\limits_{\alpha \in \Delta _{0}^{\vec{a} _{i},\mathbf{A}_{i}}}\alpha (\vec{x})\right) \leftrightarrow R(\vec{x}) \end{equation*} for some $\mathbf{A}_{1},\ldots ,\mathbf{A}_{k}\in \mathbb{P}_{u}(\mathcal{K} )$, $\vec{a}_{1}\in R^{\mathbf{A}_{1}},\ldots ,\vec{a}_{k}\in R^{\mathbf{A} _{k}}$. Now we prove the moreover part. Suppose $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $n$-generated substructures and each one is finite. Thus, there is a finite list of atomic $\mathcal{L}$-formulas $ \alpha _{1}(\vec{x}),\ldots ,\alpha _{k}(\vec{x})$ such that for every atomic $\mathcal{L}$-formula $\alpha (\vec{x})$, there is $j\in \{1,\dots ,k\}$ satisfying $\mathcal{K}\vDash \alpha (\vec{x})\leftrightarrow \alpha _{j}(\vec{x})$. Assume that (2) holds without the ultraproduct operator; we prove (1).
By considerations similar to the above we have that \begin{equation*} \mathcal{K}\vDash \left( \dbigvee\limits_{\mathbf{A}\in \mathcal{K}\text{, } \vec{a}\in R^{\mathbf{A}}}\dbigwedge\limits_{\alpha \in \Delta ^{\vec{a}, \mathbf{A}}}\alpha (\vec{x})\right) \leftrightarrow R(\vec{x})\text{.} \end{equation*} Since each $\Delta ^{\vec{a},\mathbf{A}}$ can be supposed to be included in \begin{equation*} \{\alpha _{1}(\vec{x}),\ldots ,\alpha _{k}(\vec{x})\}\cup \{\lnot \alpha _{1}(\vec{x}),\ldots ,\lnot \alpha _{k}(\vec{x})\} \end{equation*} (resp. $\{\alpha _{1}(\vec{x}),\ldots ,\alpha _{k}(\vec{x})\}$), we have (after removing redundancies) that \begin{equation*} \dbigvee\limits_{\mathbf{A}\in \mathcal{K}\text{, }\vec{a}\in R^{\mathbf{A} }}\dbigwedge\limits_{\alpha \in \Delta ^{\vec{a},\mathbf{A}}}\alpha (\vec{x}) \end{equation*} is a first order formula. Finally, note that the remaining implication is already taken care of by (1)$ \Rightarrow $(2) above. \end{proof} Here is a direct consequence of the above theorem. \begin{corollary} \label{(positive) open locally finite para relaciones}Let $\mathcal{K}$ be any class of $\mathcal{L}$-algebras contained in a locally finite variety. Suppose $\mathbf{A}\rightarrow R^{\mathbf{A}}$ is a map which assigns to each $\mathbf{A}\in \mathcal{K}$ an $n$-ary relation $R^{\mathbf{A} }\subseteq A^{n}$. The following are equivalent: \begin{enumerate} \item[(1)] There is a formula in $\mathrm{Op}(\mathcal{L})$ (resp. $\left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $) which defines $R$ in $ \mathcal{K}$. \item[(2)] For all $\mathbf{A},\mathbf{B}\in \mathcal{K}$, all $\mathbf{A} _{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{ L}}$, all isomorphisms (resp. homomorphisms) $\sigma :\mathbf{A} _{0}\rightarrow \mathbf{B}_{0}$, and all $a_{1},\ldots ,a_{n}\in A_{0}$ we have that $(a_{1},\ldots ,a_{n})\in R^{\mathbf{A}}$ implies $(\sigma (a_{1}),\ldots ,\sigma (a_{n}))\in R^{\mathbf{B}}$.
\end{enumerate} \end{corollary} \begin{proof} Apply the moreover part of Theorem \ref{(positive) open para relaciones} to the class $\{(\mathbf{A},R^{\mathbf{A}}):\mathbf{A}\in \mathcal{K}\}$. \end{proof} As we shall see next it is easy to derive the functional version of Theorem \ref{(positive) open para relaciones}. \begin{theorem} \label{(positive) open}Let $\mathcal{L}\subseteq \mathcal{L}^{\prime }$ be first order languages and let $f_{1},\ldots ,f_{m}\in \mathcal{L}^{\prime }- \mathcal{L}$ be $n$-ary function symbols. For a class $\mathcal{K}$ of $ \mathcal{L}^{\prime }$-structures, the following are equivalent: \begin{enumerate} \item[(1)] There is a formula in $\mathrm{Op}(\mathcal{L})$ (resp. $\left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $) which defines $\vec{f}$ in $\mathcal{K}$. \item[(2)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$, all $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{L}}$, all isomorphisms (resp. homomorphisms) $\sigma : \mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$, and all $a_{1},\ldots ,a_{n}\in A_{0}$ such that\newline $f_{1}^{\mathbf{A}}(\vec{a}),\ldots ,f_{m}^{\mathbf{A}}(\vec{a})\in A_{0}$, we have $\sigma (f_{i}^{\mathbf{A}}(\vec{a}))=f_{i}^{\mathbf{B}}(\sigma (a_{1}),\ldots ,\sigma (a_{n}))$ for all $i\in \{1,\ldots ,m\}$. \end{enumerate} \noindent Moreover, if $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $(n+m)$-generated substructures and each one is finite, then we can remove the operator $\mathbb{P}_{u}$ from (2). \end{theorem} \begin{proof} The implication (1)$\Rightarrow $(2) is analogous to (1)$\Rightarrow $(2) of Theorem \ref{(positive) open para relaciones}. We prove (2)$\Rightarrow $(1). Let $R\notin \mathcal{L}^{\prime }$ be an $ (n+m)$-ary relational symbol. 
For each $\mathbf{A}\in \mathcal{K}$ let \begin{equation*} R^{\mathbf{A}}=\{\left( a_{1},\ldots ,a_{n},f_{1}^{\mathbf{A}}(\vec{a} ),\ldots ,f_{m}^{\mathbf{A}}(\vec{a})\right) :a_{1},\ldots ,a_{n}\in A\} \text{.} \end{equation*} Define the following class of $\mathcal{L}^{\prime }\cup \{R\}$-structures \begin{equation*} \mathcal{K}_{R}=\{\left( \mathbf{A},R^{\mathbf{A}}\right) :\mathbf{A}\in \mathcal{K}\}\text{.} \end{equation*} It is not hard to check that the isomorphism (resp. homomorphism) version of (2) in Theorem \ref{(positive) open para relaciones} holds for the languages $\mathcal{L}\subseteq \mathcal{L}^{\prime }\cup \{R\}$, the relation symbol $ R$ and the class $\mathcal{K}_{R}$. Thus there is $\varphi \left( x_{1},\dots ,x_{n},z_{1},\dots ,z_{m}\right) \in \mathrm{Op}(\mathcal{L})$ (resp. $\left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $) such that $\mathcal{K}_{R}\vDash \varphi \left( \vec{x},\vec{z}\right) \leftrightarrow R\left( \vec{x},\vec{z}\right) $. Now as $\mathcal{K} _{R}\vDash \vec{f}(\vec{x})=\vec{z}\leftrightarrow R\left( \vec{x},\vec{z} \right) $, it immediately follows that $\varphi $ defines $\vec{f}$ in $ \mathcal{K}_{R}$. Hence $\varphi $ defines $\vec{f}$ in $\mathcal{K}$. Next we prove the moreover part. Suppose $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $(n+m)$-generated substructures and each one is finite. Assume (2) without $\mathbb{P}_{u}$. Let $\mathcal{K}_{R}$ be as in the first part of the proof. Note that $\left( \mathcal{K}_{R}\right) _{\mathcal{L}}=\mathcal{K}_{\mathcal{L}}$ and thus we can apply the moreover part of Theorem \ref{(positive) open para relaciones} to obtain $\varphi \in \mathrm{Op}(\mathcal{L})$ (resp. $\left[ \bigvee \bigwedge \mathrm{At}( \mathcal{L})\right] $) defining $R$ in $\mathcal{K}_{R}$. Clearly $\varphi $ defines $\vec{f}$ in $\mathcal{K}$. The remaining implication is immediate by (1)$\Rightarrow $(2). 
\end{proof} \begin{corollary} \label{open para localy finite}Let $\mathcal{K}$ be any class of $\mathcal{L} $-algebras contained in a locally finite variety. Suppose $\mathbf{A} \rightarrow f^{\mathbf{A}}$ is a map which assigns to each $\mathbf{A}\in \mathcal{K}$ an $n$-ary operation $f^{\mathbf{A}}:A^{n}\rightarrow A$. The following are equivalent: \begin{enumerate} \item[(1)] There is a formula in $\mathrm{Op}(\mathcal{L})$ (resp. $\left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $) which defines $f$ in $ \mathcal{K}$. \item[(2)] For all $\mathbf{A},\mathbf{B}\in \mathcal{K}$, all $\mathbf{A} _{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{ L}}$, all isomorphisms (resp. homomorphisms) $\sigma :\mathbf{A} _{0}\rightarrow \mathbf{B}_{0}$, and all $a_{1},\ldots ,a_{n}\in A_{0}$ such that $f^{\mathbf{A}}(\vec{a})\in A_{0}$, we have $\sigma (f^{\mathbf{A}}( \vec{a}))=f^{\mathbf{B}}(\sigma (a_{1}),\ldots ,\sigma (a_{n}))$. \end{enumerate} \end{corollary} \begin{proof} Apply the moreover part of Theorem \ref{(positive) open} to the class $\{( \mathbf{A},f^{\mathbf{A}}):\mathbf{A}\in \mathcal{K}\}$. \end{proof} Let $\mathbf{3}=(\{0,1/2,1\},\max ,\min ,^{\ast },0,1)$, where $0^{\ast }=1$ and $(1/2)^{\ast }=1^{\ast }=0$. Of course, $\mathbf{3}$ is the three-element Stone algebra (see \cite{ba-dw}). Note that the only non-trivial homomorphism between subalgebras of $\mathbf{3}$ is the map $^{\ast \ast }:\{0,1/2,1\}\rightarrow \{0,1\}$. Thus the above corollary applied to the class $\mathcal{K}=\{\mathbf{3}\}$ says that a function $ f:\{0,1/2,1\}^{n}\rightarrow \{0,1/2,1\}$ is definable in $\mathbf{3}$ by a positive open formula in the language of $\mathbf{3}$ iff $f(x_{1},\ldots ,x_{n})^{\ast \ast }=f(x_{1}^{\ast \ast },\ldots ,x_{n}^{\ast \ast })$, for any $x_{1},\ldots ,x_{n}\in \{0,1/2,1\}$.
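The criterion for $\mathbf{3}$ can be checked mechanically. The following brute-force sketch (an illustration only, with the algebra hard-coded over the floats $0$, $0.5$, $1$) tests whether a given function commutes with $^{\ast\ast}$.

```python
from itertools import product

# Three-element Stone algebra 3 = ({0, 1/2, 1}, max, min, *, 0, 1);
# the pseudocomplement * maps 0 -> 1 and 1/2, 1 -> 0, so ** maps
# 0 -> 0 and 1/2, 1 -> 1 (the non-trivial homomorphism onto {0, 1}).
ELEMS = (0.0, 0.5, 1.0)

def star(x):
    return 1.0 if x == 0.0 else 0.0

def star2(x):
    return star(star(x))

def commutes(f, arity):
    """Criterion of the corollary: f(x1..xn)** == f(x1**, .., xn**)."""
    return all(
        star2(f(*xs)) == f(*map(star2, xs))
        for xs in product(ELEMS, repeat=arity)
    )

# The basic operations (being term functions) satisfy the criterion ...
assert commutes(star, 1) and commutes(max, 2) and commutes(min, 2)
# ... while e.g. the constant 1/2 does not, so it is not definable
# in 3 by a positive open formula.
assert not commutes(lambda x: 0.5, 1)
```
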
\subsection*{Applications to definable principal congruences} We apply the above results to give natural proofs of two results on definability of relative principal congruences in quasivarieties. \begin{proposition} \label{aplicacion a dpc}Let $\mathcal{Q}$ be a quasivariety with definable relative principal congruences and let $\mathcal{L}$ be the language of $ \mathcal{Q}$. The following are equivalent: \begin{enumerate} \item[(1)] There is a formula in $\left[ \forall \mathrm{Op}(\mathcal{L}) \right] $ defining relative principal congruences in $\mathcal{Q}$. \item[(2)] There is a formula in $\left[ \bigvee \bigwedge \mathrm{At}( \mathcal{L})\right] $ defining relative principal congruences in $\mathcal{Q} $. \item[(3)] $\mathcal{Q}$ has the relative congruence extension property. \end{enumerate} \end{proposition} \begin{proof} (1)$\Rightarrow $(3). We need the following fact proved in \cite{bl-pi}. \begin{itemize} \item A quasivariety $\mathcal{Q}$ has the relative congruence extension property if for every $\mathbf{A},\mathbf{B}\in \mathcal{Q}$ with $\mathbf{A} \leq \mathbf{B}$ and for all $a,b\in A$ we have that $\theta _{\mathcal{Q}}^{ \mathbf{A}}\left( a,b\right) =\theta _{\mathcal{Q}}^{\mathbf{B}}\left( a,b\right) \cap A^{2}$. \end{itemize} \noindent Note that it is always the case that $\theta _{\mathcal{Q}}^{ \mathbf{A}}\left( a,b\right) \subseteq \theta _{\mathcal{Q}}^{\mathbf{B} }\left( a,b\right) \cap A^{2}$. So, as formulas in $\left[ \forall \mathrm{Op }(\mathcal{L})\right] $ are preserved by subalgebras, (1) implies that $ \theta _{\mathcal{Q}}^{\mathbf{A}}\left( a,b\right) =\theta _{\mathcal{Q}}^{ \mathbf{B}}\left( a,b\right) \cap A^{2}$ for every $\mathbf{A}\leq \mathbf{B} \in \mathcal{Q}$. Thus the fact cited above yields (3). (3)$\Rightarrow $(2). We use the following well known fact. 
\begin{enumerate} \item[(i)] For all $\mathbf{A},\mathbf{B}\in \mathcal{Q}$ and all homomorphisms $\sigma :\mathbf{A}\rightarrow \mathbf{B}$ we have that $ (a,b)\in \theta _{\mathcal{Q}}^{\mathbf{A}}\left( c,d\right) $ implies $ (\sigma (a),\sigma (b))\in \theta _{\mathcal{Q}}^{\mathbf{B}}\left( \sigma \left( c\right) ,\sigma \left( d\right) \right) $. \end{enumerate} \noindent For each $\mathbf{A}\in \mathcal{Q}$, let \begin{equation*} R^{\mathbf{A}}=\{(a,b,c,d):(a,b)\in \theta _{\mathcal{Q}}^{\mathbf{A}}\left( c,d\right) \}\text{,} \end{equation*} and define \begin{equation*} \mathcal{K}=\{\left( \mathbf{A},R^{\mathbf{A}}\right) :\mathbf{A}\in \mathcal{Q}\}\text{.} \end{equation*} Since $\mathcal{Q}$ has definable relative principal congruences, $\mathcal{K }$ is a first order class and hence $\mathbb{P}_{u}(\mathcal{K})\subseteq \mathcal{K}$. Thus, in order to prove that (2) of Theorem \ref{(positive) open para relaciones} holds we need to check that: \begin{enumerate} \item[(ii)] For all $\mathbf{A},\mathbf{B}\in \mathcal{K}$, all $\mathbf{A} _{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{ L}}$, all homomorphisms $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$ and all $a,b,c,d\in A_{0}$, we have that $(a,b,c,d)\in R^{\mathbf{A}}$ implies $(\sigma (a),\sigma (b),\sigma (c),\sigma (d))\in R^{\mathbf{B}}$. \end{enumerate} \noindent Or, equivalently: \begin{enumerate} \item[(iii)] For all $\mathbf{A},\mathbf{B}\in \mathcal{Q}$, all $\mathbf{A} _{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{ L}}$, all homomorphisms $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$ and all $a,b,c,d\in A_{0}$, we have that $(a,b)\in \theta _{\mathcal{Q}}^{ \mathbf{A}}\left( c,d\right) $ implies $(\sigma (a),\sigma (b))\in \theta _{ \mathcal{Q}}^{\mathbf{B}}(\sigma (c),\sigma (d))$. 
\end{enumerate}

\noindent Since $\mathcal{Q}$ has the relative congruence extension property, we can replace in (iii) the occurrence of ``$(a,b)\in \theta _{\mathcal{Q}}^{\mathbf{A}}\left( c,d\right) $'' by ``$(a,b)\in \theta _{\mathcal{Q}}^{\mathbf{A}_{0}}(c,d)$''. Hence (iii) follows from (i).

(2)$\Rightarrow $(1). This is trivial.
\end{proof}

\begin{corollary}
\label{aplicacion a dpc 1}Let $\mathcal{Q}$ be a locally finite quasivariety with the relative congruence extension property. Let $\mathcal{L}$ be the language of $\mathcal{Q}$. Then there is a formula in $\left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $ which defines the relative principal congruences in $\mathcal{Q}$.
\end{corollary}

\begin{proof}
The proof is similar to (3)$\Rightarrow $(2) in the above proof, but applying Corollary \ref{(positive) open locally finite para relaciones} in place of Theorem \ref{(positive) open para relaciones}.
\end{proof}

The above corollary is proved in \cite{ba-be} for the case in which $\mathcal{Q}$ is a finitely generated variety.

\section{Definability by open Horn formulas}

\begin{theorem}
\label{open horn para relaciones}Let $\mathcal{L}\subseteq \mathcal{L}^{\prime }$ be first order languages and let $R\in \mathcal{L}^{\prime }-\mathcal{L}$ be an $n$-ary relation symbol. Let $\mathcal{K}$ be any class of $\mathcal{L}^{\prime }$-structures. The following are equivalent:

\begin{enumerate}
\item[(1)] There is a formula in $\mathrm{OpHorn}(\mathcal{L})$ which defines $R$ in $\mathcal{K}$.

\item[(2)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}\mathbb{P}_{\mathrm{fin}}(\mathcal{K})$, all $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{L}}$, all isomorphisms $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$, and all $a_{1},\ldots ,a_{n}\in A_{0}$, we have that $(a_{1},\ldots ,a_{n})\in R^{\mathbf{A}}$ implies $(\sigma (a_{1}),\ldots ,\sigma (a_{n}))\in R^{\mathbf{B}}$.
\end{enumerate} \noindent Moreover, if $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $n$-generated substructures and each one is finite, then we can remove the operator $\mathbb{P}_{u}$ from (2). \end{theorem} \begin{proof} (1)$\Rightarrow $(2). Note that if $\varphi \left( \vec{x}\right) \in \mathrm{OpHorn}(\mathcal{L})$ defines $R$ in $\mathcal{K}$, then $\varphi $ defines $R$ in $\mathbb{P}_{u}\mathbb{P}_{\mathrm{fin}}(\mathcal{K})$ as well. Now we can repeat the argument of (1)$\Rightarrow $(2) in the proof of Theorem \ref{(positive) open para relaciones}. (2)$\Rightarrow $(1). Applying Theorem \ref{(positive) open para relaciones} to the class $\mathbb{P}_{\mathrm{fin}}(\mathcal{K})$ we have that there is an open $\mathcal{L}$-formula $\varphi $ which defines $R$ in $\mathbb{P}_{ \mathrm{fin}}(\mathcal{K})$. W.l.o.g. we can suppose that \begin{equation*} \varphi =\bigwedge_{j=1}^{r}\left( \pi _{j}\rightarrow \bigvee_{i=1}^{k_{j}}\alpha _{i}^{j}\right) \wedge \bigwedge_{j=1}^{l}\bigvee_{i=1}^{u_{j}}\lnot \beta _{i}^{j} \end{equation*} with $r,k_{j},u_{j}\geq 1$, $l\geq 0$, each $\pi _{j}$ in $\left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $ and the formulas $\alpha _{i}^{j},\beta _{i}^{j}$ in $\mathrm{At}(\mathcal{L})$. Note that for $\mathbf{A}\in \mathbb{P}_{\mathrm{fin}}(\mathcal{K})$ and $\vec{a}\in R^{\mathbf{A}}$ we have that \begin{equation*} \mathbf{A}\vDash \left( \bigwedge_{j=1}^{l}\bigvee_{i=1}^{u_{j}}\lnot \beta _{i}^{j}\right) (\vec{a})\text{.} \end{equation*} Let \begin{equation*} S=\{(s_{1},\ldots ,s_{r}):1\leq s_{j}\leq k_{j},\text{ \ }j=1,\ldots ,r\} \text{.} \end{equation*} We claim that there is $s\in S$ such that $\bigwedge_{j=1}^{r}\left( \pi _{j}\rightarrow \alpha _{s_{j}}^{j}\right) \wedge \bigwedge_{j=1}^{l}\bigvee_{i=1}^{u_{j}}\lnot \beta _{i}^{j}$ defines $R$ in $\mathcal{K}$. For the sake of contradiction assume that this is not the case. 
Then for each $s\in S$ there are $\mathbf{A}_{s}\in \mathcal{K}$, $ \vec{a}_{s}=(a_{s1},\dots ,a_{sn})\in R^{\mathbf{A}_{s}}$ and $j_{s}$ such that \begin{equation*} \mathbf{A}_{s}\vDash \lnot \left( \pi _{j_{s}}\rightarrow \alpha _{s_{j_{s}}}^{j_{s}}\right) (\vec{a}_{s})\text{,} \end{equation*} or equivalently \begin{equation} \mathbf{A}_{s}\vDash \pi _{j_{s}}(\vec{a}_{s})\wedge \lnot \alpha _{s_{j_{s}}}^{j_{s}}(\vec{a}_{s})\text{.} \tag{i} \label{ec prueba horn} \end{equation} Let $p_{1}=(a_{s1})_{s\in S},\dots ,p_{n}=(a_{sn})_{s\in S}$. Note that $ \vec{p}\in R^{\Pi _{S}\mathbf{A}_{s}}$, and as $\varphi $ defines $R$ in $ \mathbb{P}_{\mathrm{fin}}(\mathcal{K})$, we have \begin{equation*} \prod_{s\in S}\mathbf{A}_{s}\vDash \varphi (\vec{p})\text{.} \end{equation*} This implies \begin{equation*} \prod_{s\in S}\mathbf{A}_{s}\vDash \bigwedge_{j=1}^{r}\left( \pi _{j}\rightarrow \bigvee_{i=1}^{k_{j}}\alpha _{i}^{j}\right) (\vec{p})\text{.} \end{equation*} By this and (\ref{ec prueba horn}) we have \begin{equation*} \prod_{s\in S}\mathbf{A}_{s}\vDash \left( \bigwedge_{j=1}^{r}\pi _{j}\right) (\vec{p})\text{.} \end{equation*} Hence for each $j\in \{1,\ldots ,r\}$ there is an $s_{j}$ such that \begin{equation*} \prod_{s\in S}\mathbf{A}_{s}\vDash \alpha _{s_{j}}^{j}(\vec{p})\text{.} \end{equation*} Let $s=(s_{1},\ldots ,s_{r})$. Then we have that \begin{equation*} \mathbf{A}_{s}\vDash \alpha _{s_{j_{s}}}^{j_{s}}(\vec{a}_{s})\text{,} \end{equation*} which contradicts (\ref{ec prueba horn}). To prove the moreover part, suppose $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $n$-generated substructures and each one is finite. Assume (2) without the ultraproduct operator. We prove (1). 
Note that our hypothesis on $\mathcal{K}_{\mathcal{L}}$ implies that there are atomic $\mathcal{L}$-formulas $\alpha _{1}(x_{1},\dots ,x_{n}),\ldots ,\alpha _{k}(x_{1},\dots ,x_{n})$ such that for every atomic $\mathcal{L}$-formula $\alpha (\vec{x})$ there is $j$ satisfying $\mathcal{K}\vDash \alpha (\vec{x})\leftrightarrow \alpha _{j}(\vec{x})$. Since atomic formulas are preserved by direct products and by direct factors, we have that for every atomic $\mathcal{L}$-formula $\alpha (\vec{x})$, there is $j$ such that $\mathbb{P}_{\mathrm{fin}}(\mathcal{K})\vDash \alpha (\vec{x})\leftrightarrow \alpha _{j}(\vec{x})$. This implies that $\mathbb{P}_{\mathrm{fin}}(\mathcal{K})_{\mathcal{L}}$ has finitely many isomorphism types of $n$-generated substructures and each one is finite. By Theorem \ref{(positive) open para relaciones} there is an open $\mathcal{L}$-formula $\varphi $ which defines $R$ in $\mathbb{P}_{\mathrm{fin}}(\mathcal{K})$. Now we can proceed as in the first part of this proof.
\end{proof}

By a \emph{trivial} $\mathcal{L}$-structure we mean a structure $\mathbf{A}$ such that $A=\{a\}$ and $(a,\ldots ,a)\in R^{\mathbf{A}}$, for every $R\in \mathcal{L}$. Recall that a \emph{strict Horn formula} is a Horn formula that has exactly one non-negated atomic formula in each of its clauses. Let us write $\mathrm{OpStHorn}(\mathcal{L})$ for the set of open strict Horn $\mathcal{L}$-formulas.

\begin{remark}
Theorem \ref{open horn para relaciones} holds if we replace in (1) $\mathrm{OpHorn}(\mathcal{L})$ by $\mathrm{OpStHorn}(\mathcal{L})$ and add the following requirement to (2):

\begin{description}
\item For all $\mathbf{A}\in \mathbb{P}_{u}(\mathcal{K})$ such that $\mathbf{A}_{\mathcal{L}}$ has a trivial substructure $\{a\}$, we have $(a,\ldots ,a)\in R^{\mathbf{A}}$.
\end{description}
\end{remark}

\begin{proof}
(1)$\Rightarrow $(2). Observe that formulas in $\mathrm{OpStHorn}(\mathcal{L})$ are always satisfied in trivial structures.
To see (2)$\Rightarrow $(1) note that by Theorem \ref{open horn para relaciones} there is a formula
\begin{equation*}
\varphi (\vec{x})=\bigwedge_{j=1}^{r}\left( \pi _{j}\rightarrow \alpha _{j}\right) \wedge \bigwedge_{j=1}^{l}\bigvee_{i=1}^{u_{j}}\lnot \beta _{i}^{j}
\end{equation*}
(with $r,u_{j}\geq 1$, $l\geq 0$, each $\pi _{j}$ in $\left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $ and the formulas $\alpha _{j},\beta _{i}^{j}$ in $\mathrm{At}(\mathcal{L})$) which defines $R$ in $\mathcal{K}$. Note that $\varphi (\vec{x})$ also defines $R$ in $\mathbb{P}_{u}(\mathcal{K})$. Assume $l\geq 1$ and suppose $\mathbf{A}\in \mathbb{P}_{u}(\mathcal{K})$ is such that $\mathbf{A}_{\mathcal{L}}$ has a trivial substructure $\{a\}$. The additional condition of (2) gives $(a,\ldots ,a)\in R^{\mathbf{A}}$, and since $\varphi $ defines $R$ in $\mathbb{P}_{u}(\mathcal{K})$, we obtain
\begin{equation*}
\mathbf{A}\vDash \varphi (a,\ldots ,a)\text{,}
\end{equation*}
which is absurd since $l\geq 1$ (every atomic $\mathcal{L}$-formula holds at $(a,\ldots ,a)$, so $\bigvee_{i=1}^{u_{1}}\lnot \beta _{i}^{1}$ fails). Thus we have proved that there is no trivial substructure in $\mathbb{P}_{u}(\mathcal{K})_{\mathcal{L}}$. Hence
\begin{equation*}
\mathbb{P}_{u}(\mathcal{K})\vDash \dbigvee\limits_{\alpha (z_{1})\in \mathrm{At}(\mathcal{L})}\lnot \alpha (z_{1})\text{,}
\end{equation*}
which by compactness says that
\begin{equation*}
\mathcal{K}\vDash \lnot \tilde{\alpha}_{1}(z_{1})\vee \ldots \vee \lnot \tilde{\alpha}_{k}(z_{1})
\end{equation*}
for some atomic $\mathcal{L}$-formulas $\tilde{\alpha}_{1}(z_{1}),\ldots ,\tilde{\alpha}_{k}(z_{1})$. Now it is easy to check that the formula
\begin{equation*}
\bigwedge_{j=1}^{r}\left( \pi _{j}\rightarrow \alpha _{j}\right) \wedge \bigwedge_{j=1}^{l}\bigwedge_{t=1}^{k}\left( \bigwedge_{i=1}^{u_{j}}\beta _{i}^{j}\rightarrow \tilde{\alpha}_{t}\right)
\end{equation*}
defines $R$ in $\mathcal{K}$.
\end{proof}

Here is the functional version of Theorem \ref{open horn para relaciones}.

\begin{theorem}
\label{openHorn}Let $\mathcal{L}\subseteq \mathcal{L}^{\prime }$ be first order languages and let $f_{1},\ldots ,f_{m}\in \mathcal{L}^{\prime }-\mathcal{L}$ be $n$-ary function symbols.
For a class $\mathcal{K}$ of $\mathcal{L}^{\prime }$-structures, the following are equivalent:

\begin{enumerate}
\item[(1)] There is a formula in $\mathrm{OpHorn}(\mathcal{L})$ (resp. $\mathrm{OpStHorn}(\mathcal{L})$) which defines $\vec{f}$ in $\mathcal{K}$.

\item[(2)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}\mathbb{P}_{\mathrm{fin}}(\mathcal{K})$, all $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{L}}$, all isomorphisms $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$, and all $a_{1},\ldots ,a_{n}\in A_{0}$ such that $f_{1}^{\mathbf{A}}(\vec{a}),\ldots ,f_{m}^{\mathbf{A}}(\vec{a})\in A_{0}$, we have that $\sigma (f_{i}^{\mathbf{A}}(\vec{a}))=f_{i}^{\mathbf{B}}(\sigma (a_{1}),\ldots ,\sigma (a_{n}))$ for all $i\in \{1,\ldots ,m\}$. (For all $\mathbf{A}\in \mathcal{K}_{\mathcal{L}}$ and every trivial substructure $\{a\}$, we have $f_{i}^{\mathbf{A}}(a,\ldots ,a)=a$ for all $i\in \{1,\ldots ,m\}$.)
\end{enumerate}

\noindent Moreover, if $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $(n+m)$-generated substructures and each one is finite, then we can remove the operator $\mathbb{P}_{u}$ from (2).
\end{theorem}

\begin{proof}
This can be proved by applying Theorem \ref{open horn para relaciones} in the same way as we applied Theorem \ref{(positive) open para relaciones} to prove Theorem \ref{(positive) open}.
\end{proof}

\begin{corollary}
Let $\mathcal{K}$ be any class of $\mathcal{L}$-algebras contained in a locally finite variety. Suppose $\mathbf{A}\rightarrow f^{\mathbf{A}}$ is a map which assigns to each $\mathbf{A}\in \mathcal{K}$ an $n$-ary operation $f^{\mathbf{A}}:A^{n}\rightarrow A$. The following are equivalent:

\begin{enumerate}
\item[(1)] There is a conjunction of $\mathcal{L}$-formulas of the form $\left( \bigwedge_{i}p_{i}=q_{i}\right) \rightarrow r=s$ which defines $f$ in $\mathcal{K}$.
\item[(2)] The following conditions hold:

\begin{enumerate}
\item[(a)] For all $\mathbf{A}\in \mathcal{K}$ and all $\{a\}\leq \mathbf{A}$, we have $f^{\mathbf{A}}(a,\dots ,a)=a$.

\item[(b)] For all $\mathbf{S}_{1}\leq \mathbf{A}_{1}\times \ldots \times \mathbf{A}_{k}$, $\mathbf{S}_{2}\leq \mathbf{B}_{1}\times \ldots \times \mathbf{B}_{l}$, with $\mathbf{A}_{1},\ldots ,\mathbf{A}_{k},\mathbf{B}_{1},\ldots ,\mathbf{B}_{l}\in \mathcal{K}$, all isomorphisms $\sigma :\mathbf{S}_{1}\rightarrow \mathbf{S}_{2}$, and all $p_{1},\ldots ,p_{n}\in S_{1}$ such that $f^{\mathbf{A}_{1}}\times \ldots \times f^{\mathbf{A}_{k}}(p_{1},\ldots ,p_{n})\in S_{1}$, we have\newline
$\sigma (f^{\mathbf{A}_{1}}\times \ldots \times f^{\mathbf{A}_{k}}(p_{1},\ldots ,p_{n}))=f^{\mathbf{B}_{1}}\times \ldots \times f^{\mathbf{B}_{l}}(\sigma (p_{1}),\dots ,\sigma (p_{n}))$.
\end{enumerate}
\end{enumerate}
\end{corollary}

\begin{proof}
Apply the moreover part of Theorem \ref{openHorn} to the class $\{(\mathbf{A},f^{\mathbf{A}}):\mathbf{A}\in \mathcal{K}\}$.
\end{proof}

\section{\label{Sec: definibilidad por atomicas}Definability by conjunctions of atomic formulas}

\begin{theorem}
\label{conjuncion de atomicas para relaciones}Let $\mathcal{L}\subseteq \mathcal{L}^{\prime }$ be first order languages and let $R\in \mathcal{L}^{\prime }-\mathcal{L}$ be an $n$-ary relation symbol. Let $\mathcal{K}$ be any class of $\mathcal{L}^{\prime }$-structures. The following are equivalent:

\begin{enumerate}
\item[(1)] There is a formula in $\left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $ which defines $R$ in $\mathcal{K}$.
\item[(2)] For all $\mathbf{A}\in \mathbb{P}_{u}\mathbb{P}_{\mathrm{fin}}( \mathcal{K})$, all $\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$, $\mathbf{A} _{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{ L}}$, all homomorphisms $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$, and all $a_{1},\ldots ,a_{n}\in A_{0}$, we have that $(a_{1},\ldots ,a_{n})\in R^{\mathbf{A}}$ implies $(\sigma (a_{1}),\ldots ,\sigma (a_{n}))\in R^{\mathbf{B}}$. \end{enumerate} \noindent Moreover, if $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $n$-generated substructures and each one is finite, then we can remove the operator $\mathbb{P}_{u}$ from (2). \end{theorem} \begin{proof} (1)$\Rightarrow $(2). This is analogous to the proof of (1)$\Rightarrow $(2) in Theorem \ref{open horn para relaciones}. (2)$\Rightarrow $(1). Note that (2) holds also when $\mathbf{B}$ is in $ \mathbb{ISPP}_{u}(\mathcal{K})$. Since $\mathbb{P}_{u}\mathbb{P}_{\mathrm{fin }}(\mathcal{K})\subseteq \mathbb{ISPP}_{u}(\mathcal{K})$, applying Theorem \ref{(positive) open para relaciones} to the class $\mathbb{P}_{\mathrm{fin} }(\mathcal{K})$ we have that there is an $\mathcal{L}$-formula $\varphi =\pi _{1}\vee \ldots \vee \pi _{k}$, with each $\pi _{i}$ a conjunction of atomic formulas, which defines $R$ in $\mathbb{P}_{\mathrm{fin}}(\mathcal{K})$. Using the same argument as in the proof of Theorem \ref{open horn para relaciones} we can prove that there is $j$ such that $\pi _{j}$ defines $R$ in $\mathcal{K}$. The proof of the moreover part is similar to the corresponding part of the proof of Theorem \ref{open horn para relaciones}. \end{proof} \begin{corollary} \label{conjuncion de atomicas para relaciones caso localmente finito}Let $ \mathcal{K}$ be a class of $\mathcal{L}$-algebras contained in a locally finite variety. 
Suppose $\mathbf{A}\rightarrow R^{\mathbf{A}}$ is a map which assigns to each $\mathbf{A}\in \mathcal{K}$ an $n$-ary relation $R^{\mathbf{A}}\subseteq A^{n}$. The following are equivalent:

\begin{enumerate}
\item[(1)] There is a formula in $\left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $ which defines $R$ in $\mathcal{K}$.

\item[(2)] For all $k\in \mathbb{N}$, all $\mathbf{A}_{1},\ldots ,\mathbf{A}_{k},\mathbf{A}\in \mathcal{K}$, all $\mathbf{S}\leq \mathbf{A}_{1}\times \ldots \times \mathbf{A}_{k}$, all homomorphisms $\sigma :\mathbf{S}\rightarrow \mathbf{A}$, and all $p_{1},\dots ,p_{n}\in S$, we have that $(p_{1},\ldots ,p_{n})\in R^{\mathbf{A}_{1}}\times \ldots \times R^{\mathbf{A}_{k}}$ implies $(\sigma (p_{1}),\dots ,\sigma (p_{n}))\in R^{\mathbf{A}}$.
\end{enumerate}
\end{corollary}

\begin{proof}
Apply the moreover part of Theorem \ref{conjuncion de atomicas para relaciones} to the class $\{(\mathbf{A},R^{\mathbf{A}}):\mathbf{A}\in \mathcal{K}\}$.
\end{proof}

Next we present the corresponding results on definability of functions by conjunctions of atomic formulas.

\begin{theorem}
\label{conjuncion de atomicas}Let $\mathcal{L}\subseteq \mathcal{L}^{\prime }$ be first order languages and let $f_{1},\ldots ,f_{m}\in \mathcal{L}^{\prime }-\mathcal{L}$ be $n$-ary function symbols. For a class $\mathcal{K}$ of $\mathcal{L}^{\prime }$-structures, the following are equivalent:

\begin{enumerate}
\item[(1)] There is a formula $\varphi $ in $\left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $ which defines $\vec{f}$ in $\mathcal{K}$.
\item[(2)] For all $\mathbf{A}\in \mathbb{P}_{u}\mathbb{P}_{\mathrm{fin}}(\mathcal{K})$, all $\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$, $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{L}}$, all homomorphisms $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$, and all $a_{1},\ldots ,a_{n}\in A_{0}$ such that $f_{1}^{\mathbf{A}}(\vec{a}),\ldots ,f_{m}^{\mathbf{A}}(\vec{a})\in A_{0}$, we have $\sigma (f_{i}^{\mathbf{A}}(\vec{a}))=f_{i}^{\mathbf{B}}(\sigma (a_{1}),\ldots ,\sigma (a_{n}))$ for all $i\in \{1,\ldots ,m\}$.
\end{enumerate}

\noindent Moreover, if $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $(n+m)$-generated substructures and each one is finite, then we can remove the operator $\mathbb{P}_{u}$ from (2).
\end{theorem}

\begin{proof}
This can be proved by applying Theorem \ref{conjuncion de atomicas para relaciones} in the same way as we applied Theorem \ref{(positive) open para relaciones} to prove Theorem \ref{(positive) open}.
\end{proof}

\begin{corollary}
Let $\mathcal{K}$ be a class of $\mathcal{L}$-algebras contained in a locally finite variety. Suppose $\mathbf{A}\rightarrow f^{\mathbf{A}}$ is a map which assigns to each $\mathbf{A}\in \mathcal{K}$ an $n$-ary operation $f^{\mathbf{A}}:A^{n}\rightarrow A$. The following are equivalent:

\begin{enumerate}
\item[(1)] There is a formula in $\left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $ which defines $f$ in $\mathcal{K}$.

\item[(2)] For all $k\in \mathbb{N}$, all $\mathbf{A}_{1},\ldots ,\mathbf{A}_{k},\mathbf{A}\in \mathcal{K}$, all $\mathbf{S}\leq \mathbf{A}_{1}\times \ldots \times \mathbf{A}_{k}$, all homomorphisms $\sigma :\mathbf{S}\rightarrow \mathbf{A}$, and all $p_{1},\ldots ,p_{n}\in S$ such that $f^{\mathbf{A}_{1}}\times \ldots \times f^{\mathbf{A}_{k}}(p_{1},\ldots ,p_{n})\in S$, we have $\sigma (f^{\mathbf{A}_{1}}\times \ldots \times f^{\mathbf{A}_{k}}(p_{1},\ldots ,p_{n}))=f^{\mathbf{A}}(\sigma (p_{1}),\dots ,\sigma (p_{n}))$.
\end{enumerate}
\end{corollary}

\begin{proof}
Apply the moreover part of Theorem \ref{conjuncion de atomicas} to the class $\{(\mathbf{A},f^{\mathbf{A}}):\mathbf{A}\in \mathcal{K}\}$.
\end{proof}

\subsection*{Applications to definable principal congruences}

A variety $\mathcal{V}$ in a language $\mathcal{L}$ has \emph{equationally definable principal congruences} if there exists a formula $\varphi (x,y,z,w)\in \left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $ such that
\begin{equation*}
\theta ^{\mathbf{A}}(a,b)=\{(c,d):\mathbf{A}\vDash \varphi (a,b,c,d)\}
\end{equation*}
for any $a,b\in A$, $\mathbf{A}\in \mathcal{V}$. (This notion is called \emph{equationally definable principal congruences in the restricted sense} in \cite{bu}.) The variety $\mathcal{V}$ has the \emph{Fraser-Horn property} if for every $\mathbf{A}_{1},\mathbf{A}_{2}\in \mathcal{V}$ and $\theta \in \mathrm{Con}(\mathbf{A}_{1}\times \mathbf{A}_{2})$, there are $\theta _{1}\in \mathrm{Con}(\mathbf{A}_{1})$ and $\theta _{2}\in \mathrm{Con}(\mathbf{A}_{2})$ such that $\theta =\theta _{1}\times \theta _{2}$ (i.e., algebras in $\mathcal{V}$ do not have skew congruences). Here is an interesting application of Theorem \ref{conjuncion de atomicas para relaciones}.

\begin{proposition}
\label{aplicacion a dpc 2}Let $\mathcal{V}$ be a variety with definable principal congruences. The following are equivalent:

\begin{enumerate}
\item[(1)] $\mathcal{V}$ has equationally definable principal congruences.

\item[(2)] $\mathcal{V}$ has the congruence extension property and the Fraser-Horn property.
\end{enumerate}
\end{proposition}

\begin{proof}
It is well known (see \cite{bu}) that:

\begin{enumerate}
\item[(i)] A variety has the Fraser-Horn property iff for all $n\in \mathbb{N}$, all $\mathbf{A}_{1},\dots ,\mathbf{A}_{n}\in \mathcal{V}$, and all $(a_{1},\ldots ,a_{n}),(b_{1},\ldots ,b_{n})\in A_{1}\times \ldots \times A_{n}$ we have
\begin{equation*}
\theta ^{\mathbf{A}_{1}\times \ldots \times \mathbf{A}_{n}}((a_{1},\ldots ,a_{n}),(b_{1},\ldots ,b_{n}))=\theta ^{\mathbf{A}_{1}}(a_{1},b_{1})\times \ldots \times \theta ^{\mathbf{A}_{n}}(a_{n},b_{n})\text{.}
\end{equation*}
\end{enumerate}

(1)$\Rightarrow $(2). By Proposition \ref{aplicacion a dpc} we have that $\mathcal{V}$ has the congruence extension property. Also we note that (i) and (1) imply that $\mathcal{V}$ has the Fraser-Horn property.

(2)$\Rightarrow $(1). Let $\mathcal{L}$ be the language of $\mathcal{V}$. We will use the following well known fact.

\begin{enumerate}
\item[(ii)] If $\sigma :\mathbf{A}\rightarrow \mathbf{B}$ is a homomorphism, then $(a,b)\in \theta ^{\mathbf{A}}\left( c,d\right) $ implies $(\sigma (a),\sigma (b))\in \theta ^{\mathbf{B}}\left( \sigma \left( c\right) ,\sigma \left( d\right) \right) $.
\end{enumerate}

\noindent For each $\mathbf{A}\in \mathcal{V}$, let
\begin{equation*}
R^{\mathbf{A}}=\{(a,b,c,d):(a,b)\in \theta ^{\mathbf{A}}\left( c,d\right) \}\text{,}
\end{equation*}
and define
\begin{equation*}
\mathcal{K}=\{\left( \mathbf{A},R^{\mathbf{A}}\right) :\mathbf{A}\in \mathcal{V}\}\text{.}
\end{equation*}
Since $\mathcal{V}$ has definable principal congruences, $\mathcal{K}$ is a first order class and hence $\mathbb{P}_{u}(\mathcal{K})\subseteq \mathcal{K}$. Since $\mathcal{V}$ has the Fraser-Horn property, (i) says that $\mathbb{P}_{\mathrm{fin}}(\mathcal{K})\subseteq \mathcal{K}$.
Thus, in order to prove that (2) of Theorem \ref{conjuncion de atomicas para relaciones} holds we need to check that:

\begin{enumerate}
\item[(iii)] For all $\mathbf{A},\mathbf{B}\in \mathcal{K}$, all $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{L}}$, all homomorphisms $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$ and all $a,b,c,d\in A_{0}$, we have that $(a,b,c,d)\in R^{\mathbf{A}}$ implies $(\sigma (a),\sigma (b),\sigma (c),\sigma (d))\in R^{\mathbf{B}}$.
\end{enumerate}

\noindent Or, equivalently:

\begin{enumerate}
\item[(iv)] For all $\mathbf{A},\mathbf{B}\in \mathcal{V}$, all $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{L}}$, all homomorphisms $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$ and all $a,b,c,d\in A_{0}$, we have that $(a,b)\in \theta ^{\mathbf{A}}\left( c,d\right) $ implies $(\sigma (a),\sigma (b))\in \theta ^{\mathbf{B}}(\sigma (c),\sigma (d))$.
\end{enumerate}

\noindent Since $\mathcal{V}$ has the congruence extension property, we can replace in (iv) the occurrence of ``$(a,b)\in \theta ^{\mathbf{A}}\left( c,d\right) $'' by ``$(a,b)\in \theta ^{\mathbf{A}_{0}}(c,d)$''. Hence (iv) follows from (ii).
\end{proof}

\begin{corollary}
\label{aplicacion a dpc 3}A locally finite variety with the congruence extension property and the Fraser-Horn property has equationally definable principal congruences.
\end{corollary}

\begin{proof}
Combine Corollary \ref{aplicacion a dpc 1} with Proposition \ref{aplicacion a dpc 2}.
\end{proof}

It is worth mentioning that in the terminology of \cite{bu}, a variety is said to have equationally definable principal congruences if there is a formula of the form $\exists \bigwedge p=q$ which defines the principal congruences. Thus, Theorem 4 of \cite{bu} is not in contradiction with Proposition \ref{aplicacion a dpc 2}; in fact it coincides with Corollary \ref{aplicacion a dpc 4} of the next section.
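By way of illustration, consider the classical example of the variety $\mathcal{D}$ of distributive lattices: $\mathcal{D}$ is locally finite and has both the congruence extension property and the Fraser-Horn property, so Corollary \ref{aplicacion a dpc 3} applies. Indeed, principal congruences in a distributive lattice admit the well-known description
\begin{equation*}
\theta ^{\mathbf{A}}(a,b)=\{(c,d)\in A^{2}:c\wedge a\wedge b=d\wedge a\wedge b\text{ and }c\vee a\vee b=d\vee a\vee b\}\text{,}
\end{equation*}
so the formula $\left( z\wedge x\wedge y=w\wedge x\wedge y\right) \wedge \left( z\vee x\vee y=w\vee x\vee y\right) $, which lies in $\left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $, equationally defines principal congruences in $\mathcal{D}$.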
\subsection*{Two translation results} As another application of Theorem \ref{conjuncion de atomicas para relaciones}, we obtain a model theoretic proof of the following translation result. \begin{proposition}[{\protect\cite[Thm 2.3]{cz-dz}}] \label{traduccion}Let $\mathcal{K}$ be a universal class of $\mathcal{L}$ -algebras such that $\mathcal{K}\subseteq \mathcal{Q}_{RFSI}$, for some relatively congruence distributive quasivariety $\mathcal{Q}$. Let $\varphi ( \vec{x})\in \left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $. Then there are $\mathcal{L}$-terms $p_{i},q_{i}$, $i=1,\ldots ,r$ such that \begin{equation*} \mathcal{K}\vDash \varphi (\vec{x})\leftrightarrow \left( \dbigwedge_{i=1}^{r}p_{i}(\vec{x})=q_{i}(\vec{x})\right) \text{.} \end{equation*} \end{proposition} \begin{proof} We start by proving the proposition for the formula \begin{equation*} \varphi =\left( x_{1}=x_{2}\vee x_{3}=x_{4}\right) \text{.} \end{equation*} Given $\mathbf{A}\in \mathcal{K}$, let \begin{equation*} R^{\mathbf{A}}=\{\vec{a}\in A^{4}:\mathbf{A}\vDash \varphi (\vec{a} )\}=\{(a,b,c,d)\in A^{4}:a=b\text{ or }c=d\}\text{.} \end{equation*} Define $\mathcal{K}^{\prime }=\{(\mathbf{A},R^{\mathbf{A}}):\mathbf{A}\in \mathcal{K}\}$. Note that $\mathcal{K}^{\prime }$ is universal. We aim to apply (2) of Theorem \ref{conjuncion de atomicas para relaciones}, so we need to show that: \begin{itemize} \item For all $\mathbf{A}\in \mathbb{P}_{u}\mathbb{P}_{\mathrm{fin}}( \mathcal{K}^{\prime })$, all $\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K} ^{\prime })$, all $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B} _{0}\leq \mathbf{B}_{\mathcal{L}}$, all homomorphisms $\sigma :\mathbf{A} _{0}\rightarrow \mathbf{B}_{0}$, and all $a,b,c,d\in A_{0}$, we have that $ (a,b,c,d)\in R^{\mathbf{A}}$ implies $(\sigma (a),\sigma (b),\sigma (c),\sigma (d))\in R^{\mathbf{B}}$. 
\end{itemize}

Since $\mathcal{K}^{\prime }$ is universal, we can suppose that $\mathbf{B}\in \mathcal{K}^{\prime }$, $\mathbf{B}_{0}=\mathbf{B}_{\mathcal{L}}$ and $\sigma $ is onto. Also, as $\mathbb{P}_{u}\mathbb{P}_{\mathrm{fin}}(\mathcal{K}^{\prime })\subseteq \mathbb{ISPP}_{u}(\mathcal{K}^{\prime })\subseteq \mathbb{ISP}(\mathcal{K}^{\prime })$, we may assume that $\mathbf{A}\leq \Pi \{\mathbf{A}_{i}:i\in I\}$ is a subdirect product with each $\mathbf{A}_{i}$ in $\mathcal{K}^{\prime }$, and that $\mathbf{A}_{0}=\mathbf{A}_{\mathcal{L}}$. Since
\begin{equation*}
\mathbf{A}/\ker \sigma \simeq \mathbf{B}\in \mathcal{K}^{\prime }
\end{equation*}
and $\mathcal{K}\subseteq \mathcal{Q}_{RFSI}$, we have that $\ker \sigma $ is a meet irreducible element of $\mathrm{Con}_{\mathcal{Q}}(\mathbf{A}_{\mathcal{L}})$. So, as $\mathrm{Con}_{\mathcal{Q}}(\mathbf{A}_{\mathcal{L}})$ is distributive, we have that $\ker \sigma $ is a meet prime element of $\mathrm{Con}_{\mathcal{Q}}(\mathbf{A}_{\mathcal{L}})$. Let $(a,b,c,d)\in R^{\mathbf{A}}$. We prove that $(\sigma (a),\sigma (b),\sigma (c),\sigma (d))\in R^{\mathbf{B}}$ (i.e., $\sigma (a)=\sigma (b)$ or $\sigma (c)=\sigma (d)$). For $i\in I$ let $\pi _{i}:\mathbf{A}\rightarrow \mathbf{A}_{i}$ be the canonical projection. Note that $\ker \pi _{i}\in \mathrm{Con}_{\mathcal{Q}}(\mathbf{A}_{\mathcal{L}})$ since $\mathbf{A}/\ker \pi _{i}\simeq \mathbf{A}_{i}\in \mathcal{K}^{\prime }$. From $(a,b,c,d)\in R^{\mathbf{A}}$, it follows that either $a(i)=b(i)$ or $c(i)=d(i)$ for every $i\in I$.
Thus
\begin{equation*}
\bigcap_{(a,b)\in \ker \pi _{i}}\ker \pi _{i}\cap \bigcap_{(c,d)\in \ker \pi _{i}}\ker \pi _{i}=\Delta ^{\mathbf{A}_{\mathcal{L}}}\subseteq \ker \sigma \text{.}
\end{equation*}
Since $\ker \sigma $ is meet prime, this implies that either
\begin{equation*}
\bigcap_{(a,b)\in \ker \pi _{i}}\ker \pi _{i}\subseteq \ker \sigma
\end{equation*}
or
\begin{equation*}
\bigcap_{(c,d)\in \ker \pi _{i}}\ker \pi _{i}\subseteq \ker \sigma \text{,}
\end{equation*}
which implies $\sigma (a)=\sigma (b)$ or $\sigma (c)=\sigma (d)$. This concludes the proof for the case $\varphi =\left( x_{1}=x_{2}\vee x_{3}=x_{4}\right) $. The case in which $\varphi $ is the formula $x_{1}=y_{1}\vee x_{2}=y_{2}\vee \ldots \vee x_{n}=y_{n}$ can be proved in a similar manner. Now, the general case follows easily.
\end{proof}

Strengthening RFSI to RS allows for the translation of any open formula to a conjunction of equations over $\mathcal{K}$. This is proved in \cite{ca-va0} using topological arguments. Combining Proposition \ref{traduccion} and Theorem \ref{(positive) open para relaciones} we get a very simple proof.

\begin{corollary}
\label{traduccion para simples}Let $\mathcal{K}$ be a universal class of $\mathcal{L}$-algebras such that $\mathcal{K}\subseteq \mathcal{Q}_{RS}$, for some relatively congruence distributive quasivariety $\mathcal{Q}$. Let $\varphi (\vec{x})\in \mathrm{Op}(\mathcal{L})$. Then there are $\mathcal{L}$-terms $p_{i},q_{i}$, $i=1,\ldots ,r$ such that
\begin{equation*}
\mathcal{K}\vDash \varphi (\vec{x})\leftrightarrow \left( \dbigwedge_{i=1}^{r}p_{i}(\vec{x})=q_{i}(\vec{x})\right) \text{.}
\end{equation*}
\end{corollary}

\begin{proof}
We show first that there is $\delta (x,y)\in \left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $ such that $\mathcal{K}\vDash x\neq y\leftrightarrow \delta (x,y)$.
Given $\mathbf{A}\in \mathcal{K}$, let
\begin{equation*}
D^{\mathbf{A}}=\{(a,b)\in A^{2}:a\neq b\}\text{,}
\end{equation*}
and define $\mathcal{K}^{\prime }=\{(\mathbf{A},D^{\mathbf{A}}):\mathbf{A}\in \mathcal{K}\}$. Observe that $\mathcal{K}^{\prime }$ is universal. We want to apply Theorem \ref{(positive) open para relaciones}, thus we need to check that:

\begin{itemize}
\item For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K}^{\prime })$, all $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{L}}$, all homomorphisms $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$, and all $a,b\in A_{0}$, we have that $a\neq b$ implies $\sigma (a)\neq \sigma (b)$.
\end{itemize}

\noindent But this is easy. Just note that both $\mathbf{A}_{0}$ and $\operatorname{Im}\sigma $ are in $\mathcal{Q}_{RS}$ since $\mathcal{K}$ is universal. So $\ker \sigma \in \mathrm{Con}_{\mathcal{Q}}(\mathbf{A}_{0})=\{\Delta ^{\mathbf{A}_{0}},\nabla ^{\mathbf{A}_{0}}\}$, and $\ker \sigma \neq \nabla ^{\mathbf{A}_{0}}$ because $\operatorname{Im}\sigma $ is simple and thus non-trivial. It follows that $\sigma $ is one-to-one. By Theorem \ref{(positive) open para relaciones} we have a formula in $\left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $ that defines $D$ in $\mathcal{K}$.

Now Proposition \ref{traduccion} produces $\varepsilon \left( x,y\right) \in \left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $ such that
\begin{equation*}
\mathcal{K}\vDash x\neq y\leftrightarrow \varepsilon (x,y)\text{.}
\end{equation*}
This, in combination with the fact that disjunctions of equations are equivalent to conjunctions in $\mathcal{K}$ (again by Proposition \ref{traduccion}), lets us translate any open formula to a conjunction of equations over $\mathcal{K}$.
\end{proof}

The translation results above produce the following interesting corollaries for a finite algebra.
\begin{corollary} \label{algebraicas para A con traduccion}Suppose $\mathbf{A}$ is a finite $ \mathcal{L}$-algebra such that $\mathbb{S}(\mathbf{A})\subseteq \mathcal{Q} _{RFSI}$ (resp. $\mathbb{S}(\mathbf{A})\subseteq \mathcal{Q}_{RS}$), for some relatively congruence distributive quasivariety $\mathcal{Q}$. Let $ f:A^{n}\rightarrow A$. The following are equivalent: \begin{enumerate} \item[(1)] There is a formula in $\left[ \bigwedge \mathrm{At}(\mathcal{L}) \right] $ which defines $f$ in $\mathbf{A}$. \item[(2)] For all $\mathbf{S}_{1},\mathbf{S}_{2}\leq \mathbf{A}$, all homomorphisms (resp. isomorphisms) $\sigma :\mathbf{S}_{1}\rightarrow \mathbf{S}_{2}$, and all $a_{1},\ldots ,a_{n}\in S_{1}$ such that $f^{ \mathbf{A}}(\vec{a})\in S_{1}$, we have $\sigma (f^{\mathbf{A}}(\vec{a}))=f^{ \mathbf{A}}(\sigma (a_{1}),\ldots ,\sigma (a_{n}))$. \end{enumerate} \end{corollary} \begin{proof} (1)$\Rightarrow $(2). Apply (1)$\Rightarrow $(2) of Theorem \ref{conjuncion de atomicas}. (2)$\Rightarrow $(1). By Corollary \ref{open para localy finite} there is a formula $\varphi (\vec{x},z)\in \left[ \bigvee \bigwedge \mathrm{At}( \mathcal{L})\right] $ (resp. $\varphi (\vec{x},z)\in \mathrm{Op}(\mathcal{L} ) $) which defines $f$ in $\mathbb{IS}(\mathbf{A})$. Now use Proposition \ref {traduccion} (resp. Corollary \ref{traduccion para simples}) to obtain a conjunction of equations equivalent to $\varphi $ over $\mathbb{IS}(\mathbf{A })$. \end{proof} \begin{corollary} Suppose $\mathbf{A}$ is a finite $\mathcal{L}$-algebra such that $\mathbb{S}( \mathbf{A})\subseteq \mathcal{Q}_{RFSI}$ (resp. $\mathbb{S}(\mathbf{A} )\subseteq \mathcal{Q}_{RS}$), for some relatively congruence distributive quasivariety $\mathcal{Q}$. Let $R\subseteq A^{n}$. The following are equivalent: \begin{enumerate} \item[(1)] There is a formula in $\left[ \bigwedge \mathrm{At}(\mathcal{L}) \right] $ which defines $R$ in $\mathbf{A}$. 
\item[(2)] For all $\mathbf{S}_{1},\mathbf{S}_{2}\leq \mathbf{A}$, all homomorphisms (resp. isomorphisms) $\sigma :\mathbf{S}_{1}\rightarrow \mathbf{S}_{2}$, and all $a_{1},\ldots ,a_{n}\in S_{1}$, we have that $(a_{1},\ldots ,a_{n})\in R$ implies\newline $(\sigma (a_{1}),\ldots ,\sigma (a_{n}))\in R$. \end{enumerate} \end{corollary} \section{Definability by existential formulas} \begin{lemma} \label{embedding a ultraproducto}Let $\mathbf{A},\mathbf{B}$ be $\mathcal{L}$-structures. Suppose for every sentence $\varphi \in \left[ \exists \bigwedge \mathrm{At}(\mathcal{L})\right] $ (resp. $\varphi \in \left[ \exists \bigwedge \mathrm{\pm At}(\mathcal{L})\right] $) we have that $\mathbf{A}\vDash \varphi $ implies $\mathbf{B}\vDash \varphi $. Then there is a homomorphism (resp. embedding) from $\mathbf{A}$ into an ultrapower of $\mathbf{B}$. \end{lemma} \begin{proof} Let $\mathcal{L}_{A}=\mathcal{L}\cup A$, where each element of $A$ is added as a new constant symbol. Define \begin{equation*} \Delta =\{\alpha \left( \vec{a}\right) :\alpha \in \lbrack \bigwedge \mathrm{At}(\mathcal{L})]\text{ and }\mathbf{A}\vDash \alpha (\vec{a})\}\text{,} \end{equation*} i.e., $\Delta $ is the positive atomic diagram of $\mathbf{A}$. Let $I=\mathcal{P}_{\mathrm{fin}}\left( \Delta \right) $, and observe that for every $i\in I$ there is an expansion $\mathbf{B}_{i}$ of $\mathbf{B}$ to $\mathcal{L}_{A}$ such that $\mathbf{B}_{i}\vDash i$. Now take an ultrafilter $u$ over $I$ such that for each $i\in I$ the set $\{j\in I:i\subseteq j\}$ is in $u$. Let $\mathbf{U}=\left. \prod \mathbf{B}_{i}\right/ u$, and notice that $\mathbf{U}\vDash \Delta $. Thus $a\mapsto a^{\mathbf{U}}$ is a homomorphism from $\mathbf{A}$ into $\mathbf{U}_{\mathcal{L}}=\mathbf{B}^{I}/u$. The same proof works for the embedding version of the lemma by taking $\Delta =\{\alpha \left( \vec{a}\right) :\alpha \in \lbrack \mathrm{\pm }\bigwedge \mathrm{At}(\mathcal{L})]$ and $\mathbf{A}\vDash \alpha (\vec{a})\}$. 
\end{proof} \begin{theorem} \label{caso existencial para relaciones}Let $\mathcal{L}\subseteq \mathcal{L} ^{\prime }$ be first order languages and let $R\in \mathcal{L}^{\prime }- \mathcal{L}$ be an $n$-ary relation symbol. Let $\mathcal{K}$ be any class of $\mathcal{L}^{\prime }$-structures. Then \begin{enumerate} \item[(1)] The following are equivalent: \begin{enumerate} \item[(a)] There is a formula in $\left[ \exists \mathrm{Op}(\mathcal{L}) \right] $ which defines $R$ in $\mathcal{K}$. \item[(b)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$ and all embeddings $\sigma :\mathbf{A}_{\mathcal{L}}\rightarrow \mathbf{B}_{ \mathcal{L}}$, we have that $\sigma :(\mathbf{A}_{\mathcal{L}},R^{\mathbf{A} })\rightarrow (\mathbf{B}_{\mathcal{L}},R^{\mathbf{B}})$ is a homomorphism. \end{enumerate} \item[(2)] The following are equivalent: \begin{enumerate} \item[(a)] There is a formula in $\left[ \exists \mathrm{OpHorn}(\mathcal{L}) \right] $ which defines $R$ in $\mathcal{K}$. \item[(b)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}\mathbb{P}_{ \mathrm{fin}}(\mathcal{K})$ and all embeddings $\sigma :\mathbf{A}_{\mathcal{ L}}\rightarrow \mathbf{B}_{\mathcal{L}}$, we have that $\sigma :(\mathbf{A}_{ \mathcal{L}},R^{\mathbf{A}})\rightarrow (\mathbf{B}_{\mathcal{L}},R^{\mathbf{ B}})$ is a homomorphism. \end{enumerate} \item[(3)] The following are equivalent: \begin{enumerate} \item[(a)] There is a formula in $\left[ \exists \bigvee \bigwedge \mathrm{At }(\mathcal{L})\right] $ which defines $R$ in $\mathcal{K}$. \item[(b)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$ and all homomorphisms $\sigma :\mathbf{A}_{\mathcal{L}}\rightarrow \mathbf{B} _{\mathcal{L}}$, we have that $\sigma :(\mathbf{A}_{\mathcal{L}},R^{\mathbf{A }})\rightarrow (\mathbf{B}_{\mathcal{L}},R^{\mathbf{B}})$ is a homomorphism. 
\end{enumerate} \item[(4)] The following are equivalent: \begin{enumerate} \item[(a)] There is a formula in $\left[ \exists \bigwedge \mathrm{At}( \mathcal{L})\right] $ which defines $R$ in $\mathcal{K}$. \item[(b)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}\mathbb{P}_{ \mathrm{fin}}(\mathcal{K})$ and all homomorphisms $\sigma :\mathbf{A}_{ \mathcal{L}}\rightarrow \mathbf{B}_{\mathcal{L}}$, we have that $\sigma :( \mathbf{A}_{\mathcal{L}},R^{\mathbf{A}})\rightarrow (\mathbf{B}_{\mathcal{L} },R^{\mathbf{B}})$ is a homomorphism. \end{enumerate} \end{enumerate} \noindent Moreover, if modulo isomorphism, $\mathcal{K}$ is a finite class of finite structures, then we can remove the operator $\mathbb{P}_{u}$ from the (b) items. \end{theorem} \begin{proof} (1). (a)$\Rightarrow $(b). Note that if $\varphi \left( \vec{x}\right) \in \left[ \exists \mathrm{Op}(\mathcal{L})\right] $ defines $R$ in $\mathcal{K}$ , then $\varphi $ defines $R$ in $\mathbb{P}_{u}(\mathcal{K})$ as well. Now use that $\varphi $ is preserved by embeddings. (b)$\Rightarrow $(a). For $\mathbf{A}\in \mathbb{P}_{u}(\mathcal{K})$ and $ \vec{a}\in R^{\mathbf{A}}$, let \begin{equation*} \Gamma ^{\vec{a},\mathbf{A}}=\{\alpha (\vec{x}):\alpha \in \left[ \exists \dbigwedge \mathrm{\pm At}(\mathcal{L})\right] \text{ and }\mathbf{A}\vDash \alpha (\vec{a})\}\text{.} \end{equation*} Suppose $\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$ and $\mathbf{B}\vDash \Gamma ^{\vec{a},\mathbf{A}}(\vec{b})$. We claim that $\vec{b}\in R^{\mathbf{ B}}$. Let $\mathcal{\tilde{L}}$ be the result of adding $n$ new constant symbols to $\mathcal{L}$. Note that every sentence of $\left[ \exists \dbigwedge \mathrm{\pm At}(\mathcal{\tilde{L}})\right] $ which holds in $( \mathbf{A}_{\mathcal{L}},\vec{a})$ holds in $(\mathbf{B}_{\mathcal{L}},\vec{b })$, which by Lemma \ref{embedding a ultraproducto} says that there is an embedding from $(\mathbf{A}_{\mathcal{L}},\vec{a})$ into an ultrapower $( \mathbf{B}_{\mathcal{L}},\vec{b})^{I}/u$. 
Since $\mathbb{P}_{u}\mathbb{P} _{u}(\mathcal{K})\subseteq \mathbb{P}_{u}(\mathcal{K})$, (b) says that $ \left( b_{1}/u,\dots ,b_{n}/u\right) \in R^{\mathbf{B}^{I}/u}$, which yields $\vec{b}\in R^{\mathbf{B}}$. So we have that \begin{equation*} \mathbb{P}_{u}(\mathcal{K})\vDash \left( \dbigvee\nolimits_{\mathbf{A}\in \mathbb{P}_{u}(\mathcal{K})\text{, }\vec{a}\in R^{\mathbf{A} }}\dbigwedge\nolimits_{\alpha \in \Gamma ^{\vec{a},\mathbf{A}}}\alpha (\vec{x })\right) \leftrightarrow R(\vec{x})\text{,} \end{equation*} and compactness produces the formula. (2). (a)$\Rightarrow $(b). Observe that if $\varphi \left( \vec{x}\right) \in \left[ \exists \mathrm{OpHorn}(\mathcal{L})\right] $ defines $R$ in $ \mathcal{K}$, then $\varphi $ also defines $R$ in $\mathbb{P}_{u}\mathbb{P}_{ \mathrm{fin}}(\mathcal{K})$. Next use that $\varphi $ is preserved by embeddings. (b)$\Rightarrow $(a). By (1) we have that there is $\varphi \in \left[ \exists \mathrm{Op}(\mathcal{L})\right] $ which defines $R$ in $\mathbb{P}_{ \mathrm{fin}}(\mathcal{K})$. Now we can apply a similar argument to that used in the proof of Theorem \ref{openHorn}, to extract from $\varphi $ a formula of $\left[ \exists \mathrm{OpHorn}(\mathcal{L})\right] $ which defines $R$ in $\mathcal{K}$. (3). (b)$\Leftrightarrow $(a). This is similar to (b)$\Leftrightarrow $(a) of (1). (4). (a)$\Rightarrow $(b). Analogous to (a)$\Rightarrow $(b) of (2). (b)$\Rightarrow $(a). Note that (b) holds also when $\mathbf{B}$ is in $ \mathbb{PP}_{u}(\mathcal{K})$. Since $\mathbb{P}_{u}\mathbb{P}_{\mathrm{fin} }(\mathcal{K})\subseteq \mathbb{PP}_{u}(\mathcal{K})$, applying (b)$ \Rightarrow $(a) of (3) to the class $\mathbb{P}_{\mathrm{fin}}(\mathcal{K})$ we have that there is a formula $\varphi \in \left[ \exists \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $ which defines $R$ in $\mathbb{P} _{\mathrm{fin}}(\mathcal{K})$. 
Now we can apply a similar argument to that used in the proof of Theorem \ref{openHorn}, to extract from $\varphi $ a formula of $\left[ \exists \bigwedge \mathrm{At}(\mathcal{L})\right] $ which defines $R$ in $\mathcal{K}$. The moreover part is left to the reader (see the proof of \ref{open horn para relaciones}). \end{proof} We state without proof the functional version of the above theorem. \begin{theorem} \label{caso existencial}Let $\mathcal{L}\subseteq \mathcal{L}^{\prime }$ be first order languages and let $f_{1},\ldots ,f_{m}\in \mathcal{L}^{\prime }- \mathcal{L}$ be $n$-ary function symbols. Let $\mathcal{K}$ be a class of $ \mathcal{L}^{\prime }$-structures. Then we have: \begin{enumerate} \item[(1)] The following are equivalent: \begin{enumerate} \item[(a)] There is a formula in $\left[ \exists \mathrm{Op}(\mathcal{L}) \right] $ which defines $\vec{f}$ in $\mathcal{K}$. \item[(b)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$ and all embeddings $\sigma :\mathbf{A}_{\mathcal{L}}\rightarrow \mathbf{B}_{ \mathcal{L}}$, we have that $\sigma :(\mathbf{A}_{\mathcal{L}},f_{1}^{ \mathbf{A}},\ldots ,f_{m}^{\mathbf{A}})\rightarrow (\mathbf{B}_{\mathcal{L} },f_{1}^{\mathbf{B}},\ldots ,f_{m}^{\mathbf{B}})$ is an embedding. \end{enumerate} \item[(2)] The following are equivalent: \begin{enumerate} \item[(a)] There is a formula in $\left[ \exists \mathrm{OpHorn}(\mathcal{L}) \right] $ which defines $\vec{f}$ in $\mathcal{K}$. \item[(b)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}\mathbb{P}_{ \mathrm{fin}}(\mathcal{K})$ and all embeddings $\sigma :\mathbf{A}_{\mathcal{ L}}\rightarrow \mathbf{B}_{\mathcal{L}}$, we have that $\sigma :(\mathbf{A}_{ \mathcal{L}},f_{1}^{\mathbf{A}},\ldots ,f_{m}^{\mathbf{A}})\rightarrow ( \mathbf{B}_{\mathcal{L}},f_{1}^{\mathbf{B}},\ldots ,f_{m}^{\mathbf{B}})$ is an embedding. 
\end{enumerate} \item[(3)] The following are equivalent: \begin{enumerate} \item[(a)] There is a formula in $\left[ \exists \bigvee \bigwedge \mathrm{At }(\mathcal{L})\right] $ which defines $\vec{f}$ in $\mathcal{K}$. \item[(b)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$ and all homomorphisms $\sigma :\mathbf{A}_{\mathcal{L}}\rightarrow \mathbf{B} _{\mathcal{L}}$, we have that $\sigma :(\mathbf{A}_{\mathcal{L}},f_{1}^{ \mathbf{A}},\ldots ,f_{m}^{\mathbf{A}})\rightarrow (\mathbf{B}_{\mathcal{L} },f_{1}^{\mathbf{B}},\ldots ,f_{m}^{\mathbf{B}})$ is a homomorphism. \end{enumerate} \item[(4)] The following are equivalent: \begin{enumerate} \item[(a)] There is a formula in $\left[ \exists \bigwedge \mathrm{At}( \mathcal{L})\right] $ which defines $\vec{f}$ in $\mathcal{K}$. \item[(b)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}\mathbb{P}_{ \mathrm{fin}}(\mathcal{K})$ and all homomorphisms $\sigma :\mathbf{A}_{ \mathcal{L}}\rightarrow \mathbf{B}_{\mathcal{L}}$, we have that $\sigma :( \mathbf{A}_{\mathcal{L}},f_{1}^{\mathbf{A}},\ldots ,f_{m}^{\mathbf{A} })\rightarrow (\mathbf{B}_{\mathcal{L}},f_{1}^{\mathbf{B}},\ldots ,f_{m}^{ \mathbf{B}})$ is a homomorphism. \end{enumerate} \end{enumerate} \noindent Moreover, if modulo isomorphism $\mathcal{K}$ is a finite class of finite structures, then we can remove the operator $\mathbb{P}_{u}$ from the (b) items. \end{theorem} For the case in which $\mathcal{K}=\{\mathbf{A}\}$, with $\mathbf{A}$ finite, (1) and (4) of Theorem \ref{caso existencial} are proved in \cite{kr} . Using (b)$\Rightarrow $(a) of (4) in Theorem \ref{caso existencial para relaciones} we can prove the following result of \cite{bu} (see the paragraph below Corollary \ref{aplicacion a dpc 3}). \begin{corollary} \label{aplicacion a dpc 4}Let $\mathcal{V}$ be a variety with definable principal congruences. Let $\mathcal{L}$ be the language of $\mathcal{V}$. 
The following are equivalent: \begin{enumerate} \item[(1)] There is a formula in $\left[ \exists \bigwedge \mathrm{At}( \mathcal{L})\right] $ which defines the principal congruences in $\mathcal{V}$. \item[(2)] $\mathcal{V}$ has the Fraser-Horn property. \end{enumerate} \end{corollary} \subsection*{Primitive positive functions} Functions definable in a finite algebra $\mathbf{A}$ by a formula of the form $\exists \dbigwedge p=q$ are called \emph{primitive positive functions}, and they have been extensively studied. For the case in which $\mathcal{K}=\{\mathbf{A}\}$ for some finite algebra $\mathbf{A}$, (4) of Theorem \ref{caso existencial} is a well-known result \cite{ge}. The translation results of Section 4 produce the following. \begin{proposition} \label{primitive positives para A con traduccion}Suppose $\mathbf{A}$ is a finite $\mathcal{L}$-algebra such that $\mathbb{S}(\mathbf{A})\subseteq \mathcal{Q}_{RFSI}$ (resp. $\mathbb{S}(\mathbf{A})\subseteq \mathcal{Q}_{RS}$), for some relatively congruence distributive quasivariety $\mathcal{Q}$. Let $f:A^{n}\rightarrow A$. The following are equivalent: \begin{enumerate} \item[(1)] $f$ is primitive positive. \item[(2)] If $\sigma :\mathbf{A}\rightarrow \mathbf{A}$ is a homomorphism (resp. isomorphism), then $\sigma :(\mathbf{A},f)\rightarrow (\mathbf{A},f)$ is a homomorphism (resp. isomorphism). \end{enumerate} \end{proposition} \begin{proof} (2)$\Rightarrow $(1) By (3) (resp. (1)) of Theorem \ref{caso existencial} we have that there is a formula $\exists \vec{u}\ \psi (\vec{u},\vec{x},z)$, with $\psi \in \left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $ (resp. $\left[ \mathrm{Op}(\mathcal{L})\right] $), which defines $f$ in $\mathbf{A}$. Since $\mathbb{IS}(\mathbf{A})$ is a universal class, Proposition \ref{traduccion} (resp. Corollary \ref{traduccion para simples}) says that there is a $\varphi \in \left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $ equivalent to $\psi $ over $\mathbb{IS}(\mathbf{A})$. 
Clearly $ \exists \vec{u}\ \varphi (\vec{u},\vec{x},z)$ defines $f$ in $\mathbf{A}$. \end{proof} \subsubsection{Primitive positive functions in Stone algebras} As an application of the results in Section \ref{Sec: definibilidad por atomicas} and the current section we characterize primitive positive functions and functions definable by formulas of the form $\dbigwedge p=q$ in Stone algebras. If $\mathbf{L}$ is a Stone algebra, let $\rightarrow ^{ \mathbf{L}}$ denote its Heyting implication, when it does exist. A \emph{ three valued Heyting algebra} is a Heyting algebra belonging to the variety generated by the three element Heyting algebra. \begin{proposition} \label{allgebraicas de stone algebras copy(1)}Let $\mathbf{L}=(L,\vee ,\wedge ,^{\ast },0,1)$ be a Stone algebra and let $f:L^{n}\rightarrow L$ be any function. Let $\mathcal{L}=\{\vee ,\wedge ,^{\ast },0,1\}$. Then: \begin{enumerate} \item[(1)] If the Heyting implication exists in $\mathbf{L}$, and $(L,\vee ,\wedge ,\rightarrow ^{\mathbf{L}},0,1)$ is a three valued Heyting algebra, then the following are equivalent: \begin{enumerate} \item[(a)] There is a formula in $\left[ \bigwedge \mathrm{At}(\mathcal{L}) \right] $ which defines $f$ in $\mathbf{L}$. \item[(b)] There is a formula in $\left[ \exists \bigwedge \mathrm{At}( \mathcal{L})\right] $ which defines $f$ in $\mathbf{L}$. \item[(c)] There is a term of the language $\{\vee ,\wedge ,\rightarrow ,^{\ast },0,1\}$ which represents $f$ in $\mathbf{L}$. \end{enumerate} \item[(2)] If $\mathbf{L}$ does not satisfy the hypothesis of (1), then the following are equivalent: \begin{enumerate} \item[(a)] There is a formula in $\left[ \bigwedge \mathrm{At}(\mathcal{L}) \right] $ which defines $f$ in $\mathbf{L}$. \item[(b)] There is a formula in $\left[ \exists \bigwedge \mathrm{At}( \mathcal{L})\right] $ which defines $f$ in $\mathbf{L}$. \item[(c)] There is an $\mathcal{L}$-term which represents $f$ in $\mathbf{L} $. 
\end{enumerate} \end{enumerate} \end{proposition} \begin{proof} Let $\mathbf{3}$ be the three element Stone algebra $(\{0,1/2,1\},\max ,\min ,^{\ast },0,1)$. Let $\mathbf{2}$ denote the subalgebra of $\mathbf{3}$ with universe $\{0,1\}$, i.e. $\mathbf{2}$ is the two element Boolean algebra. First we prove a series of claims. \noindent \textbf{Claim 1.} Let $f:\{0,1/2,1\}^{n}\rightarrow \{0,1/2,1\}$. The following are equivalent: \begin{enumerate} \item[(i)] There is a formula in $\left[ \bigwedge \mathrm{At}(\mathcal{L}) \right] $ which defines $f$ in $\mathbf{3}$. \item[(ii)] There is a formula in $\left[ \exists \bigwedge \mathrm{At}( \mathcal{L})\right] $ which defines $f$ in $\mathbf{3}$. \item[(iii)] $f$ is a term function of $(\mathbf{3},\rightarrow ^{\mathbf{3} })$. \end{enumerate} \noindent Proof. Since $^{\ast \ast }:\{0,1/2,1\}\rightarrow \{0,1/2,1\}$ is the only non trivial homomorphism between subalgebras of $\mathbf{3}$, Corollary \ref{algebraicas para A con traduccion} and Proposition \ref {primitive positives para A con traduccion} say that both (i) and (ii) are equivalent to \begin{enumerate} \item[(iv)] $f(x_{1},\ldots ,x_{n})^{\ast \ast }=f(x_{1}^{\ast \ast },\ldots ,x_{n}^{\ast \ast })$, for any $x_{1},\ldots ,x_{n}\in \{0,1/2,1\}$. \end{enumerate} \noindent Since $^{\ast \ast }:\{0,1/2,1\}\rightarrow \{0,1/2,1\}$ is a Heyting homomorphism, we have that (iii) implies (iv), and so (iii) implies (i) and (ii). Suppose that (iv) holds and that $f$ is not a term function of $(\mathbf{3},\rightarrow ^{\mathbf{3}})$. We will arrive at a contradiction. Note that (iv) implies that $\{0,1\}$ is closed under $f$. 
Since $f$ is not a term function, the Baker-Pixley Theorem says that at least one of the following subuniverses of $(\mathbf{3},\rightarrow ^{\mathbf{3}})\times (\mathbf{3},\rightarrow ^{\mathbf{3}})$ is not closed under $f\times f$, \begin{equation*} \begin{array}{l} S_{1}=\{(0,0),(1/2,1/2),(1,1/2),(1/2,1),(1,1)\}\text{,} \\ S_{2}=\{(0,0),(1/2,1),(1,1)\}\text{,} \\ S_{3}=\{(0,0),(1,1/2),(1,1)\}\text{.} \end{array} \end{equation*} (The other subuniverses of $(\mathbf{3},\rightarrow ^{\mathbf{3}})\times (\mathbf{3},\rightarrow ^{\mathbf{3}})$ are clearly closed under $f\times f$.) Suppose $S_{1}$ is not closed under $f\times f$. Note that $\mathbf{S}_{1}$ is generated by $\{(1,1/2),(1/2,1)\}$. So, we can suppose that $f$ is binary, satisfies (iv) and $(f(1,1/2),f(1/2,1))\notin S_{1}$. If $(f(1,1/2),f(1/2,1))=(0,1/2)$, then \begin{equation*} \begin{array}{lll} (0,1) & = & (0^{\ast \ast },1/2^{\ast \ast }) \\ & = & (f(1^{\ast \ast },1/2^{\ast \ast }),f(1/2^{\ast \ast },1^{\ast \ast })) \\ & = & (f(1,1),f(1,1)) \end{array} \end{equation*} which is absurd. The other cases are similar. If either $S_{2}$ or $S_{3}$ is not closed under $f\times f$, we can arrive at a contradiction in a similar manner. \noindent \textbf{Claim 2.} Let $f:\{0,1/2,1\}^{n}\rightarrow \{0,1/2,1\}$. If there is a formula $\varphi \in \left[ \exists \bigwedge \mathrm{At}( \mathcal{L})\right] $ which defines $f$ in $\mathbf{3}$ and $f$ is not a term function of $\mathbf{3}$, then $\rightarrow ^{\mathbf{3}}$ is a term function of $(\mathbf{3},f)$. \noindent Proof. By Claim 1, $f$ is a term function of $(\mathbf{3},\rightarrow ^{\mathbf{3}})$. 
By the Baker-Pixley Theorem, since $f$ is not a term function of $\mathbf{3}$ we have that \begin{equation*} S_{4}=\{(0,0),(1/2,1/2),(1/2,1),(1,1)\} \end{equation*} or \begin{equation*} S_{5}=\{(0,0),(1/2,1/2),(1,1/2),(1,1)\} \end{equation*} is not closed under $f\times f$ (the other subuniverses are Heyting subuniverses and hence they are closed under $f\times f$). But $\mathbf{S} _{4}$ and $\mathbf{S}_{5}$ are isomorphic and $f\times f$ is defined by $ \varphi $ in $\mathbf{3}\times \mathbf{3}$ which implies that both $S_{4}$ and $S_{5}$ are not closed under $f\times f$. Thus every subalgebra of $( \mathbf{3},f)\times (\mathbf{3},f)$ is a Heyting subalgebra which by the Baker-Pixley Theorem says that $\rightarrow ^{\mathbf{3}}$ is a term operation of $(\mathbf{3},f)$. \noindent \textbf{Claim 3.} If $\mathbf{L}$ is a Stone algebra which is a subdirect product of a family of Stone algebras $\{\mathbf{L}_{i}:i\in I\}$, and $\varphi \in \left[ \exists \bigwedge \mathrm{At}(\mathcal{L})\right] $ defines an $n$-ary function $f$ on $L$, then $\varphi $ defines a function $ f_{i}$ on each $L_{i}$, and $f$ is the restriction of $(f_{i})_{i\in I}$ to $ L^{n}$. \noindent Proof. Since $\mathbf{L}_{i}$ is a homomorphic image of $\mathbf{L} $, we have that $\mathbf{L}_{i}\vDash \exists z\varphi (\vec{x},z)$. Suppose $\varphi (\vec{x},z)=\exists \vec{w}\psi \left( \vec{w},\vec{x},z\right) $ with $\psi \in \left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $. Since every subquasivariety of the variety of Stone algebras is a variety and $ \mathbf{L}\vDash \psi (\vec{w},\vec{x},z_{1})\wedge \psi (\vec{w},\vec{x} ,z_{2})\rightarrow z_{1}=z_{2}$, we have that $\mathbf{L}_{i}\vDash \varphi ( \vec{x},z_{1})\wedge \varphi (\vec{x},z_{2})\rightarrow z_{1}=z_{2}$, which says that $\varphi $ defines a function on $L_{i}$. 
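Incidentally, the key fact used in the proof of Claim 1, namely that $^{\ast \ast }$ is the only non-trivial homomorphism between subalgebras of $\mathbf{3}$, is easy to confirm by exhaustive search. A short illustrative sketch (not part of the proof; the encoding of $\mathbf{3}$ as the chain $\{0,1/2,1\}$ with $\max $, $\min $ and the pseudocomplement is ours):

```python
from itertools import combinations, product

# The three-element Stone algebra 3 = ({0, 1/2, 1}, max, min, *, 0, 1);
# the pseudocomplement * sends 0 to 1 and both 1/2 and 1 to 0.
U3 = (0, 0.5, 1)
star = {0: 1, 0.5: 0, 1: 0}

def is_subuniverse(S):
    # Closure under max and min is automatic for any subset of a chain.
    return {0, 1} <= S and all(star[a] in S for a in S)

subs = [set(S) for k in (1, 2, 3) for S in combinations(U3, k)
        if is_subuniverse(set(S))]          # the subuniverses {0,1} and {0,1/2,1}

def homs(A, B):
    # All maps A -> B preserving max, min, *, 0 and 1.
    A = sorted(A)
    for vals in product(sorted(B), repeat=len(A)):
        h = dict(zip(A, vals))
        if (h[0] == 0 and h[1] == 1
                and all(h[star[a]] == star[h[a]] for a in A)
                and all(h[max(a, b)] == max(h[a], h[b]) for a in A for b in A)
                and all(h[min(a, b)] == min(h[a], h[b]) for a in A for b in A)):
            yield h

nontrivial = [h for A in subs for B in subs for h in homs(A, B)
              if any(h[x] != x for x in h)]
# Every non-trivial homomorphism is the double pseudocomplement x -> x**.
assert nontrivial and all(h == {0: 0, 0.5: 1, 1: 1} for h in nontrivial)
```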
\noindent \textbf{Claim 4.} If $\mathbf{A}\leq \mathbf{B}$, and $\varphi \in \left[ \exists \bigwedge \mathrm{At}(\mathcal{L})\right] $ defines $n$-ary functions $f$ on $\mathbf{A}$ and $g$ on $\mathbf{B}$, then $f$ is equal to the restriction of $g$ to $A^{n}$. \noindent Proof. Trivial. We are ready to prove (1). Suppose the Heyting implication exists in $ \mathbf{L}$, and $(L,\vee ,\wedge ,\rightarrow ^{\mathbf{L}},0,1)$ is a three valued Heyting algebra. Recall that $(\{0,1\},\max ,$ $\min ,\rightarrow ^{\mathbf{2}},0,1)$ and $(\{0,1/2,1\},\max ,\min ,\rightarrow ^{ \mathbf{3}},0,1)$ are the only subdirectly irreducible three valued Heyting algebras. Since $x^{\ast }=x\rightarrow ^{\mathbf{L}}0$ for every $x\in L$, we can suppose that \begin{equation*} (L,\vee ,\wedge ,^{\ast },\rightarrow ^{\mathbf{L}},0,1)\leq \prod \{\mathbf{ L}_{u}:u\in I\cup J\} \end{equation*} is a subdirect product, where $\mathbf{L}_{u}=(\mathbf{2},\rightarrow ^{ \mathbf{2}})$ for $u\in I$, and $\mathbf{L}_{u}=(\mathbf{3},\rightarrow ^{ \mathbf{3}})$ for $u\in J$. Suppose that $J\neq \emptyset $ and let $ u_{0}\in J$. The case $J=\emptyset $ is left to the reader. (a)$\Rightarrow $(b). This is clear. (b)$\Rightarrow $(c). By Claim 3, $\varphi $ defines a function $f_{u}$ on each $\mathbf{L}_{u}$ and $f$ is the restriction of $(f_{u})_{u\in I\cup J}$ to $L^{n}$. Note that $f_{u}=f_{v}$ whenever $u,v\in I$ or $u,v\in J$. By Claim 4, for every $u\in I$, the function $f_{u}$ is the restriction of $ f_{u_{0}}$ to $\{0,1\}^{n}$. By Claim 1, there is a $\{\vee ,\wedge ,^{\ast },\rightarrow ,0,1\}$-term $p$ such that $f_{u_{0}}=p^{\mathbf{L}_{u_{0}}}$. Note that $f_{u}=p^{\mathbf{L}_{u}}$, for every $u\in I\cup J$ and hence $ f=p^{(L,\vee ,\wedge ,^{\ast },\rightarrow ^{\mathbf{L}},0,1)}$. (c)$\Rightarrow $(a). Suppose $p$ is a $\{\vee ,\wedge ,^{\ast },\rightarrow ,0,1\}$-term such that $f=p^{(L,\vee ,\wedge ,^{\ast },\rightarrow ^{\mathbf{ L}},0,1)}$. 
By Claim 1, $p^{\mathbf{L}_{u_{0}}}$ is definable in $\mathbf{L} _{u_{0}}=(\mathbf{3},\rightarrow ^{\mathbf{3}})$ by a formula $\varphi \in \left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $. Note that, for every $ u\in $ $I\cup J$, we have that $p^{\mathbf{L}_{u}}$ is defined in $\mathbf{L} _{u}$ by $\varphi $. But $f$ is the restriction of $(p^{\mathbf{L} _{u}})_{u\in I\cup J}$ to $L$, which implies that $f$ is defined by $\varphi $ in $\mathbf{L}$. Next we prove (2). The implication (c)$\Rightarrow $(a) is immediate. In fact, if an $\mathcal{L}$-term $t\left( \vec{x}\right) $ represents $f$ in $ \mathbf{L}$, then the formula $z_{1}=t\left( \vec{x}\right) $ defines $f$ in $\mathbf{L}$. To show (b)$\Rightarrow $(c) we prove: \noindent \textbf{Claim 5. }Assume $f:L^{n}\rightarrow L$ is not representable by an $\mathcal{L}$-term in $\mathbf{L}$, and suppose $f$ is defined in $\mathbf{L}$ by a formula $\varphi \in \left[ \exists \bigwedge \mathrm{At}(\mathcal{L})\right] $. Then the Heyting implication exists in $ \mathbf{L}$, and $(L,\vee ,\wedge ,\rightarrow ^{\mathbf{L}},0,1)$ is a three valued Heyting algebra. To prove this claim we first note that since $\mathbf{2}$ and $\mathbf{3}$ are the only subdirectly irreducible Stone algebras we can suppose that \begin{equation*} \mathbf{L}\leq \prod \{\mathbf{L}_{u}:u\in I\cup J\} \end{equation*} is a subdirect product, where $\mathbf{L}_{u}=\mathbf{2}$ for $u\in I$, and $ \mathbf{L}_{u}=\mathbf{3}$ for $u\in J$. By Claim 3, $\varphi $ defines a function $f_{u}$ on each $\mathbf{L}_{u}$ and $f$ is the restriction of $ (f_{u})_{u\in I\cup J}$ to $L$. Note that $f_{u}=f_{v}$ whenever $u,v\in I$ or $u,v\in J$. Also, by Claim 4, the function $f_{u}$ is the restriction of $ f_{v}$ to $\{0,1\}$, whenever $u\in I$ and $v\in J$. If $J=\emptyset $, then since $\mathbf{2}$ is primal, there is an $\mathcal{L}$-term $p$ which represents $f_{u}$ in $\mathbf{2}$. 
But this is impossible since this implies that $f$ is representable by $p$ in $\mathbf{L}$. So $J\neq \emptyset $. Let $u_{0}\in J$. Since $f$ is not representable by an $ \mathcal{L}$-term in $\mathbf{L}$, we have that $f_{u_{0}}$ is not representable by an $\mathcal{L}$-term in $\mathbf{3}$. Thus Claim 2 implies that $\rightarrow ^{\mathbf{3}}$ is a term function of $(\mathbf{3} ,f_{u_{0}})$. Note that the same term witnesses that $\rightarrow ^{\mathbf{L }_{u}}$ is a term function of $(\mathbf{L}_{u},f_{u})$ for every $u\in I\cup J$. But $\mathbf{L}$ is closed under $(f_{u})_{u\in I\cup J}$, and hence we have that $\mathbf{L}$ is closed under $(\rightarrow ^{\mathbf{L} _{u}})_{u\in I\cup J}$. This says that the Heyting implication exists in $ \mathbf{L}$, and that $(L,\vee ,\wedge ,\rightarrow ^{\mathbf{L}},0,1)$ is a three valued Heyting algebra. \end{proof} Functions definable by a formula of the form $\dbigwedge p=q$ are called \emph{mono-algebraic}. They are studied in \cite{Ca-Va} using sheaf representations. \subsubsection{Primitive positive functions in De Morgan algebras} Let $\mathcal{L}=\{\vee ,\wedge ,^{-},0,1\}$ be the language of De Morgan algebras. Let $\mathbf{M}=(\{0,a,b,1\},\vee ,\wedge ,^{-},0,1)$ where $ (\{0,a,b,1\},\vee ,\wedge ,0,1)$ is the two-atom Boolean lattice and $\bar{0} =1$, $\bar{1}=0$, $\bar{a}=a$ and $\bar{b}=b$. It is well known that $ \mathbf{M}$ is a simple De Morgan algebra which generates the variety of all De Morgan algebras \cite{ba-dw}. Primitive positive functions of $\mathbf{M}$ can be characterized as follows. \begin{proposition} \label{pp de morgan}For any function $f:M^{n}\rightarrow M$, the following are equivalent \begin{enumerate} \item[(1)] There is a formula in $\left[ \bigwedge \mathrm{At}(\mathcal{L}) \right] $ which defines $f$ in $\mathbf{M}$. \item[(2)] There is a formula in $\left[ \exists \bigwedge \mathrm{At}( \mathcal{L})\right] $ which defines $f$ in $\mathbf{M}$. 
\item[(3)] $f$ is a term operation of $(\mathbf{M},^{\circ })$, where $0^{\circ }=0$, $1^{\circ }=1$, $a^{\circ }=b$ and $b^{\circ }=a$. \item[(4)] $f(x_{1},\ldots ,x_{n})^{\circ }=f(x_{1}^{\circ },\ldots ,x_{n}^{\circ })$, for any $x_{1},\ldots ,x_{n}\in M$. \end{enumerate} \end{proposition} \begin{proof} (1)$\Rightarrow $(2) This is trivial. (2)$\Rightarrow $(3) We note that \begin{equation*} ^{\circ }:\{0,a,b,1\}\rightarrow \{0,a,b,1\} \end{equation*} is an automorphism of $(\mathbf{M},^{\circ })$. Since assumption (2) implies that $f$ is preserved by $^{\circ }$, and $\{0,1\}$ is precisely the set of fixed points of $^{\circ }$, it follows that $\{0,1\}$ is closed under $f$. Also note that \begin{equation*} \overline{x}^{\circ }=\text{ Boolean complement of }x\text{,} \end{equation*} which says that $(\mathbf{M},^{\circ })$ generates an arithmetical variety. Since this algebra is simple and $\{0,1\}$ is its only proper subuniverse, Fleischer's theorem says that the subuniverses of $(\mathbf{M},^{\circ })\times (\mathbf{M},^{\circ })$ are: $\{0,1\}\times \{0,1\}$, $M\times M$, $\{(x,x^{\circ }):x\in M\}$, $\{(x,x):x\in M\}$ and $\{(0,0),(1,1)\}$. Each of these is easily seen to be closed under $f\times f$, which by the Baker-Pixley Theorem says that $f$ is a term function of $(\mathbf{M},^{\circ })$. (3)$\Rightarrow $(4) This is clear since $^{\circ }$ is an automorphism of $(\mathbf{M},^{\circ })$. (4)$\Rightarrow $(1) Note that $\mathbf{M}$ has three non-trivial inner isomorphisms, namely $^{\circ }$ and the restrictions of $^{\circ }$ to $\{0,a,1\}$ and $\{0,b,1\}$. Since the variety of De Morgan algebras is congruence distributive, Corollary \ref{algebraicas para A con traduccion} says that (1) holds. \end{proof} \section{Term interpolation} Given a structure $\mathbf{A}$, an interesting --albeit often elusive-- problem is to provide a useful description of its term-operations. That is, to give concise (semantical) conditions that characterize when a given function $f:A^{n}\rightarrow A$ is a term-operation. 
This is beautifully accomplished in the classical Baker-Pixley Theorem for the case in which $\mathbf{A}$ is finite and has a near-unanimity term \cite{ba-pi}. A natural way to generalize this problem to classes of structures is as follows. Given a class $\mathcal{K}$ of $\mathcal{L}$-structures and a map $\mathbf{A}\rightarrow f^{\mathbf{A}}$ which assigns to each $\mathbf{A}\in \mathcal{K}$ an $n$-ary operation $f^{\mathbf{A}}:A^{n}\rightarrow A$, provide conditions that guarantee the existence of a term $t$ such that $t^{\mathbf{A}}=f^{\mathbf{A}}$ on every $\mathbf{A}$ in $\mathcal{K}$. We address this problem for classes in the current section, and obtain some interesting results, including generalizations of the aforementioned Baker-Pixley Theorem and of Pixley's theorem characterizing the term-operations of quasiprimal algebras \cite{Pixley}. Another avenue of generalization we consider is that of functions interpolated by a finite number of terms; this, too, is treated in the setting of classes. \subsection*{Term-valued functions by cases} Let $f\in \mathcal{L}$ be a function symbol, and let $\mathcal{K}$ be a class of $\mathcal{L}$-structures. Given $\mathcal{L}$-terms $t_{1}(\vec{x}),\dots ,t_{k}(\vec{x})$ and first order $\mathcal{L}$-formulas $\varphi _{1}(\vec{x}),\dots ,\varphi _{k}(\vec{x})$, we write \begin{equation*} f=\left. t_{1}\right\vert _{\varphi _{1}}\cup \ldots \cup \left. t_{k}\right\vert _{\varphi _{k}}\text{ in }\mathcal{K} \end{equation*} to express that $\mathcal{K}\vDash \varphi _{1}(\vec{x})\vee \dots \vee \varphi _{k}(\vec{x})$ and \begin{equation*} f^{\mathbf{A}}(\vec{a})=\left\{ \begin{array}{cc} t_{1}^{\mathbf{A}}(\vec{a}) & \text{if }\mathbf{A}\vDash \varphi _{1}(\vec{a}) \\ \vdots & \vdots \\ t_{k}^{\mathbf{A}}(\vec{a}) & \text{if }\mathbf{A}\vDash \varphi _{k}(\vec{a}) \end{array} \right. \end{equation*} for all $\vec{a}\in A^{n}$ and $\mathbf{A}\in \mathcal{K}$. 
(Note that as $ f^{\mathbf{A}}$ is a function the definition by cases is not ambiguous.) We say that a term $t(\vec{x})$ \emph{represents }$f$ in $\mathcal{K}$ if $f^{ \mathbf{A}}(\vec{a})=t^{\mathbf{A}}(\vec{a})$, for all $\mathbf{A}\in \mathcal{K}$ and $\vec{a}\in A^{n}$. With the help of results from previous sections it is possible to characterize when $f=\left. t_{1}\right\vert _{\varphi _{1}}\cup \ldots \cup \left. t_{k}\right\vert _{\varphi _{k}}$, with the $t_{i}$'s not involving $ f $ and a fixed format for the $\varphi _{i}$'s. \begin{theorem} \label{term valued with open cases}Let $\mathcal{L}\subseteq \mathcal{L} ^{\prime }$ be first order languages and let $f\in \mathcal{L}^{\prime }- \mathcal{L}$ be an $n$-ary function symbol. Let $\mathcal{K}$ be a class of $ \mathcal{L}^{\prime }$-structures. The following are equivalent: \begin{enumerate} \item[(1)] $f=\left. t_{1}\right\vert _{\varphi _{1}}\cup \ldots \cup \left. t_{k}\right\vert _{\varphi _{k}}$ in $\mathcal{K}$, with each $t_{i}$ an $ \mathcal{L}$-term and each $\varphi _{i}$ in $\mathrm{Op}(\mathcal{L})$. \item[(2)] The following conditions hold: \begin{enumerate} \item[(a)] For all $\mathbf{A}\in \mathbb{P}_{u}(\mathcal{K})$ and all $ \mathbf{S}\leq \mathbf{A}_{\mathcal{L}}$, we have that $S$ is closed under $ f^{\mathbf{A}}$. \item[(b)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$, all $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{L}}$, and all isomorphisms $\sigma :\mathbf{A} _{0}\rightarrow \mathbf{B}_{0}$, we have that $\sigma :(\mathbf{A}_{0},f^{ \mathbf{A}}|_{A_{0}})\rightarrow (\mathbf{B}_{0},f^{\mathbf{B}}|_{B_{0}})$ is an isomorphism. \end{enumerate} \end{enumerate} \noindent Moreover, if $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $(n+1)$-generated substructures and each one is finite, then we can remove the operator $\mathbb{P}_{u}$ from (a) and (b). \end{theorem} \begin{proof} (1)$\Rightarrow $(2). 
If $f=\left. t_{1}\right\vert _{\varphi _{1}}\cup \ldots \cup \left. t_{k}\right\vert _{\varphi _{k}}$ in $\mathcal{K}$, with each $t_{i}$ an $\mathcal{L}$-term and each $\varphi _{i}$ in $\mathrm{Op}(\mathcal{L})$, then $f=\left. t_{1}\right\vert _{\varphi _{1}}\cup \ldots \cup \left. t_{k}\right\vert _{\varphi _{k}}$ in $\mathbb{P}_{u}(\mathcal{K})$. Now (a) and (b) are routine verifications. (2)$\Rightarrow $(1). We first note that (a) implies (since $f^{\mathbf{A}}(\vec{a})$ always belongs to the substructure of $\mathbf{A}_{\mathcal{L}}$ generated by $\vec{a}$) \begin{equation*} \mathbb{P}_{u}(\mathcal{K})\vDash \dbigvee_{t(\vec{x})\text{ an }\mathcal{L}\text{-term }}f(\vec{x})=t(\vec{x})\text{,} \end{equation*} which by compactness says that there are $\mathcal{L}$-terms $t_{1}(\vec{x}),\dots ,t_{k}(\vec{x})$ such that \begin{equation*} \mathbb{P}_{u}(\mathcal{K})\vDash f(\vec{x})=t_{1}(\vec{x})\vee \dots \vee f(\vec{x})=t_{k}(\vec{x})\text{.} \end{equation*} Since (b) holds, Theorem \ref{(positive) open} implies that there exists a formula $\varphi \in \mathrm{Op}(\mathcal{L})$ which defines $f$ in $\mathcal{K}$. It is clear that for any $\mathbf{A}\in \mathcal{K}$ we have \begin{equation*} \mathcal{K}\vDash \varphi (\vec{x},t_{1}(\vec{x}))\vee \dots \vee \varphi (\vec{x},t_{k}(\vec{x})) \end{equation*} and \begin{equation*} f^{\mathbf{A}}(\vec{a})=\left\{ \begin{array}{cc} t_{1}^{\mathbf{A}}(\vec{a}) & \text{if }\mathbf{A}\vDash \varphi (\vec{a},t_{1}(\vec{a})) \\ \vdots & \vdots \\ t_{k}^{\mathbf{A}}(\vec{a}) & \text{if }\mathbf{A}\vDash \varphi (\vec{a},t_{k}(\vec{a})) \end{array} \right. \end{equation*} for all $\vec{a}\in A^{n}$. So we have proved that $f=\left. t_{1}\right\vert _{\varphi _{1}}\cup \ldots \cup \left. t_{k}\right\vert _{\varphi _{k}}$ in $\mathcal{K}$, where $\varphi _{i}(\vec{x})=\varphi (\vec{x},t_{i}(\vec{x}))$, $i=1,\ldots ,k$. 
If $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $(n+1)$ -generated substructures and each one is finite, then we note that there are $\mathcal{L}$-terms $t_{1}(\vec{x}),\dots ,t_{k}(\vec{x})$ such that \begin{equation*} \mathcal{K}\vDash f(\vec{x})=t_{1}(\vec{x})\vee \dots \vee f(\vec{x})=t_{k}( \vec{x}) \end{equation*} and the proof can be continued in the same manner as above. \end{proof} \begin{corollary} Let $\mathcal{K}$ be any class of $\mathcal{L}$-algebras contained in a locally finite variety. Suppose $\mathbf{A}\rightarrow f^{\mathbf{A}}$ is a map which assigns to each $\mathbf{A}\in \mathcal{K}$ an $n$-ary operation $ f^{\mathbf{A}}:A^{n}$ $\rightarrow A$. The following are equivalent: \begin{enumerate} \item[(1)] $f=\left. t_{1}\right\vert _{\varphi _{1}}\cup \ldots \cup \left. t_{k}\right\vert _{\varphi _{k}}$ in $\mathcal{K}$, with each $t_{i}$ an $ \mathcal{L}$-term and each $\varphi _{i}$ in $\mathrm{Op}(\mathcal{L})$. \item[(2)] The following conditions hold: \begin{enumerate} \item[(a)] If $\mathbf{S}\leq \mathbf{A}\in \mathcal{K}$, then $S$ is closed under $f^{\mathbf{A}}$. \item[(b)] If $\mathbf{A}_{0}\leq \mathbf{A}\in \mathcal{K}$, $\mathbf{B} _{0}\leq \mathbf{B}\in \mathcal{K}$ and $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$ is an isomorphism, then $\sigma :(\mathbf{A}_{0},f^{\mathbf{A }})\rightarrow (\mathbf{B}_{0},f^{\mathbf{B}})$ is an isomorphism. \end{enumerate} \end{enumerate} \end{corollary} Using Theorem \ref{term valued with open cases} and its proof as a blueprint it is easy to produce analogous results for other families of formulas. For example here is the positive open case. \begin{theorem} \label{term valued with positive cases}Let $\mathcal{L}\subseteq \mathcal{L} ^{\prime }$ be first order languages and let $f\in \mathcal{L}^{\prime }- \mathcal{L}$ be an $n$-ary function symbol. Let $\mathcal{K}$ be a class of $ \mathcal{L}^{\prime }$-structures. 
The following are equivalent: \begin{enumerate} \item[(1)] $f=\left. t_{1}\right\vert _{\varphi _{1}}\cup \ldots \cup \left. t_{k}\right\vert _{\varphi _{k}}$ in $\mathcal{K}$, with each $t_{i}$ an $\mathcal{L}$-term and each $\varphi _{i}$ in $\left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $. \item[(2)] $f=\left. t_{1}\right\vert _{\varphi _{1}}\cup \ldots \cup \left. t_{k}\right\vert _{\varphi _{k}}$ in $\mathcal{K}$, with each $t_{i}$ an $\mathcal{L}$-term and each $\varphi _{i}$ in $\left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $. \item[(3)] The following conditions hold: \begin{enumerate} \item[(a)] For all $\mathbf{A}\in \mathbb{P}_{u}(\mathcal{K})$ and all $\mathbf{S}\leq \mathbf{A}_{\mathcal{L}}$, we have that $S$ is closed under $f^{\mathbf{A}}$. \item[(b)] For all $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$, all $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{L}}$, and all homomorphisms $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$, we have that $\sigma :(\mathbf{A}_{0},f^{\mathbf{A}})\rightarrow (\mathbf{B}_{0},f^{\mathbf{B}})$ is a homomorphism. \end{enumerate} \end{enumerate} \noindent Moreover, if $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $(n+1)$-generated substructures and each one is finite, then we can remove the operator $\mathbb{P}_{u}$ from (a) and (b). \end{theorem} \subsection*{Pixley's theorem for classes} Recall that the \emph{ternary discriminator} on a set $A$ is the function \begin{equation*} d^{\mathbf{A}}(x,y,z)=\left\{ \begin{tabular}{ll} $z$ & \ \ if $x=y$, \\ $x$ & \ \ if $x\neq y$. \end{tabular} \ \right. \end{equation*} An algebra $\mathbf{A}$ is called \textit{quasiprimal} if it is finite and has the discriminator as a term function. A well-known result of A. Pixley \cite{Pixley} characterizes quasiprimal algebras as those finite algebras in which every function preserving the inner isomorphisms is a term function. 
Of course the ternary discriminator preserves the inner isomorphisms and hence one direction of the theorem is trivial. The following theorem generalizes the nontrivial direction. \begin{theorem} \label{generalizacion imp no trivial de pixley}Let $\mathcal{L}\subseteq \mathcal{L}^{\prime }$ be first order languages without relation symbols and let $f\in \mathcal{L}^{\prime }-\mathcal{L}$ be an $n$-ary function symbol. Let $\mathcal{K}$ be a class of $\mathcal{L}^{\prime }$-algebras such that there is an $\mathcal{L}$-term representing the ternary discriminator in each member of $\mathcal{K}$. Suppose that \begin{enumerate} \item[(a)] If $\mathbf{A}\in \mathbb{P}_{u}(\mathcal{K})$ and $\mathbf{S}\leq \mathbf{A}_{\mathcal{L}}$, then $S$ is closed under $f^{\mathbf{A}}$. \item[(b)] If $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$, $\mathbf{A}_{0}\leq \mathbf{A}_{\mathcal{L}}$, $\mathbf{B}_{0}\leq \mathbf{B}_{\mathcal{L}}$ and $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$ is an isomorphism, then $\sigma :(\mathbf{A}_{0},f^{\mathbf{A}})\rightarrow (\mathbf{B}_{0},f^{\mathbf{B}})$ is an isomorphism. \end{enumerate} \noindent Then $f$ is representable by an $\mathcal{L}$-term in $\mathcal{K}$. Moreover, if $\mathcal{K}_{\mathcal{L}}$ has finitely many isomorphism types of $(n+1)$-generated subalgebras and each one is finite, then we can remove the operator $\mathbb{P}_{u}$ from (a) and (b). \end{theorem} \begin{proof} By Theorem \ref{term valued with open cases} we have that $f=\left. t_{1}\right\vert _{\varphi _{1}}\cup \ldots \cup \left. t_{k}\right\vert _{\varphi _{k}}$ in $\mathcal{K}$, with each $t_{i}$ an $\mathcal{L}$-term and each $\varphi _{i}$ in $\mathrm{Op}(\mathcal{L})$. We shall prove that $f$ is representable by an $\mathcal{L}$-term in $\mathcal{K}$. Of course, if $k=1$, the theorem follows. Suppose $k>1$. We show that we can reduce $k$. 
Let $t\left( x,y,z\right) $ be an $\mathcal{L}$-term representing the discriminator in $\mathcal{K}$. Then the $\mathcal{L}$-term \begin{equation*} D(x,y,z,w)=t\left( t\left( x,y,z\right) ,t\left( x,y,w\right) ,w\right) \end{equation*} represents the \textit{quaternary discriminator} in $\mathcal{K}$, that is, for every $a,b,c,d\in A$, with $\mathbf{A}\in \mathcal{K}$, \begin{equation*} D^{\mathbf{A}}(a,b,c,d)=\left\{ \begin{tabular}{ll} $c$ & \ \ if $a=b$, \\ $d$ & \ \ if $a\neq b$. \end{tabular} \ \right. \end{equation*} Having a discriminator term for $\mathcal{K}$ also provides the following translation property (see \cite{we}): \begin{itemize} \item For every open $\mathcal{L}$-formula $\varphi (\vec{x})$ there exist $\mathcal{L}$-terms $p(\vec{x})$ and $q(\vec{x})$ such that either $\mathcal{K}\vDash \varphi (\vec{x})\leftrightarrow p(\vec{x})=q(\vec{x})$ or $\mathcal{K}\vDash \varphi (\vec{x})\leftrightarrow p(\vec{x})\neq q(\vec{x})$. \end{itemize} Now, suppose for example that \begin{equation*} \begin{tabular}{l} $\mathcal{K}\vDash \varphi _{1}(\vec{x})\leftrightarrow p_{1}(\vec{x})\neq q_{1}(\vec{x})$, \\ $\mathcal{K}\vDash \varphi _{2}(\vec{x})\leftrightarrow p_{2}(\vec{x})\neq q_{2}(\vec{x})$, \end{tabular} \end{equation*} for some $\mathcal{L}$-terms $p_{i}(\vec{x})$ and $q_{i}(\vec{x})$. Then \begin{equation*} f^{\mathbf{A}}(\vec{a})=\left\{ \begin{array}{ll} D(p_{1},q_{1},t_{2},t_{1})^{\mathbf{A}}(\vec{a}) & \ \ \ \text{if }\mathbf{A}\vDash \varphi (\vec{a}) \\ t_{3}^{\mathbf{A}}(\vec{a}) & \ \ \ \text{if }\mathbf{A}\vDash \varphi _{3}(\vec{a}) \\ \ \ \ \vdots & \ \ \ \ \ \ \ \ \ \ \ \vdots \\ t_{k}^{\mathbf{A}}(\vec{a}) & \ \ \ \text{if }\mathbf{A}\vDash \varphi _{k}(\vec{a}) \end{array} \right. \end{equation*} where $\varphi (\vec{x})=p_{1}(\vec{x})\neq q_{1}(\vec{x})\vee \left( p_{1}(\vec{x})=q_{1}(\vec{x})\wedge p_{2}(\vec{x})\neq q_{2}(\vec{x})\right) $. The other cases are similar. 
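(As a purely computational aside, not part of the proof: the identity defining $D$ is easy to check by brute force. A small Python sketch of our own, with `d` standing for the interpretation of the discriminator term $t$:)

```python
def d(x, y, z):
    # ternary discriminator: returns z if x = y, and x otherwise
    return z if x == y else x

def D(x, y, z, w):
    # quaternary discriminator built from the ternary one:
    # D(x, y, z, w) = d(d(x, y, z), d(x, y, w), w)
    return d(d(x, y, z), d(x, y, w), w)

# Check D(a, b, c, e) = c if a = b, and e otherwise, over a small base set.
S = range(3)
assert all(D(a, b, c, e) == (c if a == b else e)
           for a in S for b in S for c in S for e in S)
```

Indeed, if $x=y$ the inner applications return $z$ and $w$, and $d(z,w,w)=z$ whether or not $z=w$; if $x\neq y$ both inner applications return $x$, and $d(x,x,w)=w$.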
\end{proof} For the locally finite case Pixley's theorem can be generalized as follows. \begin{theorem} \label{Pixley}Let $\mathcal{K}$ be a class of $\mathcal{L}$-algebras contained in a locally finite variety. The following are equivalent: \begin{enumerate} \item[(1)] There is an $\mathcal{L}$-term representing the ternary discriminator on each member of $\mathcal{K}$. \item[(2)] Assume $\mathbf{A}\rightarrow f^{\mathbf{A}}$ is a map which assigns to each $\mathbf{A}\in \mathcal{K}$ an $n$-ary operation $f^{\mathbf{A}}:A^{n}\rightarrow A$ in such a manner that: \begin{enumerate} \item[(a)] for all $\mathbf{A}\in \mathcal{K}$ and all $\mathbf{S}\leq \mathbf{A}$ we have that $S$ is closed under $f^{\mathbf{A}}$, and \item[(b)] for all $\mathbf{A},\mathbf{B}\in \mathcal{K}$, all $\mathbf{A}_{0}\leq \mathbf{A}$, $\mathbf{B}_{0}\leq \mathbf{B}$, and every isomorphism $\sigma :\mathbf{A}_{0}\rightarrow \mathbf{B}_{0}$ we have that $\sigma :(\mathbf{A}_{0},f^{\mathbf{A}})\rightarrow (\mathbf{B}_{0},f^{\mathbf{B}})$ is an isomorphism. \end{enumerate} \noindent Then $f$ is representable by an $\mathcal{L}$-term in $\mathcal{K}$. \end{enumerate} \end{theorem} \begin{proof} (1)$\Rightarrow $(2). This follows from Theorem \ref{generalizacion imp no trivial de pixley} applied to the class $\{(\mathbf{A},f^{\mathbf{A}}):\mathbf{A}\in \mathcal{K}\}$. (2)$\Rightarrow $(1). It is clear that the map $\mathbf{A}\rightarrow d^{\mathbf{A}}$ which assigns to each $\mathbf{A}\in \mathcal{K}$ the ternary discriminator on $\mathbf{A}$, satisfies (a) and (b) of (2). \end{proof} \subsection*{Baker-Pixley's theorem for classes} Let $\mathcal{K}$ be a class of $\mathcal{L}$-algebras. 
An $\mathcal{L}$ -term $M\left( x,y,z\right) $ is called a \textit{majority term for} $ \mathcal{K}$ if the following identities hold in $\mathcal{K}$ \begin{equation*} M(x,x,y)\approx M(x,y,x)\approx M(y,x,x)\approx x\text{.} \end{equation*} Next, we shall give a generalization of the well known theorem of Baker and Pixley \cite{ba-pi} on the existence of terms representing functions in finite algebras with a majority term. First a lemma. \begin{lemma} \label{existencia de terminos en general}Let $\mathcal{K}$ be a class of $ \mathcal{L}$-algebras contained in a locally finite variety. Let $\mathbf{A} \rightarrow f^{\mathbf{A}}$ be a map which assigns to each $\mathbf{A}\in \mathcal{K}$ an $n$-ary operation $f^{\mathbf{A}}:A^{n}$ $\rightarrow A$. Suppose that for any $m\in \mathbb{N}$, $\mathbf{A}_{1},\dots ,\mathbf{A} _{m}\in \mathcal{K}$ and $\mathbf{S}\leq \mathbf{A}_{1}\times \ldots \times \mathbf{A}_{m}$, we have that $S$ is closed under $f^{\mathbf{A}_{1}}\times \ldots \times f^{\mathbf{A}_{m}}$. Then $f$ is representable by an $\mathcal{ L}$-term in $\mathcal{K}$. \end{lemma} \begin{proof} Let $t_{1}(\vec{x}),\dots ,t_{k}(\vec{x})$ be $\mathcal{L}$-terms such that for every $\mathbf{A}$ in the variety generated by $\mathcal{K}$ and every $ \vec{a}\in A^{n}$ we have that the subalgebra of $\mathbf{A}$ generated by $ a_{1},\ldots ,a_{n}$ has universe $\{t_{1}^{\mathbf{A}}(\vec{a}),\ldots ,t_{k}^{\mathbf{A}}(\vec{a})\}$. We prove that $f$ is representable by $ t_{i} $ in $\mathcal{K}$, for some $i$. Suppose to the contrary that for each $i$ there are $\mathbf{A}_{i}\in \mathcal{K}$ and $\vec{a} ^{i}=(a_{1}^{i},\ldots ,a_{n}^{i})\in A_{i}^{n}$ such that $f^{\mathbf{A} _{i}}(\vec{a}^{i})\neq t_{i}^{\mathbf{A}_{i}}(\vec{a}^{i})$. Let $\mathbf{S}$ be the subalgebra of $\mathbf{A}_{1}\times \ldots \times \mathbf{A}_{k}$ generated by $\{p_{j}:j=1,\ldots ,n\}$, where $p_{j}=(a_{j}^{1},a_{j}^{2}, \ldots ,a_{j}^{k})$. 
Since $S$ is closed under $f^{\mathbf{A}_{1}}\times \ldots \times f^{\mathbf{A}_{k}}$ we have that \begin{equation*} f^{\mathbf{A}_{1}}\times \ldots \times f^{\mathbf{A}_{k}}(p_{1},\ldots ,p_{n})=(f^{\mathbf{A}_{1}}(\vec{a}^{1}),\ldots ,f^{\mathbf{A}_{k}}(\vec{a}^{k}))\in S\text{.} \end{equation*} Thus there is $i$ such that \begin{equation*} (f^{\mathbf{A}_{1}}(\vec{a}^{1}),\ldots ,f^{\mathbf{A}_{k}}(\vec{a}^{k}))=t_{i}^{\mathbf{A}_{1}\times \ldots \times \mathbf{A}_{k}}(p_{1},\ldots ,p_{n})\text{,} \end{equation*} which produces \begin{equation*} (f^{\mathbf{A}_{1}}(\vec{a}^{1}),\ldots ,f^{\mathbf{A}_{k}}(\vec{a}^{k}))=(t_{i}^{\mathbf{A}_{1}}(\vec{a}^{1}),\ldots ,t_{i}^{\mathbf{A}_{k}}(\vec{a}^{k}))\text{.} \end{equation*} In particular we have that $f^{\mathbf{A}_{i}}(\vec{a}^{i})=t_{i}^{\mathbf{A}_{i}}(\vec{a}^{i})$, a contradiction. \end{proof} \begin{theorem} \label{Baker-Pixley para clases caso localmente finitas}Let $\mathcal{K}$ be a class of algebras contained in a locally finite variety and suppose that $\mathcal{K}$ has a majority term. Let $\mathbf{A}\rightarrow f^{\mathbf{A}}$ be a map which assigns to each $\mathbf{A}\in \mathcal{K}$ an $n$-ary operation $f^{\mathbf{A}}:A^{n}\rightarrow A$. Assume that for all $\mathbf{A},\mathbf{B}\in \mathcal{K}$ and every $\mathbf{S}\leq \mathbf{A}\times \mathbf{B}$ we have that $S$ is closed under $f^{\mathbf{A}}\times f^{\mathbf{B}}$. Then $f$ is representable by a term in $\mathcal{K}$. \end{theorem} \begin{proof} First we show that: \begin{enumerate} \item[(i)] If $\mathbf{A},\mathbf{B}\in \mathcal{K}$, $\vec{a}\in A^{n}$ and $\vec{b}\in B^{n}$, then there is a term $t(\vec{x})$ satisfying $t^{\mathbf{A}}(\vec{a})=f^{\mathbf{A}}(\vec{a})$ and $t^{\mathbf{B}}(\vec{b})=f^{\mathbf{B}}(\vec{b})$. \end{enumerate} \noindent Let $\mathbf{S}$ be the subalgebra of $\mathbf{A}\times \mathbf{B}$ generated by $\{(a_{1},b_{1}),\ldots ,(a_{n},b_{n})\}$. 
Since $S$ is closed under $f^{\mathbf{A}}\times f^{\mathbf{B}}$ we have that \begin{equation*} f^{\mathbf{A}}\times f^{\mathbf{B}}((a_{1},b_{1}),\ldots ,(a_{n},b_{n}))=(f^{\mathbf{A}}(\vec{a}),f^{\mathbf{B}}(\vec{b}))\in S\text{.} \end{equation*} Thus there is a term $t(\vec{x})$ such that \begin{equation*} (f^{\mathbf{A}}(\vec{a}),f^{\mathbf{B}}(\vec{b}))=t^{\mathbf{A}\times \mathbf{B}}((a_{1},b_{1}),\ldots ,(a_{n},b_{n}))=(t^{\mathbf{A}}(\vec{a}),t^{\mathbf{B}}(\vec{b}))\text{.} \end{equation*} Next we prove by induction on $m$ that: \begin{enumerate} \item[(I$_{m}$)] If $\mathbf{A}_{1},\dots ,\mathbf{A}_{m}\in \mathcal{K}$ and $\vec{a}_{j}\in A_{j}^{n}$, for $j=1,\ldots ,m$, then there is a term $t(\vec{x})$ satisfying $t^{\mathbf{A}_{j}}(\vec{a}_{j})=f^{\mathbf{A}_{j}}(\vec{a}_{j})$, for $j=1,\ldots ,m$. \end{enumerate} \noindent By (i) we have that (I$_{m}$) holds for $m=1,2$. Fix $m\geq 3$, $\mathbf{A}_{1},\dots ,\mathbf{A}_{m}\in \mathcal{K}$ and $\vec{a}_{j}\in A_{j}^{n}$, for $j=1,\ldots ,m$. By inductive hypothesis there are terms $t_{1}$, $t_{2}$ and $t_{3}$ satisfying \begin{equation*} \begin{tabular}{l} $t_{1}^{\mathbf{A}_{j}}(\vec{a}_{j})=f^{\mathbf{A}_{j}}(\vec{a}_{j})$, for all $j\neq 1$, $1\leq j\leq m$, \\ $t_{2}^{\mathbf{A}_{j}}(\vec{a}_{j})=f^{\mathbf{A}_{j}}(\vec{a}_{j})$, for all $j\neq 2$, $1\leq j\leq m$, \\ $t_{3}^{\mathbf{A}_{j}}(\vec{a}_{j})=f^{\mathbf{A}_{j}}(\vec{a}_{j})$, for all $j\neq 3$, $1\leq j\leq m$. \end{tabular} \end{equation*} It is easy to check that $t=M(t_{1},t_{2},t_{3})$ satisfies \begin{equation*} t^{\mathbf{A}_{j}}(\vec{a}_{j})=f^{\mathbf{A}_{j}}(\vec{a}_{j})\text{, for }j=1,\ldots ,m\text{.} \end{equation*} We observe that the fact that (I$_{m}$) holds for every $m\geq 1$ implies that the hypothesis of Lemma \ref{existencia de terminos en general} holds and hence $f$ is representable by a term in $\mathcal{K}$. 
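(To see the patching step at work concretely, one may take for $M$ the median operation on a chain, a standard majority term, and observe that at each index at least two of $t_{1},t_{2},t_{3}$ agree with $f$, so the majority operation selects the common value. An illustrative Python sketch of our own, with hypothetical value tables standing in for the $t_{i}^{\mathbf{A}_{j}}(\vec{a}_{j})$:)

```python
def M(x, y, z):
    # median of three on a chain: a majority operation satisfying
    # M(x, x, y) = M(x, y, x) = M(y, x, x) = x
    return sorted([x, y, z])[1]

# the majority identities hold
assert all(M(x, x, y) == M(x, y, x) == M(y, x, x) == x
           for x in range(4) for y in range(4))

# The induction step: t_i agrees with f at every index except possibly i,
# so at each index at least two of t1, t2, t3 take the value of f there,
# and the median picks out exactly that value.
f  = [5, 7, 2, 9]   # target values f(a_j) at indices j = 0..3
t1 = [0, 7, 2, 9]   # may fail only at index 0
t2 = [5, 1, 2, 9]   # may fail only at index 1
t3 = [5, 7, 3, 9]   # may fail only at index 2
t  = [M(a, b, c) for a, b, c in zip(t1, t2, t3)]
assert t == f
```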
\end{proof} We conclude the section with another term-interpolation result in the spirit of the Baker-Pixley Theorem -- in this case, for classes contained in arithmetical varieties having a universal class of finitely subdirectly irreducibles. There are plenty of well-known examples of this kind of variety; we list a few: f-rings, vector groups, MV-algebras, Heyting algebras, discriminator varieties, etc. In our proof we use the notion of a \emph{global subdirect product}, which is a special kind of subdirect product, tight enough so that significant information can be obtained from the properties of the factors. We do not provide the definition here but refer the reader to \cite{kr-cl}. We write $\mathcal{V}_{FSI}$ to denote the class of finitely subdirectly irreducible members of a variety $\mathcal{V}$. \begin{lemma} \label{existencia de terminos en variedades aritmeticas}Let $\mathcal{V}$ be an arithmetical variety of $\mathcal{L}$-algebras and suppose that $\mathcal{V}_{FSI}$ is universal. If $\mathcal{V}_{FSI}\vDash \forall \vec{x}\exists !z\ \varphi (\vec{x},z)$, where $\varphi \in \left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $, then there exists an $\mathcal{L}$-term $t(\vec{x})$ such that $\mathcal{V}\vDash \forall \vec{x}\ \varphi (\vec{x},t(\vec{x}))$. \end{lemma} \begin{proof} By \cite{gr-va} every algebra of $\mathcal{V}$ is isomorphic to a global subdirect product whose factors are finitely subdirectly irreducible. Since global subdirect products preserve $(\forall \exists !\bigwedge p=q)$-sentences (see \cite{vo}), we have that $\mathcal{V}\vDash \forall \vec{x}\exists !z\ \varphi (\vec{x},z)$. Let $\mathbf{F}$ be the algebra of $\mathcal{V}$ freely generated by $x_{1},\ldots ,x_{n}$. Since $\mathbf{F}\vDash \exists !z\ \varphi (\vec{x},z)$, there exists a term $t(\vec{x})$ such that $\mathbf{F}\vDash \varphi (\vec{x},t(\vec{x}))$. It is easy to check that $\mathcal{V}\vDash \forall \vec{x}\ \varphi (\vec{x},t(\vec{x}))$. 
\end{proof} For a class of $\mathcal{L}$-algebras $\mathcal{K}$ let $\mathbb{V}(\mathcal{K})$ denote the \emph{variety generated by} $\mathcal{K}$. \begin{theorem} \label{Baker-Pixley para aritmeticas}Let $\mathcal{L}\subseteq \mathcal{L}^{\prime }$ be first order languages without relation symbols and let $f\in \mathcal{L}^{\prime }-\mathcal{L}$ be an $n$-ary function symbol. Let $\mathcal{K}$ be a class of $\mathcal{L}^{\prime }$-algebras satisfying: \begin{enumerate} \item[(a)] $\mathbb{V}(\mathcal{K}_{\mathcal{L}})$ is arithmetical and $\mathbb{V}(\mathcal{K}_{\mathcal{L}})_{FSI}$ is a universal class. \item[(b)] If $\mathbf{A},\mathbf{B}\in \mathbb{P}_{u}(\mathcal{K})$ and $\mathbf{S}\leq \mathbf{A}_{\mathcal{L}}\times \mathbf{B}_{\mathcal{L}}$, then $S$ is closed under $f^{\mathbf{A}}\times f^{\mathbf{B}}$. \end{enumerate} \noindent Then $f$ is representable by an $\mathcal{L}$-term in $\mathcal{K}$. \end{theorem} \begin{proof} W.l.o.g. we can suppose that $\mathcal{L}^{\prime }=\mathcal{L}\cup \{f\}$. As in the proof of Theorem \ref{Baker-Pixley para clases caso localmente finitas} we can see that given $\mathbf{A},\mathbf{B}\in \mathbb{SP}_{u}(\mathcal{K})$, $\vec{a}\in A^{n}$ and $\vec{b}\in B^{n}$, there is an $\mathcal{L}$-term $t(\vec{x})$ satisfying $t^{\mathbf{A}}(\vec{a})=f^{\mathbf{A}}(\vec{a})$ and $t^{\mathbf{B}}(\vec{b})=f^{\mathbf{B}}(\vec{b})$. We establish this property in a wider class. \begin{enumerate} \item[(i)] If $\mathbf{A},\mathbf{B}\in \mathbb{HSP}_{u}(\mathcal{K})$, $\vec{a}\in A^{n}$ and $\vec{b}\in B^{n}$, then there is an $\mathcal{L}$-term $t(\vec{x})$ such that $t^{\mathbf{A}}(\vec{a})=f^{\mathbf{A}}(\vec{a})$ and $t^{\mathbf{B}}(\vec{b})=f^{\mathbf{B}}(\vec{b})$. \end{enumerate} \noindent Take $\mathbf{A},\mathbf{B}\in \mathbb{HSP}_{u}(\mathcal{K})$, and fix $\vec{a}\in A^{n}$ and $\vec{b}\in B^{n}$. 
There are $\mathbf{A}_{1}\in \mathbb{S}\mathbb{P}_{u}(\mathcal{K})$, $\mathbf{B}_{1}\in \mathbb{SP}_{u}( \mathcal{K})$ and onto homomorphisms $F:\mathbf{A}_{1}\rightarrow \mathbf{A}$ and $G:\mathbf{B}_{1}\rightarrow \mathbf{B}$. Let $\vec{c}\in A_{1}^{n}$ and $\vec{d}\in B_{1}^{n}$ be such that $F(\vec{c})=\vec{a}$ and $G(\vec{d})= \vec{b}$. Let $t(\vec{x})$ be an $\mathcal{L}$-term such that $t^{\mathbf{A} _{1}}(\vec{c})=f^{\mathbf{A}_{1}}(\vec{c})$ and $t^{\mathbf{B}_{1}}(\vec{d} )=f^{\mathbf{B}_{1}}(\vec{d})$. Thus, we have that \begin{eqnarray*} t^{\mathbf{A}}(\vec{a}) &=&t^{\mathbf{A}}(F(\vec{c})) \\ &=&F(t^{\mathbf{A}_{1}}(\vec{c})) \\ &=&F(f^{\mathbf{A}_{1}}(\vec{c})) \\ &=&f^{\mathbf{A}}(F(\vec{c})) \\ &=&f^{\mathbf{A}}(\vec{a})\text{,} \end{eqnarray*} and similarly, $t^{\mathbf{B}}(\vec{b})=f^{\mathbf{B}}(\vec{b})$. Next we prove that \begin{enumerate} \item[(ii)] $\func{Con}\mathbf{A}=\func{Con}\mathbf{A}_{\mathcal{L}}$, for every $\mathbf{A}\in \mathbb{HSP}_{u}(\mathcal{K})$. \end{enumerate} \noindent Let $\mathbf{A}\in \mathbb{HSP}_{u}(\mathcal{K})$ and $\theta \in \func{Con}\mathbf{A}_{\mathcal{L}}$. We show that $\theta $ is compatible with $f$. Suppose $\vec{a},\vec{b}\in A^{n}$ are such that $\vec{a}\ \theta \ \vec{b}$. By (i) we have an $\mathcal{L}$-term $t\left( \vec{x}\right) $ such that $t^{\mathbf{A}}(\vec{a})=f^{\mathbf{A}}(\vec{a})$ and $t^{\mathbf{A }}(\vec{b})=f^{\mathbf{A}}(\vec{b})$. Clearly \begin{equation*} f^{\mathbf{A}}(\vec{a})=t^{\mathbf{A}}(\vec{a})\ \theta \ t^{\mathbf{A}}( \vec{b})=f^{\mathbf{A}}(\vec{b})\text{.} \end{equation*} Now we shall see that \begin{enumerate} \item[(iii)] $\mathbb{HSP}_{u}(\mathcal{K}_{\mathcal{L}})\subseteq \left( \mathbb{HSP}_{u}(\mathcal{K})\right) _{\mathcal{L}}$. 
\end{enumerate} \noindent It is always the case that $\mathbb{P}_{u}(\mathcal{K}_{\mathcal{L} })=\mathbb{P}_{u}(\mathcal{K})_{\mathcal{L}}$, and (i) implies that $ \mathcal{L}$-subreducts of algebras in $\mathbb{P}_{u}(\mathcal{K})$ are closed under $f$. Thus $\mathbb{SP}_{u}(\mathcal{K}_{\mathcal{L}})\subseteq ( \mathbb{SP}_{u}(\mathcal{K}))_{\mathcal{L}}$, and it only remains to see that $\mathbb{H}(\mathbb{SP}_{u}(\mathcal{K})_{\mathcal{L}})\subseteq \left( \mathbb{HSP}_{u}(\mathcal{K})\right) _{\mathcal{L}}$, which is immediate by (ii). By J\'{o}nsson's lemma we have that $\mathbb{V}(\mathcal{K}_{\mathcal{L} })_{FSI}\subseteq \mathbb{HSP}_{u}(\mathcal{K}_{\mathcal{L}})$, and so (iii) produces \begin{enumerate} \item[(iv)] $\mathbb{V}(\mathcal{K}_{\mathcal{L}})_{FSI}\subseteq (\mathbb{ HSP}_{u}(\mathcal{K}))_{\mathcal{L}}$. \end{enumerate} Using that $\mathbb{P}_{u}\mathbb{HSP}_{u}(\mathcal{K})\subseteq \mathbb{HSP} _{u}(\mathcal{K})$ and (i), it is easy to check that $f$ and $\mathbb{HSP} _{u}(\mathcal{K})$ satisfy the conditions stated in (3) of Theorem \ref{term valued with positive cases}. Thus we can conclude that \begin{enumerate} \item[(v)] $f=\left. t_{1}\right\vert _{\varphi _{1}}\cup \ldots \cup \left. t_{k}\right\vert _{\varphi _{k}}$ in $\mathbb{HSP}_{u}(\mathcal{K})$, with each $t_{i}$ an $\mathcal{L}$-term and each $\varphi _{i}$ in $\left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $. \end{enumerate} \noindent Hence, \begin{enumerate} \item[(vi)] the formula $\varphi (\vec{x},z)=(\varphi _{1}(\vec{x})\wedge z=t_{1}(\vec{x}))\vee \dots \vee (\varphi _{k}(\vec{x})\wedge z=t_{k}(\vec{x} ))$ defines $f$ in $\mathbb{HSP}_{u}(\mathcal{K})$. \end{enumerate} \noindent Observe that (v) implies $\mathbb{HSP}_{u}(\mathcal{K})\vDash \forall \vec{x}\exists !z\ \varphi (\vec{x},z)$, and by (iv) we obtain $ \mathbb{V}(\mathcal{K}_{\mathcal{L}})_{FSI}\vDash \forall \vec{x}\exists !z\ \varphi (\vec{x},z)$. 
Since $\varphi \in \left[ \bigvee \bigwedge \mathrm{At}(\mathcal{L})\right] $, Proposition \ref{traduccion} says that there is a formula $\psi \in \left[ \bigwedge \mathrm{At}(\mathcal{L})\right] $ such that $\mathbb{V}(\mathcal{K}_{\mathcal{L}})_{FSI}\vDash \varphi \leftrightarrow \psi $. Thus we have that $\mathbb{V}(\mathcal{K}_{\mathcal{L}})_{FSI}\vDash \forall \vec{x}\exists !z\ \psi (\vec{x},z)$ and by Lemma \ref{existencia de terminos en variedades aritmeticas} there is an $\mathcal{L}$-term $t(\vec{x})$ such that $\mathbb{V}(\mathcal{K}_{\mathcal{L}})\vDash \forall \vec{x}\ \psi (\vec{x},t(\vec{x}))$. In particular, \begin{enumerate} \item[(vii)] $\mathbb{V}(\mathcal{K}_{\mathcal{L}})_{FSI}\vDash \forall \vec{x}\ \varphi (\vec{x},t(\vec{x}))$. \end{enumerate} Now, if we take $\mathbf{A}\in \mathbb{V}(\mathcal{K})_{FSI}$, by J\'{o}nsson's lemma $\mathbf{A}\in \mathbb{HSP}_{u}(\mathcal{K})$, and so by (ii) $\mathbf{A}$ and $\mathbf{A}_{\mathcal{L}}$ have the same congruences. Hence $\mathbf{A}_{\mathcal{L}}\in \mathbb{V}(\mathcal{K}_{\mathcal{L}})_{FSI}$, and by (vii) $\mathbf{A}\vDash \forall \vec{x}\ \varphi (\vec{x},t(\vec{x}))$. Thus, \begin{enumerate} \item[(viii)] $\mathbb{V}(\mathcal{K})_{FSI}\vDash \forall \vec{x}\ \varphi (\vec{x},t(\vec{x}))$. \end{enumerate} Finally, as (vi) says that $\varphi (\vec{x},z)$ defines $f$ in $\mathbb{HSP}_{u}(\mathcal{K})$, it follows from (viii) that $\mathbb{V}(\mathcal{K})_{FSI}\vDash \forall \vec{x}\ t(\vec{x})=f(\vec{x})$. So this identity holds in $\mathbb{V}(\mathcal{K})$ and the theorem is proved. \end{proof} \begin{corollary} Let $\mathcal{V}$ be an arithmetical variety such that $\mathcal{V}_{FSI}$ is universal. Let $\mathcal{K}\subseteq \mathcal{V}$ be a first order class and suppose that $\psi (\vec{x},z)$ is a first order formula (in the language of $\mathcal{V}$) which defines on each algebra $\mathbf{A}$ of $\mathcal{K}$ a function $f^{\mathbf{A}}:A^{n}\rightarrow A$. 
Assume that for all $\mathbf{A},\mathbf{B}\in \mathcal{K}$ every subalgebra $\mathbf{S}\leq \mathbf{A}\times \mathbf{B}$ is closed under $f^{\mathbf{A}}\times f^{ \mathbf{B}}$. Then $f$ is representable by a term in $\mathcal{K}$. \end{corollary} We believe it likely that the Baker-Pixley Theorem holds in scenarios other than the two considered here (locally finite and arithmetical). A question we were unable to answer is the following. Let $\mathcal{K}$ be a first order axiomatizable class of $(\mathcal{L}\cup \{f\})$-algebras with a majority $\mathcal{L}$-term and suppose that for any $\mathbf{A},\mathbf{B}\in \mathcal{K}$ every subalgebra of $\mathbf{A}_{ \mathcal{L}}\times \mathbf{B}_{\mathcal{L}}$ is closed under $f^{\mathbf{A} \times \mathbf{B}}$. Is $f$ representable by an $\mathcal{L}$-term in $ \mathcal{K}$? \end{document}
\begin{document} \title{On a Cahn-Hilliard-Brinkman model for tumour growth and its singular limits} \begin{abstract} In this work, we study a model consisting of a Cahn-Hilliard-type equation for the concentration of tumour cells coupled to a reaction-diffusion-type equation for the nutrient density and a Brinkman-type equation for the velocity. We equip the system with Neumann boundary conditions for the tumour cell variable and the chemical potential, Robin-type boundary conditions for the nutrient and a ``no-friction'' boundary condition for the velocity, which allows us to consider solution-dependent source terms. Well-posedness of the model as well as existence of strong solutions will be established for a broad class of potentials. We will show that in the singular limit of vanishing viscosities we recover a Darcy-type system related to Cahn-Hilliard-Darcy-type models for tumour growth which have been studied earlier. An asymptotic limit will show that the results are also valid in the case of Dirichlet boundary conditions for the nutrient. \end{abstract} \noindent{\bf Key words:} tumour growth, Cahn-Hilliard equation, Brinkman's law, chemotaxis, Stokes flow, Darcy flow, outflow conditions \noindent{\bf AMS-Classification:} 35K35, 35Q92, 92C50, 35D30, 76D07 \section{Introduction} \numberwithin{equation}{section} The growth of living cancer cells is affected by many biological and chemical mechanisms. Although the amount of experimental data coming from clinical experiments is quite extensive, the understanding of the involved mechanisms and biological effects is still at an unsatisfying level. In the recent past, several mathematical models for tumour growth have been developed and simulated, and some of them seem to compare well with tumour data coming from clinical experiments, see \cite{AgostiEtAl,BearerEtAl,FrieboesEtAl}. These models may provide further insights into tumour growth dynamics to understand key mechanisms and to design new treatment strategies. 
\newline Many models are based on the hypothesis that different tissue components of the tumour (viable, quiescent, necrotic) are separated by a sharp interfacial layer and can therefore be described by free boundary problems, see \cite{ByrneChaplain2,FranksKing,Friedman,Friedman3}. As a young tumour does not have its own vascular system and must therefore consume growth factors like nutrients or oxygen from the surrounding host tissue, in the early stage of growth the tumour may undergo morphological instabilities like fingering or folding (see \cite{CristiniLowengrubNie,CristiniEtAl}) to grow without angiogenesis and to overcome diffusional limitations. This leads to highly challenging mathematical problems when modelling the tumour in the context of free boundary problems, since changes in topology have to be tracked.\newline To overcome these difficulties, it has turned out that diffuse interface models, treating the tumour as a collection of cells, are a good strategy to describe the evolution and interactions of different species. These models are typically based on a multiphase approach, on balance laws for the single constituents, like mass and momentum balance, on constitutive laws and on thermodynamic principles. Several additional variables describing the extracellular matrix (ECM), growth factors or inhibitors can be incorporated into these models; biological mechanisms like chemotaxis, apoptosis or necrosis and effects of stress, plasticity or viscoelasticity can be included, see \cite{AstaninPreziosi,GarckeLamNuernbergSitka,OdenTinsleyHawkins,PreziosiTosin}. 
Many contributions in the literature consider a mixture of two components (healthy and surrounding tissue), modelled as a two-phase flow and coupled to a reaction-diffusion type equation for an unknown species acting as a nutrient for the tumour, like oxygen or glucose, leading to coupled systems of partial differential equations of Cahn-Hilliard-type, see \cite{ColliGilardiHilhorst,FrigeriGrasselliRocca,GarckeLam3,GarckeLam2,HawkinsZeeKristofferOdenTinsley,HilhorstKampmannNguyenZee}. Some of them incorporate effects based on the momentum balance equation which leads to models involving e.g. Darcy's law for the velocity or a Stokes like equation, see \cite{EbenbeckGarcke,GarckeLam1,GarckeLam4,JiangWuZheng}. \newline\newline In the following, we will consider a Cahn-Hilliard-Brinkman system for tumour growth. For a bounded domain $\Omega\subset \mathbb R^d,\,d=2,3,$ and a fixed time $T>0$, we consider for $Q\coloneqq \Omega\times (0,T)$ the following system of equations \begin{subequations} \label{MEQ} \begin{alignat}{3} \label{MEQ_1}\textnormal{div}(\mathbf{v})&=\Gamma_{\mathbf{v}}(\varphi,\sigma)&& \quad \text{in } Q,\\ \label{MEQ_2}-\textnormal{div}(\mathbf{T}(\mathbf{v},p)) +\nu\mathbf{v}&= (\mu + \chi\sigma)\nabla\varphi && \quad \text{in } Q,\\ \label{MEQ_3}\partial_{t}\varphi + \textnormal{div}(\varphi\mathbf{v}) &= \textnormal{div}(m(\varphi)\nabla\mu)+\Gamma_{\varphi}(\varphi,\sigma)&& \quad \text{in } Q,\\ \label{MEQ_4}\mu&= \epsilon^{-1}\psi'(\varphi)-\epsilon\Delta\varphi -\chi\sigma && \quad \text{in } Q,\\ \label{MEQ_5}0 &= \Delta\sigma - h(\varphi)\sigma&& \quad \text{in } Q, \end{alignat} \end{subequations} where the viscous stress tensor is defined by \begin{equation} \label{DEF_STRTENS}\mathbf{T}(\mathbf{v},p) \coloneqq 2\eta(\varphi) {\mathbf{D}}\mathbf{v}+\lambda(\varphi)\textnormal{div}(\mathbf{v})\mathbf{I} - p\mathbf{I} , \end{equation} and the symmetric velocity gradient is given by \begin{equation*} {\mathbf{D}}\mathbf{v}\coloneqq 
\frac{1}{2}(\nabla\mathbf{v}+\nabla\mathbf{v}^T). \end{equation*} In \eqref{MEQ}-\eqref{DEF_STRTENS}, $\mathbf{v}$ denotes the volume-averaged velocity of the mixture, $p$ denotes the pressure, $\sigma$ denotes the concentration of an unknown species acting as a nutrient, $\varphi\in[-1,1]$ denotes the difference in volume fractions, with $\{\varphi = 1\}$ representing the unmixed tumour tissue and $\{\varphi = -1\}$ representing the surrounding healthy tissue, and $\mu$ denotes the chemical potential for $\varphi$. The function $m(\cdot)$ represents the mobility for the phase variable $\varphi$. The constant $\epsilon >0$ is related to the thickness of the diffuse interface, whereas $\nu$ is a positive constant representing the fluid permeability. Moreover, the functions $\eta(\cdot)$ and $\lambda(\cdot)$ are non-negative and represent the shear and the bulk viscosity, respectively. The chemotaxis parameter $\chi$ is a non-negative constant.\newline By $\mathbf n $ we will denote the outer unit normal on $\partial\Omega$, and $\partial_\n g \coloneqq \nabla g\cdot \mathbf n $ denotes the directional derivative in the direction of $\mathbf n $. We equip the system with the following initial and boundary conditions \begin{subequations} \label{BIC} \begin{alignat}{3} \label{BC_1}\partial_\n\mu=\partial_\n\varphi &= 0 &&\quad \text{on }\partial\Omega\times (0,T)\eqqcolon\Sigma,\\ \label{BC_2}\partial_\n\sigma &= K(\sigma_{\infty}-\sigma)&&\quad\text{on }\Sigma,\\ \label{BC_3}\mathbf{T}(\mathbf{v},p)\mathbf n &= \mathbf{0}&&\quad\text{on }\Sigma,\\ \label{IC}\varphi(0) &= \varphi_0 &&\quad\text{in }\Omega, \end{alignat} \end{subequations} where $\varphi_0,\,\sigma_{\infty}$ are given functions and $K$ is a positive permeability constant. We now motivate and compare our model with other models in the literature. 
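Before turning to this comparison, we record a purely formal observation (not needed for the analysis below): equation \eqref{MEQ_4} identifies the chemical potential $\mu$ as the first variation with respect to $\varphi$ of a Ginzburg-Landau type energy with chemotaxis coupling, namely \begin{equation*} E(\varphi,\sigma)\coloneqq \int_\Omega \epsilon^{-1}\psi(\varphi) + \frac{\epsilon}{2}|\nabla\varphi|^2 - \chi\sigma\varphi\;\mathrm dx, \qquad \frac{\delta E}{\delta\varphi} = \epsilon^{-1}\psi'(\varphi) - \epsilon\Delta\varphi - \chi\sigma = \mu, \end{equation*} where the boundary condition $\partial_\n\varphi = 0$ is used when integrating by parts. The coupling term $-\chi\sigma\varphi$ energetically favours regions with $\varphi = 1$ (tumour tissue) where the nutrient concentration is high, which models chemotaxis.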
\begin{enumerate}[$\bullet$] \item In \cite{EbenbeckGarcke}, the authors of this paper considered a similar model with \eqref{MEQ_2}, \eqref{MEQ_5} and \eqref{BC_2} replaced by \begin{subequations} \begin{alignat}{3} \label{MEQ_NONSTAT_1}-\textnormal{div}(\mathbf{T}(\mathbf{v},p))+\nu\mathbf{v} &= \mu\nabla\varphi + (\chi_{\sigma}\sigma + \chi(1-\varphi))\nabla\sigma&&\quad\text{in }Q,\\ \label{MEQ_NONSTAT_2}\partial_{t}\sigma + \textnormal{div}(\sigma\mathbf{v}) &= \textnormal{div}(n(\varphi)(\chi_{\sigma}\nabla\sigma-\chi\nabla\varphi))-\Gamma_{\sigma}(\varphi,\sigma,\mu)&&\quad \text{in }Q,\\ \label{MEQ_NONSTAT_3}n(\varphi)\chi_{\sigma}\partial_\n\sigma &= K(\sigma_{\infty}-\sigma)&&\quad\text{on }\Sigma, \end{alignat} \end{subequations} where $\chi_{\sigma}$ is a given positive constant denoting the nutrient diffusion coefficient, $n(\varphi)$ is a mobility function and $\Gamma_{\sigma}$ is a source or sink term similar to ours. Reformulating the pressure as \begin{equation*} \tilde{p}\coloneqq p - \frac{\chi_{\sigma}}{2}|\sigma|^2 - \chi\sigma(1-\varphi), \end{equation*} we can indeed recover \eqref{MEQ_2} from \eqref{MEQ_NONSTAT_1}. For a discussion regarding pressure reformulations, we refer to \cite{GarckeLam1}. Following the arguments in \cite[Sec. 3.3.3]{GarckeLamSitkaStyles}, we introduce an active transport parameter $\kappa$, a new mobility function $\mathcal{D}(\varphi)$ and set \begin{equation*} n(\varphi) = \kappa \mathcal{D}(\varphi)\chi^{-1},\quad \chi_{\sigma}= \kappa^{-1}\chi, \end{equation*} which allows us to ``decouple'' active transport and chemotaxis mechanisms. 
Neglecting mechanisms due to active transport, we can rewrite \eqref{MEQ_NONSTAT_2}-\eqref{MEQ_NONSTAT_3} as \begin{subequations} \begin{alignat}{3} \label{MEQ_NONSTAT_2a}\partial_{t}\sigma + \textnormal{div}(\sigma\mathbf{v}) &= \textnormal{div}(\mathcal{D}(\varphi)\nabla\sigma)-\Gamma_{\sigma}(\varphi,\sigma,\mu)&&\quad\text{in }Q,\\ \label{MEQ_NONSTAT_3a}\mathcal{D}(\varphi)\partial_\n\sigma &= K(\sigma_{\infty}-\sigma)&&\quad\text{on }\Sigma. \end{alignat} \end{subequations} Performing a non-dimensionalisation and using that the ratio between the nutrient diffusion time-scale (minutes or hours) and the cell-proliferation time-scale (days or weeks) is quite small, we can drop the time derivative and the convection term in \eqref{MEQ_NONSTAT_2a} to obtain \begin{subequations} \begin{alignat}{3} \label{MEQ_NONSTAT_2b}0 &= \Delta\sigma-\Gamma_{\sigma}(\varphi,\sigma,\mu)&&\quad\text{in }Q,\\ \label{MEQ_NONSTAT_3b}\partial_\n\sigma &= K(\sigma_{\infty}-\sigma)&&\quad\text{on }\Sigma. \end{alignat} \end{subequations} We omit the details and refer to \cite{GarckeLam3,Preziosi,WiseLowengrubFrieboesChristini}. \item Brinkman's law was first proposed in \cite{Brinkman} and has been derived rigorously by several authors using a homogenization argument for the Stokes equation, see \cite{Allaire1,SanchezPalencia}. It can be interpreted as an interpolation between Stokes flow and Darcy's law, since the former is approximated on small length scales and the latter on large length scales, see \cite{DurlofskyBrady}. Setting $\sigma = 0$, neglecting source terms, i.\,e. $\Gamma_{\varphi} = \Gamma_{\mathbf{v}}=0$, and imposing Dirichlet boundary conditions for the velocity, equations \eqref{MEQ}, \eqref{BIC} have been studied in \cite{BosciaContiGrasselli} for smooth potentials and in \cite{ContiGiorgini} for logarithmic potentials. 
However, if source terms are present, the Dirichlet boundary condition leads to a compatibility condition since \begin{equation*} \int_\Omega \Gamma_{\mathbf{v}}\;\mathrm dx = \int_\Omega \textnormal{div}(\mathbf{v})\;\mathrm dx = \int_{\partial\Omega}\mathbf{v}\cdot\mathbf n {\mathrm{d}}\mathcal{H}^{d-1} = 0, \end{equation*} which in general cannot be fulfilled if $\Gamma_{\mathbf{v}}$ depends on additional variables like $\varphi$ and $\sigma$. The ``no-friction'' boundary condition \eqref{BC_3} does not lead to such a condition and therefore enables us to consider solution-dependent source terms. \item In the case $\mathbf{v} = 0$ and with \eqref{MEQ_5} replaced by \eqref{MEQ_NONSTAT_2}, the model \eqref{MEQ}, \eqref{BIC} was analysed in \cite{GarckeLam3}, and well-posedness has been shown in the case of Dirichlet boundary conditions for $\varphi,\mu,\sigma$ in \cite{GarckeLam2}, where the quasi-static limit could also be proven rigorously. In \cite{FrigeriGrasselliRocca}, well-posedness has been established for $\partial_\n\varphi = \partial_\n\mu = \partial_\n\sigma = 0$ on $\Sigma$, for a large class of potentials and with source terms of the form \begin{equation*} \Gamma_{\varphi}(\varphi,\mu,\sigma) = P(\varphi)(\sigma- \mu),\quad \Gamma_{\sigma}(\varphi,\mu,\sigma) = P(\varphi)(\sigma- \mu), \end{equation*} where $P(\cdot)$ is a nonlinear proliferation function. \item When setting $\eta(\cdot)=\lambda(\cdot)\equiv 0$, the model reduces to the so-called Cahn-Hilliard-Darcy model. In the absence of nutrients, this model has been studied in \cite{JiangWuZheng} for $\partial_\n\varphi = \partial_\n\mu = \mathbf{v}\cdot\mathbf n = 0$ on $\Sigma$ and with prescribed source terms $\Gamma_{\varphi} = \Gamma_{\mathbf{v}} = \mathcal{S}$. 
Setting $\sigma = 0$, prescribing $\partial_\n\varphi = \partial_\n\mu = \mathbf{v} = 0$ on $\Sigma$ and setting $\Gamma_{\mathbf{v}}=\Gamma_{\varphi}=0$, solutions of the Cahn-Hilliard-Darcy system have been obtained by passing to the zero viscosity limit in the Brinkman equation, see \cite{BosciaContiGrasselli}. Furthermore, we refer to \cite{DaiEtAl}, where a multispecies Cahn-Hilliard-Darcy model for tumour growth with a quasi-static nutrient equation was considered. \end{enumerate} In the following, we want to outline some challenges arising in the analysis. When testing the Brinkman equation with $\mathbf{v}$, we have to estimate the term $\int_\Omega p\textnormal{div}(\mathbf{v})\;\mathrm dx$. Hence, we need an estimate on $\norml{2}{p}$ in the absence of any a priori estimates. To overcome this difficulty, we will use the so-called method of ``subtracting the divergence''. More precisely, we choose $\mathbf{v}-\mathbf{u} $ as a test function in the momentum equation, where $\mathbf{u} $ satisfies \begin{equation*} \textnormal{div}(\mathbf{u}) = \Gamma_{\mathbf{v}}\quad\text{in }\Omega,\quad \mathbf{u} = \frac{1}{|\partial\Omega|}\left(\int_\Omega \Gamma_{\mathbf{v}}\right)\mathbf n \quad\text{on }\partial\Omega. \end{equation*} Although this approach prevents us from estimating the pressure directly, we now have to handle the term \begin{align*} \int_\Omega\mu\nabla\varphi\cdot\mathbf{u} \;\mathrm dx &= \int_\Omega (\mu-\mu_{\Omega})\nabla\varphi\cdot\mathbf{u} \;\mathrm dx + \mu_{\Omega}\int_\Omega \nabla\varphi\cdot\mathbf{u}\;\mathrm dx \\ &= \int_\Omega (\mu-\mu_{\Omega})\nabla\varphi\cdot\mathbf{u} \;\mathrm dx + \mu_{\Omega}\left(\frac{1}{|\partial\Omega|}\int_\Omega \Gamma_{\mathbf{v}}\;\mathrm dx\int_{\partial\Omega}\varphi{\mathrm{d}} \mathcal{H}^{d-1} - \int_\Omega \varphi\Gamma_{\mathbf{v}}\;\mathrm dx\right), \end{align*} where $\mu_{\Omega} = \frac{1}{|\Omega|}\int_\Omega \mu\;\mathrm dx$. 
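Let us briefly verify that the boundary datum of $\mathbf{u} $ is chosen precisely so that the compatibility condition for this divergence problem is satisfied: since $\mathbf n \cdot\mathbf n = 1$ on $\partial\Omega$, \begin{equation*} \int_{\partial\Omega}\mathbf{u}\cdot\mathbf n {\mathrm{d}}\mathcal{H}^{d-1} = \frac{1}{|\partial\Omega|}\left(\int_\Omega \Gamma_{\mathbf{v}}\;\mathrm dx\right)\int_{\partial\Omega}1\,{\mathrm{d}}\mathcal{H}^{d-1} = \int_\Omega \Gamma_{\mathbf{v}}\;\mathrm dx = \int_\Omega \textnormal{div}(\mathbf{u})\;\mathrm dx, \end{equation*} so that the existence of such a $\mathbf{u} $ follows from the solvability result for the divergence equation stated in Lemma \ref{LEM_DIVEQU} below.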
To handle the boundary integral, we will derive an estimate for the $L^{\rho}(\partial\Omega)$-norm of $\varphi$, where $\rho\in [2,6]$ is an exponent connected to the growth rate of the potential $\psi(\cdot)$.\newline Furthermore, we comment on the assumption $\sigma_{\infty}\in L^4(L^2(\partial\Omega))$, which is not needed to prove existence of weak solutions, but is crucial to establish well-posedness of the system. Indeed, this enables us to estimate the velocity in $L^{\frac{8}{3}}(0,T;\mathbf H^1)$ (see Section 3.3). As a consequence, we can handle the term \begin{equation*} \int_\Omega 2(\eta(\varphi_1)-\eta(\varphi_2)){\mathbf{D}}\mathbf{v}_2\colon\nabla\mathbf{v}\;\mathrm dx \end{equation*} in the proof of continuous dependence (see Section 5). We remark that this term does not arise in the case of constant viscosity.\newline Finally, in the proof of existence of strong solutions we will derive an estimate for the time derivative of the nutrient concentration by using a difference quotient method. This is needed due to the fact that the $L^2$-orthogonal projection $\mathbb{P}_n$ onto the $n$-dimensional Galerkin solution spaces is not continuous on the whole space $H^2$. Indeed, when testing \eqref{MEQ_4} with $\Delta\partial_{t}\varphi$ in the Galerkin scheme and integrating by parts twice, we encounter the term \begin{equation*} \int_{0}^{T}\int_\Omega \chi\Delta(\mathbb{P}_n\sigma)\partial_{t}\varphi{\mathrm{d}} x{\mathrm{d}} t. \end{equation*} Although we can control $\sigma\in L^2(H^2)$, an estimate of $\Delta\mathbb{P}_n\sigma\in L^2(L^2)$ cannot be deduced due to \eqref{BC_2}. If the time derivative of $\sigma$ fulfils $\partial_{t}\sigma\in L^2(H^1)$, a control of $\Delta\mathbb{P}_n\sigma\in L^2(L^2)$ is not needed, see Section 6. 
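As a concrete example of a potential with this growth behaviour (a standard choice; the explicit decomposition below is our illustration of the structural assumptions on $\psi$ stated in Section 2), the smooth double-well potential \begin{equation*} \psi(s) = \frac{1}{4}(1-s^2)^2 = \underbrace{\frac{1}{4}s^4 + \frac{1}{2}s^2}_{=:\psi_1(s)} + \underbrace{\frac{1}{4} - s^2}_{=:\psi_2(s)} \end{equation*} fits into this framework with $\rho = 4$: indeed, $\psi_1''(s) = 3s^2+1$ satisfies $1+|s|^2 \leq \psi_1''(s) \leq 3(1+|s|^2)$, while $\psi_2''(s) = -2$ is bounded.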
\subsection{Notation and preliminaries} We first want to fix some notation: For a (real) Banach space $X$ we denote by $\norm{\cdot}_X$ its norm, by $X^*$ the dual space and by $\langle \cdot{,}\cdot \rangle_X$ the duality pairing between $X^*$ and $X$. For an inner product space $X$, the inner product is denoted by $(\cdot{,}\cdot)_X$. We define the scalar product of two matrices by \begin{equation*} \mathbf{A}\colon \mathbf{B}\coloneqq \sum_{j,k=1}^{d}a_{jk}b_{jk}\quad\text{for } \mathbf{A},\mathbf{B}\in\mathbb R^{d\times d}. \end{equation*} For the standard Lebesgue and Sobolev spaces with $1\leq p\leq \infty$, $k>0$, we use the notation $L^p\coloneqq L^p(\Omega)$ and $W^{k,p}\coloneqq W^{k,p}(\Omega)$ with norms $\norml{p}{\cdot}$ and $\normw{k}{p}{\cdot}$, respectively. In the case $p=2$ we use $H^k\coloneqq W^{k,2}$ and the norm $\normh{k}{\cdot}$. For $\beta\in (0,1)$ and $r\in (1,\infty)$, we will denote the Lebesgue and Sobolev spaces on the boundary by $L^p(\partial\Omega)$ and $W^{\beta,r}(\partial\Omega)$ with corresponding norms $\norm{\cdot}_{L^p(\partial\Omega)}$ and $\norm{\cdot}_{W^{\beta,r}(\partial\Omega)}$ (see \cite[Chap. I.3]{Sohr} for more details). By $\mathbf L^p$, ${\mathbf W}^{k,p}$, $\mathbf H^k$, $\mathbf L^p(\partial\Omega)$ and ${\mathbf W}^{\beta,r}(\partial\Omega)$, we will denote the corresponding spaces of vector-valued and matrix-valued functions. We define $H_0^1$ as the completion of $C_0^{\infty}(\Omega)$ with respect to the $H^1$-norm. For the Bochner spaces, we use the notation $L^p(X)\coloneqq L^p(0,T;X)$ for a Banach space $X$ with $p\in [1,\infty]$. For the dual space $X^*$ of a Banach space $X$, we introduce the (generalised) mean value by \begin{equation*} v_{\Omega}\coloneqq \frac{1}{|\Omega|}\int_\Omega v\;\mathrm dx \quad\text{for } v\in L^1, \quad v_{\Omega}^*\coloneqq \frac{1}{|\Omega|}\langle v{,}1\rangle_X\quad\text{for } v\in X^*. 
\end{equation*} Moreover, we introduce the function spaces \begin{align*} &L_0^2\coloneqq \{w\in L^2\colon w_{\Omega}=0\},\quad H_N^2\coloneqq \{w\in H^2\colon \partial_\n w = 0 \text{ on } \partial\Omega\},\\ &(H^1)_0^*\coloneqq \{f\in (H^1)^*\colon f_{\Omega}^* =0\}. \end{align*} Then, the Neumann-Laplace operator $-\Delta_N\colon H^1\cap L_0^2\to (H^1)_0^*$ is positive definite and self-adjoint. In particular, by the Lax-Milgram theorem and Poincaré's inequality, the inverse operator $(-\Delta_N)^{-1}\colon (H^1)_0^*\to H^1\cap L_0^2$ is well-defined, and we set $u\coloneqq (-\Delta_N)^{-1}f$ for $f\in (H^1)_0^*$ if $u_{\Omega}=0$ and \begin{equation*} -\Delta u = f\text{ in }\Omega,\quad\partial_\n u =0\text{ on }\partial\Omega. \end{equation*} We have dense and continuous embeddings $H_N^2\subset H^1\subset L^2\simeq (L^2)^*\subset (H^1)^*\subset (H_N^2)^*$ and the identifications $\langle u{,}v\rangle_{H^1}=(u{,}v)_{L^2}$, $\langle u{,}w\rangle _{H_N^2} = (u{,}w)_{L^2}$ for all $u\in L^2,\, v\in H^1$ and $w\in H_N^2$. We also want to state the following inequalities resulting from elliptic regularity theory and integration by parts: \begin{subequations}\label{ELLIPTIC_EST} \begin{alignat}{3} \label{ELLIPTIC_EST_1}\norml{2}{\Delta f}&\leq \normL{2}{\nabla f}^{\frac{1}{2}}\normL{2}{\nabla\Delta f}^{\frac{1}{2}}\leq C\normh{1}{f}^{\frac{1}{2}}\normh{3}{f}^{\frac{1}{2}}&&\qquad\forall f\in H_N^2\cap H^3,\\ \label{ELLIPTIC_EST_2}\normh{2}{f}&\leq C\left(\norml{2}{f}+\norml{2}{\Delta f}\right)&&\qquad\forall f\in H_N^2, \end{alignat} \end{subequations} with a constant $C$ depending only on $\Omega$. 
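For the reader's convenience we sketch the first inequality in \eqref{ELLIPTIC_EST_1}: integrating by parts once, the boundary term vanishes since $\partial_\n f = 0$, and Hölder's inequality yields \begin{equation*} \norml{2}{\Delta f}^2 = \int_\Omega \Delta f\,\Delta f\;\mathrm dx = -\int_\Omega \nabla f\cdot\nabla\Delta f\;\mathrm dx \leq \normL{2}{\nabla f}\,\normL{2}{\nabla\Delta f}, \end{equation*} and taking square roots gives the claim; the second inequality in \eqref{ELLIPTIC_EST_1} then follows directly from the definitions of the $H^1$- and $H^3$-norms.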
Furthermore, we define \begin{equation*} L_{\textnormal{div}}^2(\Omega)\coloneqq \left\{ \mathbf{f}\in \mathbf L^2\colon \textnormal{div}(\mathbf{f})\in L^2 \right\} \end{equation*} equipped with the norm \begin{equation*} \norm{\mathbf{f}}_{L_{\textnormal{div}}^2(\Omega)}\coloneqq \left(\normL{2}{\mathbf{f}}^2 + \norml{2}{\textnormal{div}(\mathbf{f})}^2\right)^{\frac{1}{2}}, \end{equation*} which is a reflexive Banach space (see \cite{SimaderSohr}). In particular, for $\mathbf{u}\in L_{\textnormal{div}}^2(\Omega)$ we have \begin{equation} \label{TRACE_GENERAL}\langle \mathbf{u}\cdot\mathbf n{,}\Phi\rangle_{H^{\frac{1}{2}}(\partial\Omega)} = \int_\Omega\mathbf{u}\cdot\nabla\Phi{\mathrm{d}} x + \int_\Omega \Phi\textnormal{div}(\mathbf{u}){\mathrm{d}} x\quad \forall \Phi\in H^1 \end{equation} and \begin{equation} \label{TRACE_ESTIM}\norm{\mathbf{u}\cdot\mathbf n}_{\left(H^{\frac{1}{2}}(\partial\Omega)\right)^*}\leq C_{\textnormal{div}}\norm{\mathbf{u}}_{L_{\textnormal{div}}^2(\Omega)}, \end{equation} with a constant $C_{\textnormal{div}}$ depending only on $\Omega$ (see e.g. \cite[Chap. III.2]{Galdi}). Furthermore, we will use the following generalised Gagliardo-Nirenberg inequality: \begin{lemma}\label{LEM_GAGNIR} Let $\Omega\subset \mathbb R^d,~d\in\mathbb N,$ be a bounded domain with Lipschitz-boundary and $f\in W^{m,r}\cap L^q,~1\leq q,r\leq \infty$. For any integer $j,~0\leq j<m$, and any $p\in [1,\infty]$, suppose there is $\alpha\in\mathbb R$ such that \begin{equation*} j - \frac{d}{p} = \left(m-\frac{d}{r}\right)\alpha + (1-\alpha)\left(-\frac{d}{q}\right),\quad \frac{j}{m}\leq \alpha\leq 1. \end{equation*} Then, there exists a positive constant $C$ depending only on $\Omega,d,m,j,q,r,$ and $\alpha$ such that \begin{equation} \label{LEM_GAGNIR_EST}\norml{p}{D^jf}\leq C\normw{m}{r}{f}^{\alpha}\norml{q}{f}^{1-\alpha}. \end{equation} \end{lemma} We will also need the following lemma concerning solvability of the divergence equation: \begin{lemma}(\cite[Sec. 
III.3]{Galdi})\label{LEM_DIVEQU} Let $\Omega\subset\mathbb R^d,\,d\geq 2,$ be a bounded domain with Lipschitz-boundary and let $1<q<\infty$. Then, for every $f\in L^q$ and $\mathbf a\in {\mathbf W}^{1-1/q,q}(\partial\Omega)$ satisfying \begin{equation} \label{DIV_COMP_COND}\int_\Omega f\;\mathrm dx = \int_{\partial\Omega}\mathbf a\cdot\mathbf n {\mathrm{d}} \mathcal{H}^{d-1}, \end{equation} there exists at least one solution $\mathbf{u} \in {\mathbf W}^{1,q}$ of the problem \begin{alignat*}{3} \textnormal{div}(\mathbf{u}) &= f&&\quad\text{in }\Omega,\\ \mathbf{u} &= \mathbf a &&\quad \text{on }\partial\Omega. \end{alignat*} In addition, the following estimate holds \begin{equation} \label{DIV_EST}\normW{1}{q}{\mathbf{u}}\leq C\left(\normL{q}{f}+\norm{\mathbf a}_{{\mathbf W}^{1-1/q,q}(\partial\Omega)}\right), \end{equation} with $C$ depending only on $\Omega$ and $q$. \end{lemma} Finally, in the Galerkin ansatz (see Sec. 3) we will make use of the following lemma (see \cite{AbelsTerasawa} for a proof): \begin{lemma}\label{LEM_STOKES} Let $\Omega\subset\mathbb R^d, d=2,3,$ be a bounded domain with $C^2$-boundary and outer unit normal $\mathbf n $ and $1<q<\infty$. Furthermore, assume that $g\in W^{1,q}$, $\mathbf{f}\in \mathbf L^q$, $c\in W^{1,r}$ with $r>d$, and the functions $\eta(\cdot),\,\lambda(\cdot)$ fulfil (A3) (see Assumptions \ref{ASS} below). Then, there exists a unique solution $(\mathbf{v},p)\in {\mathbf W}^{2,q}\times W^{1,q}$ of the system \begin{subequations} \label{STOKES_EQU} \begin{alignat}{3}\ \label{STOKES_EQU_1}-\textnormal{div}(2\eta(c) {\mathbf{D}}\mathbf{v} +\lambda(c)\textnormal{div}(\mathbf{v})\mathbf{I} )+\nu\mathbf{v} + \nabla p &= \mathbf{f}&&\quad \text{a.\,e. in }\Omega,\\ \label{STOKES_EQU_2}\textnormal{div}(\mathbf{v}) &= g&&\quad\text{a.\,e. in }\Omega,\\ \label{STOKES_EQU_3}(2\eta(c) {\mathbf{D}}\mathbf{v} +\lambda(c)\textnormal{div}(\mathbf{v})\mathbf{I} -p\mathbf{I} )\mathbf n &= \mathbf{0}&&\quad \text{a.\,e. 
on }\partial \Omega, \end{alignat} \end{subequations} satisfying the following estimate \begin{equation} \label{STOKES_EST}\normW{2}{q}{\mathbf{v}} + \normw{1}{q}{p}\leq C\left(\normL{q}{\mathbf{f}}+\normw{1}{q}{g}\right), \end{equation} with a constant $C$ depending only on $\eta_0,\eta_1,\lambda_0,q, \normw{1}{r}{c}$ and $\Omega$. \end{lemma} \section{Main results} We make the following assumptions: \begin{annahme} \label{ASS}\, \begin{enumerate} \item[(A1)] The positive constants $\epsilon,\,\nu,\,K,\,T$ are fixed, and $\chi$ is a fixed non-negative constant. Furthermore, the function $\sigma_{\infty}\in L^{2}(L^2(\partial\Omega))$ and the initial datum $\varphi_0\in H^1$ are prescribed. \item[(A2)] The mobility $m(\cdot)$ is continuous on $\mathbb R$ and satisfies \begin{equation*} m_0\leq m(t)\leq m_1\quad\forall t\in\mathbb R, \end{equation*} for positive constants $m_0,m_1$. \item[(A3)] The viscosities fulfil $\eta,\lambda\in C^2(\mathbb R)$ with bounded first derivatives and \begin{equation} \label{ASS_VISC}\eta_0\leq \eta(t)\leq \eta_1,\quad 0\leq \lambda(t)\leq \lambda_0\quad\forall t\in\mathbb R, \end{equation} for positive constants $\eta_0,\eta_1$ and a non-negative constant $\lambda_0$. \item[(A4)] The source terms are of the form \begin{equation*} \Gamma_{\mathbf{v}}(\varphi,\sigma) = b_{\mathbf{v}}(\varphi)\sigma + f_{\mathbf{v}}(\varphi),\quad \Gamma_{\varphi}(\varphi,\sigma) = b_{\varphi}(\varphi)\sigma + f_{\varphi}(\varphi), \end{equation*} where $b_{\mathbf{v}},f_{\mathbf{v}}\in C^1(\mathbb R)$ are bounded with bounded first derivatives and $b_{\varphi},f_{\varphi}\in C^0(\mathbb R)$ are bounded functions. The function $h\in C^0(\mathbb R)$ is bounded and non-negative. 
\item[(A5)] The function $\psi\in C^2(\mathbb R)$ is non-negative and can be written as \begin{equation} \label{ASS_PSI_1}\psi(s) = \psi_1(s) + \psi_2(s)\quad\forall s\in \mathbb R, \end{equation} where $\psi_1, \psi_2\in C^2(\mathbb R)$ and \begin{alignat}{3} \label{ASS_PSI_2}R_1(1+|s|^{\rho-2})&\leq \psi_1''(s)\leq R_2(1+|s|^{\rho-2})&&\quad\forall s\in\mathbb R,\\ \label{ASS_PSI_3}|\psi_2''(s)|&\leq R_3&&\quad\forall s\in\mathbb R, \end{alignat} where $R_i,\,i=1,2,3,$ are positive constants with $R_1<R_2$ and $\rho\in [2,6]$. Furthermore, if $\rho=2$, we assume $2R_1>R_3$. \end{enumerate} \end{annahme} \begin{bemerkung} Using \textnormal{(A5)}, it is straightforward to check that there exist positive constants $R_i,\,i=4,5,6,$ such that \begin{equation} \label{ASS_PSI_4}\psi(s)\geq R_4|s|^{\rho} - R_5\quad \forall s\in \mathbb R \end{equation} and \begin{equation} \label{ASS_PSI_5}|\psi'(s)|\leq R_6(1+|s|^{\rho-1})\quad \forall s\in \mathbb R. \end{equation} \end{bemerkung} We now introduce the weak formulation of \eqref{MEQ}, \eqref{BIC}: \begin{definition}(Weak solution for \eqref{MEQ},\eqref{BIC}) \label{DEF_WSOL_1}We call a quintuple $(\varphi,\sigma,\mu,\mathbf{v},p)$ a weak solution of \eqref{MEQ} and \eqref{BIC} if \begin{align*} &\varphi \in H^1(0,T;(H^1)^*)\cap L^{\infty}(0,T;H^1)\cap L^2(0,T;H^3) ,\quad \mu\in L^2(0,T;H^1),\\ &\sigma\in L^2(0,T;H^1),\quad \mathbf{v}\in L^2(0,T;\mathbf H^1),\quad p\in L^{2}(0,T;L^2), \end{align*} such that \begin{equation*} \textnormal{div}(\mathbf{v}) = \Gamma_{\mathbf{v}}(\varphi,\sigma)\text{ a.\,e. in }Q,\quad \varphi(0)=\varphi_0\text{ a.\,e. 
in }\Omega, \end{equation*} and \begin{subequations} \label{WFORM_1} \begin{align} \label{WFORM_1a}0 &= \int_\Omega \mathbf{T}(\mathbf{v},p)\colon \nabla\boldsymbol{\Phi}+\nu\mathbf{v}\cdot\boldsymbol{\Phi} - (\mu+\chi\sigma)\nabla\varphi\cdot\boldsymbol{\Phi}\;\mathrm dx ,\\ \label{WFORM_1b} 0 &= \langle\partial_{t}\varphi{,}\Phi\rangle_{H^1} + \int_\Omega m(\varphi)\nabla\mu\cdot\nabla\Phi+ (\nabla\varphi\cdot\mathbf{v}+ \varphi\Gamma_{\mathbf{v}}(\varphi,\sigma) - \Gamma_{\varphi}(\varphi,\sigma))\Phi\;\mathrm dx, \\ \label{WFORM_1c} 0 &= \int_\Omega (\mu+ \chi\sigma)\Phi -\epsilon^{-1}\psi'(\varphi)\Phi - \epsilon\nabla\varphi\cdot\nabla\Phi \;\mathrm dx ,\\ \label{WFORM_1d} 0 &= \int_\Omega \nabla\sigma\cdot\nabla\Phi + h(\varphi)\sigma\Phi\;\mathrm dx +\int_{\partial\Omega}K(\sigma-\sigma_{\infty})\Phi{\mathrm{d}}\mathcal{H}^{d-1} , \end{align} \end{subequations} for a.\,e. $t\in(0,T)$ and for all $\boldsymbol{\Phi}\in \mathbf H^1,\,\Phi\in H^1$. \end{definition} \begin{theorem}\label{THM_WSOL_1} Let $\Omega\subset\mathbb R^d,\,d=2,3,$ be a bounded domain with $C^3$-boundary and assume that Assumptions \ref{ASS} are fulfilled. Then there exists a solution quintuple $(\varphi,\mu,\sigma,\mathbf{v},p)$ of \eqref{MEQ}, \eqref{BIC} in the sense of Definition \ref{DEF_WSOL_1}. 
Furthermore, we have \begin{align*} &\varphi \in H^1(0,T;(H^1)^*)\cap L^{\infty}(0,T;H^1)\cap L^4(0,T;H^2)\cap L^2(0,T;H^3) ,\quad \sigma\in L^2(0,T;H^1),\\ &\mu\in L^2(0,T;H^1),\quad\mathbf{v}\in L^2(0,T;\mathbf H^1),\quad p\in L^{2}(0,T;L^2), \end{align*} and the following estimate holds: \begin{align} \nonumber&\norm{\varphi}_{H^1((H^1)^*)\cap L^{\infty}(H^1)\cap L^4(H^2)\cap L^2(H^3) } + \norm{\sigma}_{L^2(H^1)} + \norm{\mu}_{L^2(H^1)}\\ \label{THM_WSOL_1_EST_1}&\quad + \norm{\textnormal{div}(\varphi\mathbf{v})}_{L^2(L^{\frac{3}{2}})} + \norm{\mathbf{v}}_{L^2(\mathbf H^1)}+ \norm{p}_{L^2(L^2)}\leq C, \end{align} with a constant $C$ depending only on the system parameters, $\Omega$ and $T$.\newline If in addition $\sigma_{\infty}\in L^4(L^2(\partial\Omega))$, we have \begin{equation*} \sigma\in L^4(0,T;H^1),\quad \mu\in L^4(0,T;L^2),\quad \mathbf{v}\in L^{\frac{8}{3}}(0,T;\mathbf H^1), \end{equation*} and \begin{equation} \label{THM_WSOL_1_EST_2} \norm{\sigma}_{L^4(H^1)} + \norm{\mu}_{L^4(L^2)}+ \norm{\mathbf{v}}_{L^{\frac{8}{3}}(\mathbf H^1)} + \norm{\textnormal{div}(\varphi\mathbf{v})}_{L^2(L^2)} \leq C. \end{equation} \end{theorem} \begin{theorem}(The limit $K\to\infty$)\label{THM_KLIM} Let the assumptions of Theorem \ref{THM_WSOL_1} be fulfilled and assume in addition that $\sigma_{\infty}\in L^2(H^{\frac{1}{2}}(\partial\Omega))$. Let $K>0$ and denote by $(\varphi_K,\mu_K,\sigma_K,\mathbf{v}_K,p_K)$ a weak solution of \eqref{MEQ}, \eqref{BIC} corresponding to $\varphi_0$ and $K$ in the sense of Definition \ref{DEF_WSOL_1}. 
Then, as $K\to\infty$, we have \begin{alignat*}{3} \varphi_K&\to\varphi&&\quad\text{weakly-}*&&\quad\text{ in }H^1((H^1)^*)\cap L^{\infty}(H^1)\cap L^4(H^2)\cap L^2(H^3) ,\\ \sigma_K&\to \sigma &&\quad\text{weakly-}*&&\quad\text{ in }L^2(H^1),\\ \mu_K&\to \mu&&\quad\text{weakly}&&\quad\text{ in } L^2(H^1),\\ p_K&\to p&&\quad\text{weakly}&&\quad\text{ in }L^2(L^2),\\ \mathbf{v}_K&\to\mathbf{v}&&\quad\text{weakly}&&\quad\text{ in }L^2(\mathbf H^1),\\ \textnormal{div}(\varphi_K\mathbf{v}_K)&\to \textnormal{div}(\varphi\mathbf{v})&&\quad\text{weakly}&&\quad \text{ in }L^2(L^{\frac{3}{2}}),\\ \sigma_K&\to \sigma_{\infty}&&\quad\text{strongly}&&\quad\text{ in }L^2(L^2(\partial\Omega)), \end{alignat*} where $(\varphi,\mu,\sigma,\mathbf{v},p)$ satisfies \begin{equation*} \textnormal{div}(\mathbf{v}) = \Gamma_{\mathbf{v}}(\varphi,\sigma)\text{ a.\,e. in }Q,\quad \varphi(0)=\varphi_0\text{ a.\,e. in }\Omega,\quad \sigma\in (\sigma_{\infty}+L^2(0,T;H_0^1)), \end{equation*} and \eqref{WFORM_1} with \eqref{WFORM_1d} replaced by \begin{equation} \label{WF_KLIM_4}0 = \int_\Omega \nabla\sigma\cdot\nabla\xi + h(\varphi)\sigma\xi\;\mathrm dx , \end{equation} for a.\,e. $t\in (0,T)$ and for all $\xi\in H_0^1$. \end{theorem} We now introduce the definition of weak solutions of the Cahn-Hilliard-Darcy system endowed with \eqref{BC_1}-\eqref{BC_2} and \begin{equation} \label{BC_3_DARCY}p = 0\quad\text{on } \Sigma. \end{equation} \begin{definition}\label{DEF_WS_DARCY} We call a quintuple $(\varphi,\mu,\sigma,\mathbf{v},p)$ a weak solution of the Cahn-Hilliard-Darcy system endowed with \eqref{BC_1}-\eqref{BC_2} and \eqref{BC_3_DARCY} if \begin{align*} &\varphi\in W^{1,\frac{8}{5}}(0,T;(H^1)^*)\cap L^{\infty}(0,T;H^1)\cap L^2(0,T;H^3),\quad\mu\in L^2(0,T;H^1),\\ & \quad \sigma\in L^2(0,T;H^1),\quad \mathbf{v}\in L^2(0,T;\mathbf L^2),\quad p\in L^2(0,T;H_0^1), \end{align*} such that \begin{equation*} \varphi(0)=\varphi_0 \quad\text{a.\,e. 
in } \Omega \end{equation*} and \begin{subequations}\label{WF_DARCY} \begin{align} \label{WF_DARCY_a}0&= \langle \partial_{t}\varphi{,}\phi\rangle_{H^1} + \int_\Omega m(\varphi)\nabla\mu\cdot\nabla\phi{\mathrm{d}} x + \int_\Omega (\nabla\varphi\cdot\mathbf{v} + \varphi\Gamma_{\mathbf{v}}(\varphi,\sigma) - \Gamma_{\varphi}(\varphi,\sigma))\phi{\mathrm{d}} x,\\ \label{WF_DARCY_b}0&= \int_\Omega(\mu+\chi\sigma)\phi - \epsilon^{-1}\psi'(\varphi)\phi - \epsilon\nabla\varphi\cdot\nabla\phi{\mathrm{d}} x,\\ \label{WF_DARCY_c}0 &= \int_\Omega \nabla\sigma\cdot\nabla\phi + h(\varphi)\sigma\phi{\mathrm{d}} x + \int_{\partial\Omega}K(\sigma-\sigma_{\infty})\phi{\mathrm{d}}\mathcal{H}^{d-1},\\ \label{WF_DARCY_d}0 &= \int_\Omega \left(\mathbf{v} + \frac{1}{\nu}\nabla p-\frac{1}{\nu}(\mu+\chi\sigma)\nabla\varphi\right)\cdot\boldsymbol{\Phi}{\mathrm{d}} x,\\ \label{WF_DARCY_e}0&= \int_\Omega \frac{1}{\nu}\nabla p\cdot\nabla\xi - \Gamma_{\mathbf{v}}(\varphi,\sigma)\xi - \frac{1}{\nu}(\mu+\chi\sigma)\nabla\varphi\cdot\nabla\xi{\mathrm{d}} x, \end{align} \end{subequations} for a.\,e. $t\in (0,T)$ and all $\phi\in H^1,\,\xi\in H_0^1,\,\boldsymbol{\Phi}\in\mathbf{L}^2$. \end{definition} The following theorem states that solutions of the Cahn-Hilliard-Darcy system can be found as the limit of solutions of the Cahn-Hilliard-Brinkman system when the viscosities tend to zero. \begin{theorem}\label{THM_ZERO_VISC} Let $\Omega\subset\mathbb R^d,\,d=2,3,$ be a bounded domain with $C^3$-boundary and assume that \textnormal{(A1)}-\textnormal{(A2)}, \textnormal{(A4)}-\textnormal{(A5)} hold. Furthermore, let $\{\eta_n,\lambda_n\}_{n\in\mathbb N}$ be a sequence of function pairs fulfilling \textnormal{(A3)} such that \begin{equation*} \norm{\eta_n(\cdot)}_{C^0(\mathbb R)}\to 0,\quad \norm{\lambda_n(\cdot)}_{C^0(\mathbb R)}\to 0\quad\text{as }n\to\infty. 
\end{equation*} Let $(\varphi_n,\mu_n,\sigma_n,\mathbf{v}_n,p_n)$ be a sequence of weak solutions of the Cahn-Hilliard-Brinkman system in the sense of Definition \ref{DEF_WSOL_1} for $\eta(\cdot)=\eta_n(\cdot),\,\lambda(\cdot)= \lambda_n(\cdot)$ and originating from $\varphi_0\in H^1$. Then, at least for a subsequence, $(\varphi_n,\mu_n,\sigma_n,\mathbf{v}_n,p_n)$ converges to a weak solution $(\varphi,\mu,\sigma,\mathbf{v},p)$ of the Cahn-Hilliard-Darcy system in the sense of Definition \ref{DEF_WS_DARCY} such that \begin{alignat*}{3} \varphi_{n}&\to\varphi&&\quad\text{weakly-}*&&\quad\text{ in } W^{1,\frac{8}{5}}((H^1)^*)\cap L^{\infty}(H^1)\cap L^4(H^2)\cap L^2(H^3),\\ \sigma_{n}&\to \sigma &&\quad\text{weakly}&&\quad\text{ in }L^2(H^1),\\ \mu_{n}&\to \mu&&\quad\text{weakly}&&\quad\text{ in }L^2(H^1),\\ p_{n}&\to p&&\quad\text{weakly}&&\quad\text{ in }L^{2}(L^2),\\ \mathbf{v}_{n}&\to\mathbf{v}&&\quad\text{weakly}&&\quad\text{ in }L^2(\mathbf L^2)\cap L^2\left(L_{\textnormal{div}}^2(\Omega)\right),\\ 2\eta_n(\varphi_n) {\mathbf{D}}\mathbf{v}_{n}&\to \mathbf{0}&&\quad\text{weakly}&&\quad\text{ in }L^2(\mathbf L^2),\\ \lambda_n(\varphi_n) \textnormal{div}(\mathbf{v}_n)\mathbf{I}&\to \mathbf{0}&&\quad\text{weakly}&&\quad\text{ in }L^2(\mathbf L^2), \end{alignat*} and \begin{equation*} \varphi_{n}\to\varphi\quad\text{strongly}\quad\text{ in }C^0(L^r)\cap L^2(W^{2,r})\text{ and a.\,e. in }Q, \end{equation*} for all $r\in [1,6)$. Moreover, it holds that \begin{alignat*}{3} \partial_\n\varphi &= p= 0 &&\quad \text{a.\,e. on }\Sigma,\\ \textnormal{div}(\mathbf{v}) &= \Gamma_{\mathbf{v}}(\varphi,\sigma)&&\quad\text{a.\,e. in }Q,\\ \mathbf{v} &= \frac{1}{\nu}\left(-\nabla p + (\mu+\chi\sigma)\nabla\varphi\right)&&\quad\text{a.\,e. in }Q,\\ \mu &= \epsilon^{-1}\psi'(\varphi)-\epsilon\Delta \varphi-\chi\sigma &&\quad\text{a.\,e. 
in }Q, \end{alignat*} and \begin{align} \nonumber &\norm{\varphi}_{W^{1,\frac{8}{5}}((H^1)^*)\cap L^{\infty}(H^1)\cap L^4(H^2)\cap L^2(H^3) } + \norm{\mu}_{L^2(H^1)} + \norm{\sigma}_{L^2(H^1)} \\ \label{EST_DARCY}&\quad + \norm{\textnormal{div}(\varphi\mathbf{v})}_{L^{\frac{8}{5}}((H^1)^*)} + \norm{\mathbf{v}}_{L^2(L_{\textnormal{div}}^2(\Omega))}+ \norm{p}_{L^2(H_0^1)}\leq C, \end{align} with a constant $C$ depending only on the system parameters and on $\Omega,\,T$. \end{theorem} To prove continuous dependence on the initial and boundary data, we make the following additional assumptions: \begin{annahme}\label{ASS_1}\, \begin{enumerate} \item[(B1)] The mobility $m(\cdot)$ is a constant; without loss of generality we assume $m(\cdot)\equiv 1$. \item[(B2)] The functions $b_{\varphi}(\cdot)$,\,$f_{\varphi}(\cdot)$ and $h(\cdot)$ are Lipschitz continuous with Lipschitz constants $L_b$,\,$L_f$ and $L_h$, respectively. \item[(B3)] For $\psi'$ and $\psi''$, we assume that \begin{alignat}{3} \label{ASS_PSI_6}|\psi'(s_1)-\psi'(s_2)|&\leq k_1(1+|s_1|^4 + |s_2|^4)|s_1-s_2|&&\quad \forall s_1,s_2\in\mathbb R,\\ \label{ASS_PSI_7}|\psi''(s_1)-\psi''(s_2)|&\leq k_2(1+|s_1|^3 + |s_2|^3)|s_1-s_2|&&\quad \forall s_1,s_2\in\mathbb R, \end{alignat} for some positive constants $k_1,k_2$. \end{enumerate} \end{annahme} \begin{theorem} \label{THM_CONTDEP} Let $\Omega\subset\mathbb R^d,\,d=2,3,$ be a bounded domain with $C^3$-boundary and assume that Assumptions \ref{ASS} and \ref{ASS_1} hold. 
Then, for any two weak solution quintuples $\{\varphi_i,\mu_i,\sigma_i,\mathbf{v}_i,p_i\}$, $i=1,2,$ of \eqref{MEQ}, \eqref{BIC} satisfying
\begin{align*} &\varphi_i \in H^1(0,T;(H^1)^*)\cap L^{\infty}(0,T;H^1)\cap L^4(0,T;H^2)\cap L^2(0,T;H^3) ,\quad \sigma_i\in L^{4}(0,T;H^1),\\ &\mu_i\in L^4(0,T;L^2)\cap L^2(0,T;H^1) ,\quad\mathbf{v}_i\in L^{\frac{8}{3}}(0,T;\mathbf H^1),\quad p_i\in L^{2}(0,T;L^2), \end{align*}
with $\sigma_{i,\infty}\in L^4(L^2(\partial\Omega))$ and $\varphi_i(0)=\varphi_{i,0}\in H^1$ for $i=1,2$, there exists a positive constant $C$ depending only on the system parameters and on $\Omega,T,L_h,L_b,L_f$ such that
\begin{align} \nonumber &\sup_{t\in (0,T]}\left(\normh{1}{\varphi_1(t)-\varphi_2(t)}^2\right) + \norm{\varphi_1-\varphi_2}_{H^1((H^1)^*)\cap L^2(H^3)}^2 + \norm{\mu_1-\mu_2}_{L^2(H^1)}^2\\ \nonumber&\quad + \norm{\sigma_1-\sigma_2}_{L^2(H^1)}^2 + \norm{\mathbf{v}_1-\mathbf{v}_2}_{L^2(\mathbf H^1)}^2 + \norm{p_1-p_2}_{L^2(L^2)}^2\\ \label{THM_CONTDEP_EST}&\quad\leq C\left(\normh{1}{\varphi_{1,0}-\varphi_{2,0}}^2 + \norm{\sigma_{1,\infty}-\sigma_{2,\infty}}_{L^4(L^2(\partial\Omega))}^2\right). \end{align} \end{theorem}
We have the following notion of strong solutions:
\begin{definition}(Strong solution for \eqref{MEQ},\eqref{BIC}) \label{DEF_SSOL_1} We call a quintuple $(\varphi,\mu,\sigma,\mathbf{v},p)$ a strong solution of \eqref{MEQ} and \eqref{BIC} if \begin{align*} &\varphi \in H^1(0,T;L^2)\cap L^{\infty}(0,T;H^2)\cap L^2(0,T;H^4) ,\quad \mu\in L^2(0,T;H^2),\\ &\sigma\in L^2(0,T;H^2),\quad\mathbf{v}\in L^2(0,T;\mathbf H^2),\quad p\in L^{2}(0,T;H^1), \end{align*} and \eqref{MEQ}, \eqref{BIC} are fulfilled almost everywhere in the respective sets. \end{definition}
For the existence of strong solutions, we make the following additional assumptions:
\begin{annahme}\label{ASS_2}\, \begin{enumerate} \item[(C1)] The mobility $m(\cdot)$ is a constant; without loss of generality, we assume $m(\cdot)\equiv 1$. The function $h(\cdot)$ is Lipschitz continuous with Lipschitz constant $L_h$. \item[(C2)] The boundary datum $\sigma_{\infty}\in H^1(0,T;H^{\frac{1}{2}}(\partial\Omega))$ and the initial datum $\varphi_0\in H_N^2$ are prescribed. \item[(C3)] The function $\psi\in C^3(\mathbb R)$ fulfils \begin{equation} \label{ASS_PSI_8}|\psi'''(s)|\leq k_3(1+|s|^3)\quad\forall s\in \mathbb R, \end{equation} for a positive constant $k_3$. \end{enumerate} \end{annahme}
We have the following result concerning strong solutions:
\begin{theorem}\label{THM_SSOL} Let $\Omega\subset\mathbb R^d,\,d=2,3,$ be a bounded domain with $C^4$-boundary and assume that Assumptions \ref{ASS} and \ref{ASS_2} hold. Then there exists a solution quintuple $(\varphi,\mu,\sigma,\mathbf{v},p)$ of \eqref{MEQ}, \eqref{BIC} in the sense of Definition \ref{DEF_SSOL_1}. Furthermore, we have \begin{equation*} \varphi \in C^0(\bar{Q}),\quad \mu\in L^{\infty}(0,T;L^2),\quad \sigma\in H^1(0,T;H^1)\cap L^{\infty}(0,T;H^2),\quad\mathbf{v}\in L^{8}(0,T;\mathbf H^2),\quad p\in L^{8}(0,T;H^1), \end{equation*} and the following estimate holds: \begin{align} \nonumber&\norm{\varphi}_{H^1(L^2)\cap C^0(\bar{Q})\cap L^{\infty}(H^2)\cap L^2(H^4) } + \norm{\sigma}_{H^1(H^1)\cap L^{\infty}(H^2)} + \norm{\mu}_{L^{\infty}(L^2)\cap L^2(H^2)}\\ \label{THM_SSOL_EST}&\quad + \norm{\textnormal{div}(\varphi\mathbf{v})}_{L^2(L^2)} + \norm{\mathbf{v}}_{L^{8}(\mathbf H^2)}+ \norm{p}_{L^8(H^1)}\leq C. \end{align} \end{theorem}
\section{Existence of weak solutions}
In order to prove Theorem \ref{THM_WSOL_1}, we now derive a priori estimates for \eqref{WFORM_1a}-\eqref{WFORM_1d}. By $C$, we denote a positive constant not depending on $(\varphi,\mu,\sigma,\mathbf{v},p)$ which may vary from line to line. The duality pairing in \eqref{WFORM_1b} can be replaced by the $L^2$-product for sufficiently smooth functions, as is the case, for example, within a Galerkin ansatz.
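Concretely, such a Galerkin ansatz can be sketched as follows; the particular choice of basis is an illustrative assumption and is not prescribed by the analysis below:

```latex
% Illustrative sketch of a Galerkin ansatz (the basis choice is an assumption):
% let $\{w_i\}_{i\in\mathbb N}$ be, e.g., the $L^2$-orthonormal eigenfunctions of the
% Neumann Laplacian, and seek discrete solutions of the form
\begin{equation*}
	\varphi_n(t,x)=\sum_{i=1}^{n}a_i^n(t)\,w_i(x),\qquad
	\mu_n(t,x)=\sum_{i=1}^{n}b_i^n(t)\,w_i(x),
\end{equation*}
% while $\sigma_n$, $\mathbf{v}_n$ and $p_n$ are determined from $(\varphi_n,\mu_n)$ by
% solving the corresponding subsystems in their full function spaces.
```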
Approximating solutions can be constructed by applying a Galerkin approximation with respect to $\varphi$ and $\mu$ and at the same time solving for $\mathbf{v},\,p$ and $\sigma$ in the corresponding full function spaces (for details, we refer to \cite{EbenbeckGarcke}, \cite{GarckeLam1}). In the following, we will write $\Gamma_{\varphi},\,\Gamma_{\mathbf{v}}$ instead of $\Gamma_{\varphi}(\varphi,\sigma),\,\Gamma_{\mathbf{v}}(\varphi,\sigma)$.
\subsection{A priori estimates}
\subsubsection{Estimating the nutrient concentration}
Choosing $\Phi = \sigma$ in \eqref{WFORM_1d} and using the non-negativity of $h(\cdot)$, we obtain \begin{equation} \label{APRI_EQ_1}\int_\Omega |\nabla\sigma|^2\;\mathrm dx + K\int_{\partial\Omega}|\sigma|^2 {\mathrm{d}}\mathcal{H}^{d-1} \leq K\int_{\partial\Omega}\sigma\sigma_{\infty}{\mathrm{d}}\mathcal{H}^{d-1}. \end{equation} Using Hölder's and Young's inequalities, we have \begin{equation} \label{APRI_EQ_2}\left|K\int_{\partial\Omega}\sigma\sigma_{\infty}{\mathrm{d}}\mathcal{H}^{d-1}\right|\leq \frac{K}{2}\norm{\sigma}_{L^2(\partial\Omega)}^2 + \frac{K}{2}\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2. \end{equation} Using \eqref{APRI_EQ_1}-\eqref{APRI_EQ_2} and Poincaré's inequality, we deduce that \begin{equation} \label{APRI_EQ_3}\normh{1}{\sigma}\leq C\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}. \end{equation} Moreover, by the continuous embedding $H^1\hookrightarrow L^p,\, p\in[2,6],$ and \textnormal{(A4)}, we have \begin{equation} \label{APRI_EQ_4}\norml{p}{\Gamma_{\varphi}} + \norml{p}{\Gamma_{\mathbf{v}}}\leq C\left(1+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}\right)\quad\forall p\in[2,6].
\end{equation}
\subsubsection{An energy identity}
Due to \textnormal{(A4)}, \eqref{APRI_EQ_3}-\eqref{APRI_EQ_4}, there exists a solution $\mathbf{u} \in \mathbf H^1$ of the problem \begin{equation} \label{APRI_EQ_6}\textnormal{div}(\mathbf{u}) = \Gamma_{\mathbf{v}} \quad\text{in }\Omega,\qquad \mathbf{u} = \frac{1}{|\partial\Omega|}\left(\int_\Omega\Gamma_{\mathbf{v}}\;\mathrm dx \right)\mathbf n \eqqcolon \mathbf a\quad\text{on }\partial\Omega, \end{equation} satisfying the estimate \begin{equation} \label{APRI_EQ_7}\norm{\mathbf{u} }_{\mathbf H^1}\leq C\norml{2}{\Gamma_{\mathbf{v}}}\leq C\left(1+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}\right). \end{equation} Choosing $\boldsymbol{\Phi}=\mathbf{v}-\mathbf{u}$ in \eqref{WFORM_1a}, $\Phi = \mu+\chi\sigma$ in \eqref{WFORM_1b}, $\Phi = \partial_{t}\varphi$ in \eqref{WFORM_1c}, and summing the resulting identities, we obtain \begin{align} \nonumber &\frac{{\mathrm{d}}}{\;\mathrm dt } \int_\Omega \epsilon^{-1}\psi(\varphi)+\frac{\epsilon}{2}|\nabla\varphi|^2 \;\mathrm dx +\int_\Omega m(\varphi)|\nabla\mu|^2 \;\mathrm dx +\int_\Omega 2\eta(\varphi)|{\mathbf{D}}\mathbf{v}|^2 +\nu|\mathbf{v}|^2\;\mathrm dx \\ \nonumber&\quad = \int_\Omega -m(\varphi)\chi\nabla\mu\cdot\nabla\sigma + (\Gamma_{\varphi}-\varphi\Gamma_{\mathbf{v}})(\mu+\chi\sigma)\;\mathrm dx \\ \label{APRI_EQ_8}& \quad +\int_\Omega 2\eta(\varphi) {\mathbf{D}}\mathbf{v}\colon \nabla\mathbf{u} + \nu\mathbf{v}\cdot \mathbf{u} \;\mathrm dx - \int_\Omega (\mu + \chi\sigma)\nabla\varphi \cdot \mathbf{u} \;\mathrm dx.
\end{align} \subsubsection{Estimating the right hand side of the energy identity} Using Hölder's and Young's inequalities together with \eqref{APRI_EQ_7}, we have \begin{equation} \label{APRI_EQ_9}\left|\int_\Omega 2\eta(\varphi) {\mathbf{D}}\mathbf{v} \colon\nabla\mathbf{u} + \nu\mathbf{v} \cdot\mathbf{u} \;\mathrm dx \right| \leq \normL{2}{\sqrt{\eta(\varphi)}{\mathbf{D}}\mathbf{v}}^2 + \frac{\nu}{2}\normL{2}{\mathbf{v}}^2 + C\left(1+\norml{\infty}{\eta(\varphi)}\right)\norml{2}{\Gamma_{\mathbf{v}}}^2. \end{equation} We now want to estimate the terms involving $\Gamma_{\mathbf{v}}$ and $\Gamma_{\varphi}$. Using Hölder's, Poincaré's and Young's inequalities together with \eqref{APRI_EQ_3}-\eqref{APRI_EQ_4}, we obtain \begin{align} \nonumber \left|\int_\Omega \Gamma_{\varphi}(\mu+\chi\sigma)\;\mathrm dx \right| &\leq C_P\norml{2}{\Gamma_{\varphi}}\big(|(\mu+\chi\sigma)_{\Omega}| + \normL{2}{\nabla(\mu+\chi\sigma)}\big)\\ \label{APRI_EQ_10}&\leq C\left(1+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2\right)\big(1+|(\mu+\chi\sigma)_{\Omega}|\big) + \frac{m_0}{8}\normL{2}{\nabla\mu}^2. \end{align} With similar arguments, using the continuous embedding $H^1\hookrightarrow L^6$, we obtain \begin{align} \nonumber \left|\int_\Omega \Gamma_{\mathbf{v}}\varphi(\mu+\chi\sigma)\;\mathrm dx \right| &\leq C\norml{3}{\Gamma_{\mathbf{v}}}\norml{2}{\varphi}\norml{6}{\mu+\chi\sigma} \\ \nonumber &\leq C\norml{3}{\Gamma_{\mathbf{v}}}\norml{2}{\varphi}\big(|(\mu+\chi\sigma)_{\Omega}| + \normL{2}{\nabla(\mu+\chi\sigma)}\big)\\ \label{APRI_EQ_11} &\leq C\left(1+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2\right)\big(1+\norml{2}{\varphi}^2 + \norml{2}{\varphi}|(\mu+\chi\sigma)_{\Omega}|\big) + \frac{m_0}{8}\normL{2}{\nabla\mu}^2. 
\end{align} Now, choosing $\Phi = 1$ in \eqref{WFORM_1c} and using \eqref{ASS_PSI_5}, we obtain \begin{equation*} \left|\int_\Omega \mu+\chi\sigma\;\mathrm dx \right| = \left|\int_\Omega \epsilon^{-1}\psi'(\varphi)\;\mathrm dx\right|\leq \epsilon^{-1}R_6\int_\Omega 1+|\varphi|^{\rho-1}\;\mathrm dx \leq C\left(1+\norml{\rho-1}{\varphi}^{\rho-1}\right), \end{equation*} hence \begin{equation*} |(\mu+\chi\sigma)_{\Omega}|\leq C\left(1+\norml{\rho-1}{\varphi}^{\rho-1}\right)\leq C\left(1+\norml{\rho}{\varphi}^{\rho-1}\right). \end{equation*} In particular, using Young's inequality, the continuous embedding $L^{\rho}\hookrightarrow L^2,\,\rho\in [2,6],$ and \eqref{ASS_PSI_4}, this implies \begin{align} \label{APRI_EQ_12} |(\mu+\chi\sigma)_{\Omega}|&\leq C\left(1+\norml{\rho}{\varphi}^{\rho-1}\right) \leq C\left(1+\norml{\rho}{\varphi}^{\rho}\right)\leq C\left(1+\norml{1}{\psi(\varphi)}\right),\\ \label{APRI_EQ_13} \norml{2}{\varphi}|(\mu+\chi\sigma)_{\Omega}|&\leq C\left(\norml{2}{\varphi} + \norml{\rho}{\varphi}^{\rho}\right)\leq C\left(1+\norml{\rho}{\varphi}^{\rho}\right)\leq C\left(1+\norml{1}{\psi(\varphi)}\right). \end{align} Using \eqref{APRI_EQ_12}-\eqref{APRI_EQ_13} in \eqref{APRI_EQ_10}-\eqref{APRI_EQ_11} and applying the continuous embedding $L^{\rho}\hookrightarrow L^2,\,\rho\in [2,6],$ together with \eqref{ASS_PSI_4}, we end up with \begin{equation} \label{APRI_EQ_14} \left|\int_\Omega (\Gamma_{\varphi}-\varphi\Gamma_{\mathbf{v}})(\mu+\chi\sigma) \;\mathrm dx\right| \leq C\left(1+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2\right)\left(1+\norml{1}{\psi(\varphi)}\right) + \frac{m_0}{4}\normL{2}{\nabla\mu}^2. \end{equation} For the first term on the r.h.s. 
of \eqref{APRI_EQ_8}, applying Hölder's and Young's inequalities, \textnormal{(A2)} and \eqref{APRI_EQ_3}, we obtain \begin{equation} \label{APRI_EQ_15} \left|\int_\Omega m(\varphi)\chi\nabla\mu\cdot\nabla\sigma\;\mathrm dx \right| \leq m_1\chi\normL{2}{\nabla\mu}\normL{2}{\nabla\sigma}\leq \frac{m_0}{8}\normL{2}{\nabla\mu}^2 + C\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2. \end{equation} \subsubsection{Estimating the remaining term} \paragraph{The case $\rho=2$:}Using Hölder's, Young's and Poincaré's inequalities, the continuous embeddings $\mathbf H^1\hookrightarrow \mathbf L^6,\, H^1\hookrightarrow L^3$ and \eqref{ASS_PSI_4}, \eqref{APRI_EQ_7}, \eqref{APRI_EQ_12}, we obtain \begin{align} \nonumber\left|\int_\Omega (\mu+\chi\sigma)\nabla\varphi\cdot\mathbf{u}\;\mathrm dx \right| &\leq \norml{3}{\mu+\chi\sigma}\normL{2}{\nabla\varphi}\normL{6}{\mathbf{u}}\\ \nonumber&\leq C\big(|(\mu+\chi\sigma)_{\Omega}| + \normL{2}{\nabla(\mu+\chi\sigma)}\big)\normL{2}{\nabla\varphi}\normH{1}{\mathbf{u}}\\ \nonumber&\leq C\big(1+\norml{2}{\varphi} + \normL{2}{\nabla(\mu+\chi\sigma)}\big)\normL{2}{\nabla\varphi}\left(1+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}\right)\\ \nonumber&\leq C\left(1+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2\right)\left(1+\norml{2}{\varphi}^2 + \normL{2}{\nabla\varphi}^2\right) + \frac{m_0}{8}\normL{2}{\nabla\mu}^2\\ \label{APRI_EQ_16}&\leq C\left(1+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2\right)\left(1+\norml{1}{\psi(\varphi)} + \normL{2}{\nabla\varphi}^2\right) + \frac{m_0}{8}\normL{2}{\nabla\mu}^2. \end{align} \paragraph{The case $\rho\in (2,6]$:} In this case, we need a more subtle argument. 
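To indicate the obstruction (a sketch, combining \eqref{APRI_EQ_12} with the coercivity of $\psi$ from \eqref{ASS_PSI_4}): for $\rho>2$, the mean-value bound is superlinear in the quantities controlled by the energy,

```latex
\begin{equation*}
	|(\mu+\chi\sigma)_{\Omega}|\,\normL{2}{\nabla\varphi}
	\leq C\left(1+\norml{\rho}{\varphi}^{\rho-1}\right)\normL{2}{\nabla\varphi}
	\leq C\left(1+\norml{1}{\psi(\varphi)}+\normL{2}{\nabla\varphi}^2\right)^{\frac{\rho-1}{\rho}+\frac{1}{2}},
\end{equation*}
```

and the exponent $\frac{\rho-1}{\rho}+\frac{1}{2}$ exceeds $1$ precisely for $\rho>2$, so such a term is no longer amenable to Gronwall's lemma.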
Choosing $\Phi = -\Delta\varphi$ in \eqref{WFORM_1c}, integrating by parts and using \eqref{ASS_PSI_1}, it holds \begin{equation} \label{APRI_EQ_17}\epsilon\int_\Omega |\Delta\varphi|^2\;\mathrm dx + \epsilon^{-1}\int_\Omega \psi_1''(\varphi)|\nabla\varphi|^2\;\mathrm dx = -\epsilon^{-1}\int_\Omega \psi_2''(\varphi)|\nabla\varphi|^2\;\mathrm dx + \int_\Omega \nabla(\mu+\chi\sigma)\cdot\nabla\varphi\;\mathrm dx. \end{equation} Neglecting the non-negative term $\epsilon\int_\Omega |\Delta\varphi|^2\;\mathrm dx$ on the l.h.s. of this equation and using \eqref{ASS_PSI_2}-\eqref{ASS_PSI_3} together with Young's inequality, we obtain \begin{equation} \label{APRI_EQ_18}\int_\Omega |\varphi|^{\rho-2}|\nabla\varphi|^2\;\mathrm dx \leq \left(1+\frac{R_3}{R_1} + \frac{\epsilon^2}{4\delta R_1^2}\right)\normL{2}{\nabla\varphi}^2 + \delta\normL{2}{\nabla(\mu+\chi\sigma)}^2, \end{equation} with $\delta>0$ to be chosen later. Observing that \begin{equation*} \left| \nabla\left(\frac{2|\varphi|^{\frac{\rho}{2}}}{\rho}\right)\right| = |\varphi|^{\frac{\rho-2}{2}}|\nabla\varphi|, \end{equation*} from \eqref{APRI_EQ_18} we deduce that \begin{equation} \label{APRI_EQ_19}\normL*{2}{\nabla(|\varphi|^{\frac{\rho}{2}})}^2\leq \frac{\rho^2}{4}\left(1+\frac{R_3}{R_1} + \frac{\epsilon^2}{4\delta R_1^2}\right)\normL{2}{\nabla\varphi}^2 + \frac{\delta\rho^2}{4}\normL{2}{\nabla(\mu+\chi\sigma)}^2. 
\end{equation} Now, applying the trace theorem and \eqref{APRI_EQ_19} yields \begin{align*} \nonumber \norm*{|\varphi|^{\frac{\rho}{2}}}_{L^2(\partial\Omega)}^2 &\leq C_{tr}^2\left(\norml*{2}{|\varphi|^{\frac{\rho}{2}}}^2 + \normL*{2}{\nabla\left(|\varphi|^{\frac{\rho}{2}}\right)}^2\right)\\ &\leq C_{tr}^2\left(\norml{\rho}{\varphi}^{\rho} +\frac{\rho^2}{4}\left(1+\frac{R_3}{R_1} + \frac{\epsilon^2}{4\delta R_1^2}\right)\normL{2}{\nabla\varphi}^2 + \frac{\delta\rho^2}{4}\normL{2}{\nabla(\mu+\chi\sigma)}^2\right), \end{align*} hence \begin{equation} \label{APRI_EQ_20} \norm{\varphi}_{L^{\rho}(\partial\Omega)}^{\rho}\leq C_{tr}^2\left(\norml{\rho}{\varphi}^{\rho} +\frac{\rho^2}{4}\left(1+\frac{R_3}{R_1} + \frac{\epsilon^2}{4\delta R_1^2}\right)\normL{2}{\nabla\varphi}^2 + \frac{\delta\rho^2}{4}\normL{2}{\nabla(\mu+\chi\sigma)}^2\right). \end{equation} Now, upon integrating by parts and recalling $\eqref{APRI_EQ_6}_2$, we calculate \begin{align} \nonumber\int_\Omega (\mu+\chi\sigma)\nabla\varphi\cdot\mathbf{u}\;\mathrm dx &= \int_\Omega \big(\mu+\chi\sigma - (\mu+\chi\sigma)_{\Omega}\big)\nabla\varphi\cdot\mathbf{u}\;\mathrm dx + (\mu+\chi\sigma)_{\Omega}\int_\Omega \nabla\varphi\cdot\mathbf{u}\;\mathrm dx\\ \nonumber&= \int_\Omega \big(\mu+\chi\sigma - (\mu+\chi\sigma)_{\Omega}\big)\nabla\varphi\cdot\mathbf{u}\;\mathrm dx \\ \label{APRI_EQ_21} &\quad + (\mu+\chi\sigma)_{\Omega}\left(\frac{1}{|\partial\Omega|}\int_\Omega \Gamma_{\mathbf{v}}\;\mathrm dx \int_{\partial\Omega}\varphi{\mathrm{d}}\mathcal{H}^{d-1} - \int_\Omega \varphi\Gamma_{\mathbf{v}}\;\mathrm dx\right). 
\end{align} Using Hölder's, Young's and Poincaré's inequalities, the continuous embedding $\mathbf H^1\hookrightarrow\mathbf L^6$ and \eqref{APRI_EQ_7}, it is straightforward to check that \begin{equation} \label{APRI_EQ_22}\left|\int_\Omega \big(\mu+\chi\sigma - (\mu+\chi\sigma)_{\Omega}\big)\nabla\varphi\cdot\mathbf{u}\;\mathrm dx\right| \leq C_{\delta_1}(1+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2)(1+\normL{2}{\nabla\varphi}^2) + \delta_1 \normL{2}{\nabla\mu}^2, \end{equation} with $\delta_1>0$ to be chosen. Using \eqref{APRI_EQ_7}, \eqref{APRI_EQ_13} and Hölder's inequality, we obtain \begin{equation} \label{APRI_EQ_23} \left|(\mu+\chi\sigma)_{\Omega}\int_\Omega \varphi\Gamma_{\mathbf{v}}\;\mathrm dx\right| \leq |(\mu+\chi\sigma)_{\Omega}| \norml{2}{\Gamma_{\mathbf{v}}}\norml{2}{\varphi}\leq C\left(1+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}\right)\left(1+\norml{1}{\psi(\varphi)}\right) \end{equation} Now, using Hölder's and Young's inequalities, \eqref{ASS_PSI_4}, \eqref{APRI_EQ_7}, \eqref{APRI_EQ_12}, \eqref{APRI_EQ_20} and recalling $\rho >2$, we deduce that \begin{align} \nonumber \left|(\mu+\chi\sigma)_{\Omega}\frac{1}{|\partial\Omega|}\int_\Omega \Gamma_{\mathbf{v}}\;\mathrm dx \int_{\partial\Omega}\varphi{\mathrm{d}}\mathcal{H}^{d-1}\right| &\leq C|(\mu+\chi\sigma)_{\Omega}| \norml{2}{\Gamma_{\mathbf{v}}}\norm{\varphi}_{L^{\rho}(\partial\Omega)}\\ \nonumber &\leq C\left(1+\norml{\rho}{\varphi}^{\rho-1}\right)\norml{2}{\Gamma_{\mathbf{v}}}\norm{\varphi}_{L^{\rho}(\partial\Omega)}\\ \nonumber &\leq C_{\delta_2}(1+\norml{\rho}{\varphi}^{\rho})\norml{2}{\Gamma_{\mathbf{v}}}^{\frac{\rho}{\rho-1}} + \delta_2\norm{\varphi}_{L^{\rho}(\partial\Omega)}^{\rho}\\ \label{APRI_EQ_24}&\leq C_{\delta_2}\left(1+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2\right)(1+\norml{1}{\psi(\varphi)})+ \delta_2\norm{\varphi}_{L^{\rho}(\partial\Omega)}^{\rho} \end{align} where we used that $\frac{\rho-1}{\rho} + \frac{1}{\rho} = 1$. 
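Written out, the Young step in \eqref{APRI_EQ_24} is the elementary estimate with the conjugate exponents $\rho$ and $\frac{\rho}{\rho-1}$ (a sketch):

```latex
\begin{equation*}
	ab\leq \delta_2\,a^{\rho}+C_{\delta_2}\,b^{\frac{\rho}{\rho-1}},\qquad
	a=\norm{\varphi}_{L^{\rho}(\partial\Omega)},\quad
	b=C\left(1+\norml{\rho}{\varphi}^{\rho-1}\right)\norml{2}{\Gamma_{\mathbf{v}}},
\end{equation*}
```

combined with $\big(1+\norml{\rho}{\varphi}^{\rho-1}\big)^{\frac{\rho}{\rho-1}}\leq C\big(1+\norml{\rho}{\varphi}^{\rho}\big)$.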
Now, plugging in \eqref{APRI_EQ_22}-\eqref{APRI_EQ_24} into \eqref{APRI_EQ_21}, using \eqref{ASS_PSI_4}, \eqref{APRI_EQ_20} and choosing $\delta,\delta_1,\delta_2$ small enough, we finally obtain \begin{equation} \label{APRI_EQ_25} \left|\int_\Omega (\mu+\chi\sigma)\nabla\varphi\cdot\mathbf{u}\;\mathrm dx\right|\leq \left(1+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2\right)\left(1+\norml{1}{\psi(\varphi)} + \normL{2}{\nabla\varphi}^2\right) + \frac{m_0}{8}\normL{2}{\nabla\mu}^2. \end{equation} Plugging in \eqref{APRI_EQ_9}, \eqref{APRI_EQ_14}-\eqref{APRI_EQ_16}, \eqref{APRI_EQ_25} into \eqref{APRI_EQ_8}, using \eqref{ASS_PSI_4} and \eqref{APRI_EQ_3}, we end up with \begin{align*} &\frac{{\mathrm{d}}}{\;\mathrm dt } \int_\Omega \epsilon^{-1}\psi(\varphi)+\frac{\epsilon}{2}|\nabla\varphi|^2 \;\mathrm dx + \frac{m_0}{4}\normL{2}{\nabla\mu}^2+ \frac{\nu}{2}\normL{2}{\mathbf{v}}^2 + \normL{2}{\sqrt{\eta(\varphi)}{\mathbf{D}}\mathbf{v}}^2 + \normh{1}{\sigma}^2\\ &\leq \alpha(t)\left(1+ \normL{2}{\nabla\varphi(t)}^2 + \norml{1}{\psi(\varphi(t))}\right), \end{align*} where, recalling \textnormal{(A1)} and \textnormal{(A3)}, \begin{equation} \label{APRI_EQ_25a}\alpha(t)\coloneqq C\left(1+\norml{\infty}{\eta(\varphi(t))}\right)\left(1 + \norm{\sigma_{\infty}(t)}_{L^2(\partial\Omega)}^2\right)\in L^1(0,T).\\ \end{equation} Integrating the last inequality in time from $0$ to $s\in (0,T]$ and applying Gronwall's Lemma (see \cite[Lemma 3.1]{GarckeLam3}) yields \begin{align} \nonumber & \epsilon^{-1}\norml{1}{\psi(\varphi(s))}+\frac{\epsilon}{2}\normL{2}{\nabla\varphi(s)}^2 + \int_{0}^{s}\frac{m_0}{4}\normL{2}{\nabla\mu}^2+ \frac{\nu}{2}\normL{2}{\mathbf{v}}^2 + \normL{2}{\sqrt{\eta(\varphi)}{\mathbf{D}}\mathbf{v}}^2 + \normh{1}{\sigma}^2{\mathrm{d}} t\\ \label{APRI_EQ_26}&\leq \left(\epsilon^{-1}\norml{1}{\psi(\varphi_0)}+\frac{\epsilon}{2}\normL{2}{\nabla\varphi_0}^2 + \int_{0}^{s}\alpha(t)\;\mathrm dt\right)\exp\left(\int_{0}^{s}\alpha(t)\;\mathrm dt\right)\quad\forall s\in 
(0,T]. \end{align}
Due to \textnormal{(A1)}, \textnormal{(A5)} and the continuous embedding $H^1\hookrightarrow L^6$, we have $\psi(\varphi_0)\in L^1$ and $\nabla\varphi_0\in \mathbf L^2$. Then, due to Korn's inequality (see \cite[Thm. 6.3-3]{Ciarlet}) and \textnormal{(A3)}, taking the essential supremum over all $s\in (0,T]$ in \eqref{APRI_EQ_26} implies \begin{equation} \label{APRI_EQ_27}\esssup_{s\in (0,T]}\left(\norml{1}{\psi(\varphi(s))} + \normL{2}{\nabla\varphi(s)}^2\right) +\int_{0}^{T}\normH{1}{\mathbf{v}}^2 + \normL{2}{\nabla\mu}^2 + \normh{1}{\sigma}^2 \;\mathrm dt \leq C. \end{equation} Recalling \eqref{ASS_PSI_4}, using Poincaré's inequality, \eqref{APRI_EQ_12}, \eqref{APRI_EQ_27} and the fact that $\rho\geq 2$, this in particular gives \begin{equation*} \esssup_{s\in (0,T]}\normh{1}{\varphi(s)}^2 + \int_{0}^{T} \normh{1}{\mu}^2\;\mathrm dt\leq C. \end{equation*} Combining the last inequality with \eqref{APRI_EQ_27} yields \begin{equation} \label{APRI_EQ_28}\esssup_{s\in (0,T]}\left(\norml{1}{\psi(\varphi(s))} + \normh{1}{\varphi(s)}^2\right) +\int_{0}^{T}\normH{1}{\mathbf{v}}^2 + \normh{1}{\mu}^2 + \normh{1}{\sigma}^2 \;\mathrm dt \leq C. \end{equation} Due to \textnormal{(A4)} and the continuous embedding $H^1\hookrightarrow L^6$, this implies \begin{equation} \label{APRI_EQ_29}\norm{\Gamma_{\mathbf{v}}}_{L^2(L^6)} + \norm{\Gamma_{\varphi}}_{L^2(L^6)}\leq C. \end{equation}
\subsubsection{Estimating the pressure}
Using Lemma \ref{LEM_DIVEQU}, we deduce that there is at least one solution ${\mathbf{q}}\in \mathbf H^1$ of the system \begin{equation*} \textnormal{div}({\mathbf{q}}) = p\quad\text{in }\Omega,\qquad {\mathbf{q}} = \frac{1}{|\partial\Omega|}\left(\int_\Omega p\;\mathrm dx \right)\mathbf n \quad\text{on }\partial\Omega, \end{equation*} such that \begin{equation} \label{APRI_EQ_30}\normH{1}{{\mathbf{q}}}\leq C_d\norml{2}{p}, \end{equation} with $C_d$ depending only on $\Omega$ and on the exponent $q=2$.
Notice that the compatibility condition \eqref{DIV_COMP_COND} is satisfied since \begin{equation*} \int_{\partial\Omega} {\mathbf{q}}\cdot\mathbf n {\mathrm{d}} \mathcal{H}^{d-1} = \frac{1}{|\partial\Omega|}\left(\int_\Omega p\;\mathrm dx \right)\int_{\partial\Omega}\mathbf n \cdot\mathbf n {\mathrm{d}} \mathcal{H}^{d-1} = \int_\Omega p\;\mathrm dx . \end{equation*} Choosing $\boldsymbol{\Phi}={\mathbf{q}}$ in \eqref{WFORM_1a} and using $\textnormal{div}(\mathbf{v})=\Gamma_{\mathbf{v}}$ a.\,e. in $Q$, we obtain \begin{equation} \label{APRI_EQ_31}\int_\Omega |p|^2\;\mathrm dx = \int_\Omega (2\eta(\varphi) {\mathbf{D}}\mathbf{v} + \lambda(\varphi)\Gamma_{\mathbf{v}}\mathbf{I} )\colon \nabla{\mathbf{q}}\;\mathrm dx + \int_\Omega (\nu\mathbf{v} - (\mu + \chi\sigma)\nabla\varphi)\cdot{\mathbf{q}}\;\mathrm dx . \end{equation} Using \eqref{APRI_EQ_30}, \textnormal{(A3)} and Hölder's and Young's inequalities, an easy calculation shows that \begin{equation} \label{APRI_EQ_32}\norml{2}{p}^2 \leq C\left(\norml{\infty}{\eta(\varphi)}\normL{2}{\sqrt{\eta(\varphi)}{\mathbf{D}}\mathbf{v}}^2 + \norml{\infty}{\lambda(\varphi)}^2\norml{2}{\Gamma_{\mathbf{v}}}^2 + \normL{2}{\mathbf{v}}^2 + \norml{3}{\mu + \chi\sigma}^2\normL{2}{\nabla\varphi}^2\right). \end{equation} Integrating this inequality in time from $0$ to $T$ and using Hölder's inequality, we arrive at \begin{align*} \norm{p}_{L^2(L^2)}^2&\leq C\left(\norm{\eta(\cdot)}_{L^{\infty}(\mathbb R)}\norm{\sqrt{\eta(\varphi)}{\mathbf{D}}\mathbf{v}}_{L^2(\mathbf L^2)}^2 + \norm{\lambda(\cdot)}_{L^{\infty}(\mathbb R)}^2\norm{\Gamma_{\mathbf{v}}}_{L^2(L^2)}^2\right)\\ &\quad + C\left(\norm{\mathbf{v}}_{L^2(\mathbf L^2)}^2+ \norm{\mu+\chi\sigma}_{L^2(L^3)}^2\norm{\nabla\varphi}_{L^{\infty}(\mathbf L^2)}^2\right). \end{align*} Using \eqref{APRI_EQ_26}, \eqref{APRI_EQ_28}-\eqref{APRI_EQ_29} and \textnormal{(A3)}, the last inequality implies \begin{equation} \label{APRI_EQ_33} \norm{p}_{L^2(L^2)}\leq C. 
\end{equation}
\subsubsection{Higher order estimates for $\varphi$}
Our aim is to show that \begin{equation} \label{APRI_EQ_34}\norm{\varphi}_{L^2(H^3)}\leq C. \end{equation} Choosing $\Phi = -\Delta\varphi$ in \eqref{WFORM_1c}, integrating by parts and neglecting the non-negative term resulting from $\psi_1$ (see \textnormal{(A5)}), we obtain \begin{equation*} \epsilon\norml{2}{\Delta\varphi}^2 + \int_\Omega \epsilon^{-1}\psi_2''(\varphi)|\nabla\varphi|^2\;\mathrm dx \leq \int_\Omega \nabla(\mu + \chi\sigma)\cdot\nabla\varphi \;\mathrm dx . \end{equation*} Using Hölder's inequality and the assumptions on $\psi_2$, we therefore get \begin{equation*} \label{higher_order_estimates_4}\epsilon\norml{2}{\Delta\varphi}^2 \leq \epsilon^{-1}R_3\normL{2}{\nabla\varphi}^2 + \normL{2}{\nabla(\mu + \chi\sigma)}\normL{2}{\nabla\varphi}. \end{equation*} Taking the square of this inequality and integrating in time from $0$ to $T$, we obtain \begin{align} \nonumber \epsilon^2\int_{0}^{T}\norml{2}{\Delta\varphi}^4\;\mathrm dt &\leq C\int_{0}^{T} \normL{2}{\nabla\varphi}^4 + \normL{2}{\nabla\varphi}^2\normL{2}{\nabla(\mu + \chi\sigma)}^2\;\mathrm dt \\ \nonumber &\leq C\left(\norm{\nabla\varphi}_{L^{\infty}(\mathbf L^2)}^4 + \norm{\nabla(\mu +\chi\sigma)}_{L^2(\mathbf L^2)}^2\norm{\nabla\varphi}_{L^{\infty}(\mathbf L^2)}^2\right). \end{align} Applying elliptic regularity theory and \eqref{APRI_EQ_28}, this gives \begin{equation} \label{APRI_EQ_35}\norm{\varphi}_{L^4(H^2)}\leq C. \end{equation} Next, we take $\Phi = \Delta^2\varphi$ in \eqref{WFORM_1c}, integrate by parts, and integrate in time from $0$ to $T$ to obtain \begin{equation} \label{APRI_EQ_36}\epsilon\norm{\nabla\Delta\varphi}_{L^2(\mathbf L^2)}^2 = - \int_{0}^{T}\int_\Omega \nabla(\mu + \chi\sigma)\cdot\nabla\Delta\varphi\;\mathrm dx \;\mathrm dt + \int_{0}^{T}\int_\Omega \epsilon^{-1}\psi''(\varphi)\nabla\varphi\cdot\nabla\Delta\varphi\;\mathrm dx \;\mathrm dt . \end{equation} For the first term on the r.h.s.
of \eqref{APRI_EQ_36}, applying Hölder's and Young's inequalities gives \begin{equation} \label{APRI_EQ_37}\left|\int_{0}^{T}\int_\Omega \nabla(\mu + \chi\sigma)\cdot\nabla\Delta\varphi\;\mathrm dx \;\mathrm dt \right| \leq C \left(\norm{\mu}_{L^2(H^1)}^2 + \norm{\sigma}_{L^2(H^1)}^2\right) + \frac{\epsilon}{4}\norm{\nabla\Delta\varphi}_{L^2(\mathbf L^2)}^2. \end{equation} Due to \eqref{LEM_GAGNIR_EST}, \eqref{ASS_PSI_2}-\eqref{ASS_PSI_3} and \eqref{APRI_EQ_28}, using Hölder's and Young's inequalities yields \begin{align} \nonumber \left|\int_{0}^{T}\int_\Omega \epsilon^{-1}\psi''(\varphi)\nabla\varphi\cdot\nabla\Delta\varphi\;\mathrm dx \;\mathrm dt\right| &\leq C \int_{0}^{T}\int_\Omega \left(1+|\varphi|^{\rho-2}\right)|\nabla\varphi||\nabla\Delta\varphi|{\mathrm{d}} x{\mathrm{d}} t\\ \nonumber &\leq C\int_{0}^{T} \left(1+\norml{\infty}{\varphi}^{\rho-2}\right)\normL{2}{\nabla\varphi}\normL{2}{\nabla\Delta\varphi}{\mathrm{d}} t\\ \nonumber &\leq C\int_{0}^{T} \left(1+\normh{2}{\varphi}^{\frac{\rho-2}{2}}\right)\normL{2}{\nabla\Delta\varphi}{\mathrm{d}} t\\ \label{APRI_EQ_38}&\leq C\left(1+\norm{\varphi}_{L^{\rho-2}(H^2)}^{\rho-2}\right) + \frac{\epsilon}{4}\norm{\nabla\Delta\varphi}_{L^2(\mathbf L^2)}^2. \end{align} Recalling that $\rho-2\leq 4$, collecting \eqref{APRI_EQ_36}-\eqref{APRI_EQ_38} and using \eqref{APRI_EQ_28}, \eqref{APRI_EQ_35}, we see that \begin{equation*} \frac{\epsilon}{2}\norm{\nabla\Delta\varphi}_{L^2(\mathbf L^2)}^2\leq C. \end{equation*} Together with \eqref{APRI_EQ_35} and using again elliptic regularity theory, this implies \begin{equation} \label{APRI_EQ_41}\norm{\varphi}_{L^2(H^3)}\leq C.
\end{equation}
\subsubsection{Regularity for the convection terms and the time derivatives}
By Hölder's and Young's inequalities and the continuous embedding $\mathbf H^1\hookrightarrow\mathbf L^6$, we observe that \begin{equation*} \norm{\nabla\varphi\cdot\mathbf{v}}_{L^2(0,T;L^{\frac{3}{2}})}^2 \leq C\int_{0}^{T} \norm{\mathbf{v}}_{\mathbf H^1}^2\norm{\nabla\varphi}_{\mathbf L^2}^2\;\mathrm dt \leq C\norm{\nabla\varphi}_{L^{\infty}(\mathbf L^2)}^2\norm{\mathbf{v}}_{L^2(0,T;\mathbf H^1)}^2. \end{equation*} Using \eqref{APRI_EQ_28}-\eqref{APRI_EQ_29} we see that \begin{equation*} \norm{\varphi\Gamma_{\mathbf{v}}}_{L^2(L^2)}^2 \leq C\norm{\varphi}_{L^{\infty}(H^1)}^2\norm{\Gamma_{\mathbf{v}}}_{L^2(L^3)}^2\leq C. \end{equation*} From the last two inequalities, we deduce that \begin{equation} \label{APRI_EQ_42}\norm{\textnormal{div}(\varphi\mathbf{v})}_{L^2(0,T;L^{\frac{3}{2}})}\leq C. \end{equation} Finally, using \eqref{APRI_EQ_28}-\eqref{APRI_EQ_29} and \eqref{APRI_EQ_42}, a comparison argument in \eqref{WFORM_1b} shows that \begin{equation} \label{APRI_EQ_43}\norm{\partial_{t}\varphi}_{L^2((H^1)^*)}\leq C. \end{equation} Notice that $\partial_{t}\varphi$ has lower regularity than the convection term, since the regularity of the time derivative is additionally limited by the term $\nabla\mu$, which is only bounded in $L^2(\mathbf L^2)$. Summarising all the estimates, we end up with \begin{align} \nonumber&\norm{\varphi}_{H^1((H^1)^*)\cap L^{\infty}(H^1)\cap L^4(H^2)\cap L^2(H^3) } + \norm{\sigma}_{L^2(H^1)} + \norm{\mu}_{L^2(H^1)}\\ \label{APRI_EQ_44}&\quad + \norm{\textnormal{div}(\varphi\mathbf{v})}_{L^2(L^{\frac{3}{2}})} + \norm{\mathbf{v}}_{L^2(\mathbf H^1)}+ \norm{p}_{L^2(L^2)}\leq C. \end{align}
\subsection{Passing to the limit}
The a priori estimates \eqref{APRI_EQ_44}, derived within the Galerkin scheme, are sufficient to deduce the existence of weak solutions. We refer the reader to \cite{EbenbeckGarcke,GarckeLam1} for details on passing to the limit in the Galerkin scheme.
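The compactness needed to pass to the limit in the nonlinear terms can be obtained, for instance, from the Aubin--Lions--Simon lemma applied to the bounds in \eqref{APRI_EQ_44} (a sketch):

```latex
\begin{equation*}
	\{\varphi_n\}\ \text{bounded in}\ L^{\infty}(0,T;H^1)\cap H^1(0,T;(H^1)^*)
	\quad\Longrightarrow\quad
	\varphi_{n_k}\to\varphi\ \text{strongly in}\ C^0([0,T];L^r),\quad r\in[1,6),
\end{equation*}
```

along a subsequence; in particular, $\varphi_{n_k}\to\varphi$ a.\,e. in $Q$, which allows passing to the limit in $\psi'(\varphi_n)$, $m(\varphi_n)$, $\eta(\varphi_n)$ and $h(\varphi_n)$.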
\subsection{Further results on regularity} In the case when $\sigma_{\infty}\in L^4(L^2(\partial\Omega))$, by \eqref{APRI_EQ_3} we obtain \begin{equation} \label{APRI_EQ_45}\norm{\sigma}_{L^4(H^1)}\leq C. \end{equation} In particular, by \textnormal{(A4)} this gives \begin{equation} \label{APRI_EQ_46}\norm{\Gamma_{\mathbf{v}}}_{L^4(L^2)}+\norm{\Gamma_{\varphi}}_{L^4(L^2)}\leq C. \end{equation} Thanks to Lemma \ref{LEM_GAGNIR}, we have the continuous embedding \begin{equation*} L^{\infty}(H^1)\cap L^2(H^3)\hookrightarrow L^{20}(L^{10}). \end{equation*} Hence, the assumptions on $\psi(\cdot)$ and \eqref{APRI_EQ_44} imply \begin{equation} \label{APRI_EQ_47}\norm{\psi'(\varphi)}_{L^4(L^2)}\leq C. \end{equation} Taking $\Phi = \mu+\chi\sigma$ in \eqref{WFORM_1c} and squaring the resulting identity, an application of Hölder's and Young's inequalities gives \begin{equation*} \norml{2}{\mu+\chi\sigma}^4 \leq C\left(\norml{2}{\psi'(\varphi)}^4 + \norm{\nabla(\mu+\chi\sigma)}_{\mathbf L^2}^2\norm{\nabla\varphi}_{\mathbf L^2}^2\right). \end{equation*} Integrating this inequality in time from $0$ to $T$ and using \eqref{APRI_EQ_44}, \eqref{APRI_EQ_47}, we conclude that \begin{equation} \label{APRI_EQ_48}\norm{\mu+\chi\sigma}_{L^4(L^2)}\leq C. \end{equation} We now choose $\boldsymbol{\Phi}=\mathbf{v}$ in \eqref{WFORM_1a}, use Young's, Hölder's and Korn's inequality (see \cite[Thm. 6.3-3]{Ciarlet}) together with the Sobolev embedding $\mathbf H^1\hookrightarrow \mathbf L^6$ to obtain \begin{equation*} \norm{\mathbf{v}}_{\mathbf H^1}^{\frac{8}{3}}\leq C\left(\norml{2}{p}^{\frac{4}{3}}\norml{2}{\Gamma_{\mathbf{v}}}^{\frac{4}{3}} + \norm{(\mu+\chi\sigma)\nabla\varphi}_{\mathbf L^{\frac{6}{5}}}^{\frac{8}{3}}\right). 
\end{equation*} Integrating this inequality in time from $0$ to $T$, by Hölder's and Young's inequalities we get \begin{equation} \label{APRI_EQ_49}\norm{\mathbf{v}}_{L^{\frac{8}{3}}(\mathbf H^1)}^{\frac{8}{3}}\leq C\left(\norm{p}_{L^2(L^2)}^{\frac{4}{3}}\norm{\Gamma_{\mathbf{v}}}_{L^4(L^2)}^{\frac{4}{3}} + \norm{\mu+\chi\sigma}_{L^4(L^2)}^{\frac{8}{3}}\norm{\nabla\varphi}_{L^8(\mathbf L^3)}^{\frac{8}{3}}\right). \end{equation} Applying Lemma \ref{LEM_GAGNIR}, we have the continuous embedding \begin{equation} \label{APRI_EQ_50}L^{\infty}(H^1)\cap L^4(H^2)\hookrightarrow L^8(W^{1,3}). \end{equation} Hence, using \eqref{APRI_EQ_44}, \eqref{APRI_EQ_47} and \eqref{APRI_EQ_48}, from \eqref{APRI_EQ_49} we conclude that \begin{equation} \label{APRI_EQ_51}\norm{\mathbf{v}}_{L^{\frac{8}{3}}(\mathbf H^1)}\leq C. \end{equation} Furthermore, using Hölder's and Young's inequalities, the continuous embedding $\mathbf H^1\hookrightarrow \mathbf L^6$ and \eqref{APRI_EQ_50}-\eqref{APRI_EQ_51} yields \begin{equation*} \norm{\nabla\varphi\cdot\mathbf{v}}_{L^2(0,T;L^2)}^2 \leq C\int_{0}^{T} \norm{\mathbf{v}}_{\mathbf H^1}^2\norm{\nabla\varphi}_{\mathbf L^3}^2\;\mathrm dt \leq C\norm{\nabla\varphi}_{L^8(\mathbf L^3)}^2\norm{\mathbf{v}}_{L^{\frac{8}{3}}(0,T;\mathbf H^1)}^2\leq C. \end{equation*} Together with the estimate (see \eqref{APRI_EQ_29}, \eqref{APRI_EQ_44}) \begin{equation*} \norm{\varphi\Gamma_{\mathbf{v}}}_{L^2(L^2)}^2 \leq C\norm{\varphi}_{L^{\infty}(H^1)}^2\norm{\Gamma_{\mathbf{v}}}_{L^2(L^3)}^2\leq C, \end{equation*} this implies \begin{equation} \label{APRI_EQ_52}\norm{\textnormal{div}(\varphi\mathbf{v})}_{L^2(L^2)}\leq C. 
\end{equation}
Using \eqref{APRI_EQ_45}, \eqref{APRI_EQ_48}, \eqref{APRI_EQ_51}-\eqref{APRI_EQ_52} and recalling \eqref{APRI_EQ_44}, we obtain \begin{align} \nonumber&\norm{\varphi}_{H^1((H^1)^*)\cap L^{\infty}(H^1)\cap L^4(H^2)\cap L^2(H^3) } + \norm{\sigma}_{L^4(H^1)} + \norm{\mu}_{L^2(H^1)\cap L^4(L^2)}\\ \label{APRI_EQ_53}&\quad + \norm{\textnormal{div}(\varphi\mathbf{v})}_{L^2(L^2)} + \norm{\mathbf{v}}_{L^{\frac{8}{3}}(\mathbf H^1)}+ \norm{p}_{L^2(L^2)}\leq C. \end{align}
\subsection{The singular limit of large boundary permeability}
Due to Theorem \ref{THM_WSOL_1}, for every $K>0$ there exists a solution quintuple $(\varphi_K,\mu_K,\sigma_K,\mathbf{v}_K,p_K)$ solving \eqref{MEQ}, \eqref{BIC} in the sense of Definition \ref{DEF_WSOL_1} and enjoying the regularity properties stated in Theorem \ref{THM_WSOL_1}. In the following, we assume without loss of generality that $K>1$. Let $E\colon H^{\frac{1}{2}}(\partial\Omega)\to H^1$ be a bounded, linear extension operator satisfying $(Ef)|_{\partial\Omega} = f$ for all $f\in H^{\frac{1}{2}}(\partial\Omega)$ (see \cite[Chap. 2, Thm. 5.7]{Necas} for details). Then, choosing $\Phi = \sigma_K-E\sigma_{\infty}$ in \eqref{WFORM_1d} (in the following, we will omit the operator $E$), we obtain \begin{equation} \label{limit_K_eq_1}\int_\Omega |\nabla\sigma_K|^2 + h(\varphi_K)|\sigma_K|^2\;\mathrm dx + K\int_{\partial\Omega}|\sigma_K-\sigma_{\infty}|^2{\mathrm{d}}\mathcal{H}^{d-1} = \int_\Omega \nabla\sigma_K\cdot\nabla\sigma_{\infty}+h(\varphi_K)\sigma_K\sigma_{\infty} \;\mathrm dx . \end{equation} For the first term on the r.h.s.
of this equation, we use Hölder's and Young's inequalities and the boundedness of the extension operator to obtain \begin{equation} \label{limit_K_eq_2}\left|\int_\Omega \nabla\sigma_K\cdot\nabla\sigma_{\infty}\;\mathrm dx \right| \leq \frac{1}{4}\normL{2}{\nabla\sigma_K}^2 + \normL{2}{\nabla\sigma_{\infty}}^2\leq \frac{1}{4}\normL{2}{\nabla\sigma_K}^2 + C\norm{\sigma_{\infty}}_{H^{\frac{1}{2}}(\partial\Omega)}^2. \end{equation} With the same arguments and using the boundedness of $h(\cdot)$, we can estimate the second term on the r.h.s. of \eqref{limit_K_eq_1} by \begin{equation} \label{limit_K_eq_3}\left|\int_\Omega h(\varphi_K)\sigma_K\sigma_{\infty}\;\mathrm dx \right|\leq \delta\norml{2}{\sigma_K}^2 + C_{\delta}\norm{\sigma_{\infty}}_{H^{\frac{1}{2}}(\partial\Omega)}^2. \end{equation} From Poincaré's inequality and the boundedness of the extension operator, we know that \begin{equation} \label{limit_K_eq_4}\norml{2}{\sigma_K}^2\leq \tilde{C}\left(\normL{2}{\nabla\sigma_K}^2 + \norm{\sigma_K-\sigma_{\infty}}_{L^2(\partial\Omega)}^2 + \norm{\sigma_{\infty}}_{H^{\frac{1}{2}}(\partial\Omega)}^2\right), \end{equation} for a positive constant $\tilde{C}$ independent of $K$. Choosing $\delta$ small enough and using this inequality in \eqref{limit_K_eq_3}, we obtain \begin{equation} \label{limit_K_eq_5}\left|\int_\Omega h(\varphi_K)\sigma_K\sigma_{\infty}\;\mathrm dx \right|\leq \frac{1}{4}\left(\normL{2}{\nabla\sigma_K}^2 + \norm{\sigma_K-\sigma_{\infty}}_{L^2(\partial\Omega)}^2\right) + C\norm{\sigma_{\infty}}_{H^{\frac{1}{2}}(\partial\Omega)}^2. \end{equation} Plugging \eqref{limit_K_eq_2}, \eqref{limit_K_eq_5} into \eqref{limit_K_eq_1} and neglecting the non-negative term $\int_\Omega h(\varphi_K)|\sigma_K|^2\;\mathrm dx $ on the l.h.s.
of \eqref{limit_K_eq_1}, we arrive at \begin{equation} \label{limit_K_eq_6}\int_\Omega |\nabla\sigma_K|^2 \;\mathrm dx + K\int_{\partial\Omega}|\sigma_K-\sigma_{\infty}|^2{\mathrm{d}}\mathcal{H}^{d-1}\leq C\norm{\sigma_{\infty}}_{H^{\frac{1}{2}}(\partial\Omega)}^2. \end{equation} Multiplying \eqref{limit_K_eq_4} by $\frac{1}{2\tilde{C}}$ and adding the resulting equation to \eqref{limit_K_eq_6} yields \begin{equation*} \normh{1}{\sigma_K}^2 + K\int_{\partial\Omega}|\sigma_K-\sigma_{\infty}|^2{\mathrm{d}}\mathcal{H}^{d-1}\leq C\norm{\sigma_{\infty}}_{H^{\frac{1}{2}}(\partial\Omega)}^2. \end{equation*} Integrating this inequality in time from $0$ to $T$ and using $\sigma_{\infty}\in L^2(H^{\frac{1}{2}}(\partial\Omega))$, we conclude that \begin{equation*} \norm{\sigma_K}_{L^2(H^1)} + \sqrt{K}\norm{\sigma_K-\sigma_{\infty}}_{L^2(L^2(\partial\Omega))}\leq C, \end{equation*} where $C$ is independent of $K$. Then, with exactly the same arguments as above, it follows that \begin{align} \nonumber&\norm{\varphi_K}_{H^1((H^1)^*)\cap L^{\infty}(H^1)\cap L^4(H^2)\cap L^2(H^3) } + \norm{\sigma_K}_{L^2(H^1)}+ \sqrt{K}\norm{\sigma_K-\sigma_{\infty}}_{L^2(L^2(\partial\Omega))}\\ \label{limit_K_eq_7}&\quad + \norm{\mu_K}_{L^2(H^1)}+\norm{\textnormal{div}(\varphi_K\mathbf{v}_K)}_{L^2(L^{\frac{3}{2}})}+\norm{\mathbf{v}_K}_{L^{2}(\mathbf H^1)}+ \norm{p_K}_{L^2(L^2)}\leq C. \end{align} Using standard compactness arguments (Aubin-Lions theorem (see \cite[Sec. 8, Cor. 4]{Simon}) and reflexive weak compactness), we obtain exactly the convergence results as stated in Theorem \ref{THM_KLIM}. Passing to the limit can be carried out with exactly the same arguments as stated in Section 3.2. We will only present the arguments needed for \eqref{WFORM_1d}. In the following, let $\xi\in H_0^1$ be arbitrary. 
Multiplying \eqref{WFORM_1d} with $\delta\in C_0^{\infty}(0,T)$, integrating in time from $0$ to $T$ and noting that $H_0^1\subset H^1$, we observe that \begin{equation} \label{limit_K_eq_8}0 = \int_{0}^{T}\int_\Omega \delta (\nabla\sigma_K\cdot\nabla\xi + h(\varphi_K)\sigma_K\xi)\;\mathrm dx \;\mathrm dt \quad\forall \xi\in H_0^1. \end{equation} Since $h(\cdot)$ is a bounded, continuous function, $\delta\xi\in C^{\infty}(H_0^1)$ and $\varphi_K\to\varphi$ a.\,e. in $Q$, the Lebesgue theorem gives that \begin{equation*} \norm{h(\varphi_K)\delta\xi-h(\varphi)\delta\xi}_{L^2(Q)}\to 0\quad\text{as }K\to\infty. \end{equation*} Since $\sigma_K\to\sigma$ weakly in $L^2(Q)$ as $K\to\infty$, by the product of weak-strong convergence we obtain \begin{equation} \label{limit_K_eq_9}\int_{0}^{T}\int_\Omega h(\varphi_K)\sigma_K\xi\;\mathrm dx \;\mathrm dt \to \int_{0}^{T}\int_\Omega h(\varphi)\sigma\xi\;\mathrm dx \;\mathrm dt \quad\text{as }K\to\infty. \end{equation} Furthermore, since $\sigma_K\to\sigma$ weakly in $L^2(H^1)$ and as $\delta\xi\in L^2(H^1)$, it follows that \begin{equation} \label{limit_K_eq_10}\int_{0}^{T}\int_\Omega \delta\nabla\sigma_K\cdot\nabla\xi\;\mathrm dx \;\mathrm dt \to \int_{0}^{T}\int_\Omega \delta\nabla\sigma\cdot\nabla\xi\;\mathrm dx \;\mathrm dt \quad\text{as }K\to\infty. \end{equation} Due to \eqref{limit_K_eq_9}-\eqref{limit_K_eq_10}, we can pass to the limit in \eqref{limit_K_eq_8} to deduce that \begin{equation*} 0 = \int_{0}^{T}\int_\Omega \delta (\nabla\sigma\cdot\nabla\xi + h(\varphi)\sigma\xi)\;\mathrm dx \;\mathrm dt \quad\forall \xi\in H_0^1. \end{equation*} Since this holds for all $\delta\in C_0^{\infty}(0,T)$, we can recover \eqref{WF_KLIM_4}. Finally, from \eqref{limit_K_eq_7}, we know that \begin{equation*} \norm{\sigma_K-\sigma_{\infty}}_{L^2(L^2(\partial\Omega))}\leq \frac{C}{\sqrt{K}}, \end{equation*} where $C$ is independent of $K$. 
Letting $K\to\infty$ and recalling that $\sigma_K\to\sigma$ weakly in $L^2(L^2(\partial\Omega))$ as $K\to\infty$, it follows that \begin{equation*} \sigma = \sigma_{\infty}\quad\text{a.\,e. on }\Sigma, \end{equation*} which completes the proof. \section{The singular limit of vanishing viscosities} Let $\{\eta_n,\lambda_n\}_{n\in\mathbb N}$ be a sequence of function pairs fulfilling \textnormal{(A3)} such that \begin{equation} \label{VISC_LIM_1}\norm{\eta_n(\cdot)}_{C^0(\mathbb R)}\to 0,\quad \norm{\lambda_n(\cdot)}_{C^0(\mathbb R)}\to 0\quad\text{as }n\to\infty. \end{equation} Without loss of generality, we can assume that \begin{equation} \label{VISC_LIM_2}\norm{\eta_{n}(\cdot)}_{L^{\infty}(\mathbb R)}\leq 1,\quad \norm{\lambda_{n}(\cdot)}_{L^{\infty}(\mathbb R)}\leq 1. \end{equation} Then, by Theorem \ref{THM_WSOL_1}, for every $n\in\mathbb N$ there exists a solution quintuple $(\varphi_{n},\mu_{n},\sigma_{n},\mathbf{v}_{n},p_{n})$ of \eqref{MEQ}, \eqref{BIC} in the sense of Definition \ref{DEF_WSOL_1} fulfilling \begin{align*} &\varphi_{n} \in H^1(0,T;(H^1)^*)\cap L^{\infty}(0,T;H^1)\cap L^4(0,T;H^2)\cap L^2(0,T;H^3),\quad \mu_{n}\in L^2(0,T;H^1),\\ &\sigma_{n}\in L^2(0,T;H^1),\quad \mathbf{v}_{n}\in L^2(0,T;\mathbf H^1),\quad p_{n}\in L^{2}(0,T;L^2), \end{align*} such that \begin{equation} \label{VISC_LIM_3}\textnormal{div}(\mathbf{v}_{n}) = \Gamma_{\mathbf{v}}(\varphi_n,\sigma_n)\text{ a.\,e. in }Q,\quad \partial_\n\varphi_{n} = 0\text{ a.\,e. in }\Sigma,\quad \varphi_{n}(0)=\varphi_0\text{ a.\,e. 
in }\Omega, \end{equation} and \begin{subequations} \label{VISC_LIM_4} \begin{align} \label{VISC_LIM_4a}0&= \int_\Omega \mathbf{T}_n(\mathbf{v}_{n},p_{n})\colon \nabla\boldsymbol{\Phi}+\nu\mathbf{v}_{n}\cdot\boldsymbol{\Phi} - (\mu_{n}+\chi\sigma_{n})\nabla\varphi_{n}\cdot\boldsymbol{\Phi}{\mathrm{d}} x,\\ \label{VISC_LIM_4b} 0&= \langle\partial_{t}\varphi_{n}{,}\Phi\rangle_{H^1,(H^1)^*} + \int_\Omega m(\varphi_{n})\nabla\mu_{n}\cdot\nabla\Phi + (\nabla\varphi_{n}\cdot\mathbf{v}_n +\varphi_n\Gamma_{\mathbf{v}}(\varphi_n,\sigma_n)-\Gamma_{\varphi}(\varphi_n,\sigma_n))\Phi{\mathrm{d}} x,\\ \label{VISC_LIM_4c} 0 &= \int_\Omega \nabla\sigma_{n}\cdot\nabla\Phi + h(\varphi_{n})\sigma_{n}\Phi{\mathrm{d}} x + \int_{\partial\Omega}K(\sigma_{n}-\sigma_{\infty})\Phi{\mathrm{d}}\mathcal{H}^{d-1}, \end{align} for a.\,e. $t\in(0,T)$ and for all $\boldsymbol{\Phi}\in \mathbf H^1,\,\Phi\in H^1$, where $\mu_n$ is given by \begin{equation} \label{VISC_LIM_4d}\mu_{n} = \epsilon^{-1}\Psi'(\varphi_{n})- \epsilon\Delta\varphi_{n} - \chi\sigma_{n}\quad \text{a.\,e. in }Q \end{equation} \end{subequations} and \begin{equation*} \mathbf{T}_n(\mathbf{v}_{n},p_{n})\coloneqq 2\eta_{n}(\varphi_{n}) {\mathbf{D}}\mathbf{v}_{n} + \lambda_{n}(\varphi_{n})\textnormal{div}(\mathbf{v}_{n})\mathbf{I} - p_{n}\mathbf{I}. \end{equation*} We will denote $\Gamma_{\mathbf{v},n} =\Gamma_{\mathbf{v}}(\varphi_n,\sigma_n),\,\Gamma_{\varphi,n} =\Gamma_{\varphi}(\varphi_n,\sigma_n)$. \subsection{A priori estimates} In the following, we will derive estimates which are independent of $n\in\mathbb N$. By $C$, we will denote a generic constant depending on the system parameters and on $\Omega$, $T$, but not on $n\in\mathbb N$. Furthermore, we will frequently use Hölder's and Young's inequalities.\newline First, we recall that \textnormal{(A1)}, \textnormal{(A5)} and the continuous embedding $H^1\hookrightarrow L^6$ imply that $\psi(\varphi_0)\in L^1,\, \nabla\varphi_0\in \mathbf L^2$.
Then, using \textnormal{(A1)}, \eqref{APRI_EQ_25a}-\eqref{APRI_EQ_26} and \eqref{VISC_LIM_2}, taking the supremum over all $s\in (0,T]$ in \eqref{APRI_EQ_26} yields \begin{align} \nonumber & \esssup_{s\in (0,T]}\left(\norml{1}{\psi(\varphi_n(s))} + \normL{2}{\nabla\varphi_n(s)}^2\right) \\ \label{VISC_LIM_5}&\quad + \int_{0}^{T}\frac{m_0}{4}\normL{2}{\nabla\mu_n}^2+ \frac{\nu}{2}\normL{2}{\mathbf{v}_n}^2 + \normL{2}{\sqrt{\eta_n(\varphi_n)}{\mathbf{D}}\mathbf{v}_n}^2 + \normh{1}{\sigma_n}^2{\mathrm{d}} t\leq C. \end{align} Recalling \eqref{ASS_PSI_4}, using Poincaré's inequality, \eqref{APRI_EQ_12}, \eqref{VISC_LIM_5} and the fact that $\rho\geq 2$, this in particular gives \begin{equation} \label{VISC_LIM_6}\esssup_{s\in (0,T]}\normh{1}{\varphi_n(s)}^2 + \int_{0}^{T} \normh{1}{\mu_n}^2\;\mathrm dt\leq C. \end{equation} Now, using exactly the same arguments as in Subsection 3.1.6, we obtain \begin{equation} \label{VISC_LIM_7}\norm{\varphi_n}_{L^4(H^2)}\leq C. \end{equation} Then, using \textnormal{(A5)}, it is straightforward to check that $\psi'(\varphi_n)\in L^2(H^1)$ with bounded norm. Together with \eqref{VISC_LIM_5}-\eqref{VISC_LIM_7}, the identity $\epsilon\Delta\varphi_n = \epsilon^{-1}\Psi'(\varphi_n)-\chi\sigma_n-\mu_n$ resulting from \eqref{VISC_LIM_4d}, and elliptic regularity theory for the homogeneous Neumann problem, this implies \begin{equation} \label{VISC_LIM_7a}\norm{\varphi_n}_{L^4(H^2)\cap L^2(H^3)}\leq C. \end{equation} By the specific form of $\Gamma_{\mathbf{v}}(\cdot{,}\cdot),\,\Gamma_{\varphi}(\cdot{,}\cdot)$ and the continuous embedding $H^1\hookrightarrow L^6$, an application of \eqref{VISC_LIM_3} and \eqref{VISC_LIM_5} yields \begin{equation} \label{VISC_LIM_8}\norm{\Gamma_{\mathbf{v},n}}_{L^2(L^6)} + \norm{\Gamma_{\varphi,n}}_{L^2(L^6)}\leq C. \end{equation} Using \eqref{APRI_EQ_32} and \eqref{VISC_LIM_2}, for every $n\in\mathbb N$ we obtain \begin{equation*} \norml{2}{p_n}^2 \leq C\left(\normL{2}{\sqrt{\eta_n(\varphi_n)}{\mathbf{D}}\mathbf{v}_n}^2 + \norml{2}{\Gamma_{\mathbf{v},n}}^2 + \normL{2}{\mathbf{v}_n}^2 + \norml{3}{\mu_n + \chi\sigma_n}^2\normL{2}{\nabla\varphi_n}^2\right).
\end{equation*} Integrating this inequality in time from $0$ to $T$ and using \eqref{VISC_LIM_5}-\eqref{VISC_LIM_6}, \eqref{VISC_LIM_8}, we have \begin{equation} \label{VISC_LIM_9}\norm{p_n}_{L^2(L^2)}\leq C. \end{equation} Using \eqref{VISC_LIM_6}-\eqref{VISC_LIM_8} and the continuous embedding $H^1\hookrightarrow L^3$, we calculate \begin{equation} \label{VISC_LIM_10}\norm{\varphi_n\Gamma_{\mathbf{v},n}}_{L^2(L^2)}\leq \norm{\varphi_n}_{L^{\infty}(L^3)}\norm{\Gamma_{\mathbf{v},n}}_{L^2(L^6)}\leq C\norm{\varphi_n}_{L^{\infty}(H^1)}\norm{\Gamma_{\mathbf{v},n}}_{L^2(L^6)}\leq C. \end{equation} Now, let $\rho\in L^{\frac{8}{3}}(H^1)$. Then, using \eqref{LEM_GAGNIR_EST} and \eqref{VISC_LIM_5}-\eqref{VISC_LIM_7a}, we obtain \begin{align*} \left|\int_{0}^{T}\int_\Omega \nabla\varphi_n\cdot\mathbf{v}_n\rho\;\mathrm dx \;\mathrm dt\right| &\leq \int_{0}^{T} \normL{3}{\nabla\varphi_n}\normL{2}{\mathbf{v}_n}\normh{1}{\rho}\;\mathrm dt \\ &\leq \int_{0}^{T} \normh{1}{\varphi_n}^{\frac{3}{4}}\normh{3}{\varphi_n}^{\frac{1}{4}}\normL{2}{\mathbf{v}_n}\normh{1}{\rho}\;\mathrm dt \\ &\leq \norm{\varphi_n}_{L^{\infty}(H^1)}^{\frac{3}{4}}\norm{\varphi_n}_{L^2(H^3)}^{\frac{1}{4}}\norm{\mathbf{v}_n}_{L^2(\mathbf L^2)}\norm{\rho}_{L^{\frac{8}{3}}(H^1)}\\ &\leq C\norm{\rho}_{L^{\frac{8}{3}}(H^1)}, \end{align*} which implies \begin{equation*} \norm{\nabla\varphi_n\cdot\mathbf{v}_n}_{L^{\frac{8}{5}}((H^1)^*)}\leq C. \end{equation*} Using the continuous embedding $L^2\hookrightarrow (H^1)^*$ and \eqref{VISC_LIM_3}, \eqref{VISC_LIM_10}, we deduce that \begin{equation} \label{VISC_LIM_11}\norm{\textnormal{div}(\varphi_n\mathbf{v}_n)}_{L^{\frac{8}{5}}((H^1)^*)}\leq C. \end{equation} Finally, a comparison argument in \eqref{VISC_LIM_4b} yields \begin{equation} \label{VISC_LIM_12}\norm{\partial_{t}\varphi_n}_{L^{\frac{8}{5}}((H^1)^*)}\leq C, \end{equation} where we used \eqref{VISC_LIM_5}-\eqref{VISC_LIM_8} and \eqref{VISC_LIM_11}. 
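For clarity, we record how the time exponents in the estimate of $\nabla\varphi_n\cdot\mathbf{v}_n$ above arise: after pulling the factor $\normh{1}{\varphi_n}^{\frac{3}{4}}\leq \norm{\varphi_n}_{L^{\infty}(H^1)}^{\frac{3}{4}}$ out of the time integral, Hölder's inequality in time with the exponents $8$, $2$ and $\frac{8}{3}$ gives \begin{equation*} \int_{0}^{T} \normh{3}{\varphi_n}^{\frac{1}{4}}\normL{2}{\mathbf{v}_n}\normh{1}{\rho}\;\mathrm dt \leq \left(\int_{0}^{T}\normh{3}{\varphi_n}^{2}\;\mathrm dt\right)^{\frac{1}{8}}\left(\int_{0}^{T}\normL{2}{\mathbf{v}_n}^{2}\;\mathrm dt\right)^{\frac{1}{2}}\left(\int_{0}^{T}\normh{1}{\rho}^{\frac{8}{3}}\;\mathrm dt\right)^{\frac{3}{8}}, \end{equation*} since $\frac{1}{8}+\frac{1}{2}+\frac{3}{8}=1$. As $\frac{8}{5}$ is the conjugate exponent of $\frac{8}{3}$, the duality argument then yields the bound in $L^{\frac{8}{5}}((H^1)^*)$.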
Summarising \eqref{VISC_LIM_5}-\eqref{VISC_LIM_12}, we end up with \begin{align} \nonumber &\norm{\varphi_n}_{W^{1,\frac{8}{5}}((H^1)^*)\cap L^{\infty}(H^1)\cap L^4(H^2)\cap L^2(H^3)} + \norm{\mu_n}_{L^2(H^1)} + \norm{\sigma_n}_{L^2(H^1)} + \norm{\textnormal{div}(\varphi_n\mathbf{v}_n)}_{L^{\frac{8}{5}}((H^1)^*)}\\ \label{VISC_LIM_13}&\quad + \norm{\textnormal{div}(\mathbf{v}_n)}_{L^2(L^6)}+ \norm{\sqrt{\eta_n(\varphi_n)}{\mathbf{D}}\mathbf{v}_n}_{L^2(\mathbf L^2)} + \norm{\mathbf{v}_n}_{L^2(\mathbf L_{\text{\textnormal{div}}}^2(\Omega))} + \norm{p_n}_{L^2(L^2)}\leq C. \end{align} \subsection{Passing to the limit} Recalling \eqref{TRACE_ESTIM} and \eqref{VISC_LIM_13}, using standard compactness arguments (Aubin-Lions theorem (see \cite[Sec. 8, Cor. 4]{Simon}) and reflexive weak compactness), the compact embeddings \begin{equation*} H^{j+1}(\Omega) = W^{j+1,2}(\Omega)\hookrightarrow\hookrightarrow W^{j,r}\quad \forall j\in \mathbb{Z},\,j\geq 0,\,1\leq r<6, \end{equation*} and $L^2\hookrightarrow\hookrightarrow (H^1)^*$, we obtain, at least for a subsequence which will again be labelled by $n$, the following convergence results: \begin{alignat*}{3} \varphi_{n}&\to\varphi&&\quad\text{weakly-}*&&\quad\text{ in } W^{1,\frac{8}{5}}((H^1)^*)\cap L^{\infty}(H^1)\cap L^4(H^2)\cap L^2(H^3),\\ \sigma_{n}&\to \sigma &&\quad\text{weakly}&&\quad\text{ in }L^2(H^1),\\ \mu_{n}&\to \mu&&\quad\text{weakly}&&\quad\text{ in }L^2(H^1),\\ p_{n}&\to p&&\quad\text{weakly}&&\quad\text{ in }L^{2}(L^2),\\ \mathbf{v}_{n}&\to\mathbf{v}&&\quad\text{weakly}&&\quad\text{ in }L^2(\mathbf L^2)\cap L^2(L_{\textnormal{div}}^2(\Omega)),\\ \textnormal{div}(\mathbf{v}_{n})&\to\xi &&\quad\text{weakly}&&\quad\text{ in }L^2(L^2),\\ \mathbf{v}_{n}\cdot\mathbf n&\to\mathbf{v}\cdot\mathbf n&&\quad\text{weakly}&&\quad\text{ in }L^2((H^{\frac{1}{2}}(\partial\Omega))^*),\\ \textnormal{div}(\varphi_{n}\mathbf{v}_{n})&\to \tau&&\quad\text{weakly}&&\quad\text{ in }L^{\frac{8}{5}}((H^1)^*), \end{alignat*} for some 
limit functions $\xi\in L^2(L^2),\,\tau\in L^{\frac{8}{5}}((H^1)^*)$. Furthermore, by (\ref{VISC_LIM_1}), we have the strong convergences \begin{alignat*}{3} &\varphi_{n}\to\varphi\quad\text{strongly}&&\quad\text{ in }C^0(L^r)\cap L^2(W^{2,r})\text{ and a.\,e. in }Q,\\ &\eta_n(\varphi_n)\to 0&&\quad\text{ uniformly in }Q,\\ &\lambda_n(\varphi_n)\to 0&&\quad\text{ uniformly in }Q, \end{alignat*} for $r\in [1,6)$. In the following, we fix $\delta\in C_0^{\infty}(0,T),\,\Phi\in H^1,\,\boldsymbol{\Phi}\in \mathbf H^1$ and we note that $\delta\Phi\in C^{\infty}(H^1),\,\delta\boldsymbol{\Phi}\in C^{\infty}(\mathbf H^1)$. Multiplying \eqref{VISC_LIM_4a}-\eqref{VISC_LIM_4d} with $\delta$ and integrating in time from $0$ to $T$, we obtain \begin{subequations} \label{VISC_LIM_14} \begin{align} \nonumber 0 &= \int_{0}^{T}\int_\Omega \delta(t)((2\eta_n(\varphi_n) {\mathbf{D}}\mathbf{v}_{n} + \lambda_n(\varphi_n)\textnormal{div}(\mathbf{v}_n)\mathbf{I} - p_{n}\mathbf{I})\colon \nabla\boldsymbol{\Phi}+\nu\mathbf{v}_{n}\cdot\boldsymbol{\Phi}){\mathrm{d}} x{\mathrm{d}} t \\ \label{VISC_LIM_14a}&- \int_{0}^{T}\int_\Omega \delta(t)(\mu_{n}+\chi\sigma_{n})\nabla\varphi_{n}\cdot\boldsymbol{\Phi}{\mathrm{d}} x{\mathrm{d}} t,\\ \nonumber 0& =\int_{0}^{T} \delta(t)\langle\partial_{t}\varphi_{n}{,}\Phi\rangle_{H^1,(H^1)^*}{\mathrm{d}} t + \int_{0}^{T} \int_\Omega \delta(t)(m(\varphi_{n})\nabla\mu_{n}\cdot\nabla\Phi - \Gamma_{\varphi,n}\Phi){\mathrm{d}} x{\mathrm{d}} t\\ \label{VISC_LIM_14b} &\quad +\int_{0}^{T} \int_\Omega \delta(t)(\nabla\varphi_n\cdot\mathbf{v}_n +\varphi_n\Gamma_{\mathbf{v},n})\Phi{\mathrm{d}} x{\mathrm{d}} t,\\ \label{VISC_LIM_14c}0&=\int_{0}^{T}\int_\Omega \delta(t)\mu_{n}\Phi{\mathrm{d}} x{\mathrm{d}} t - \int_{0}^{T}\int_\Omega \delta(t)(\epsilon^{-1}\Psi'(\varphi_{n})- \epsilon\Delta\varphi_{n} - \chi\sigma_{n})\Phi{\mathrm{d}} x{\mathrm{d}} t,\\ \label{VISC_LIM_14d} 0& = \int_{0}^{T}\int_\Omega
\delta(t)(\nabla\sigma_{n}\cdot\nabla\Phi + h(\varphi_{n})\sigma_{n}\Phi){\mathrm{d}} x{\mathrm{d}} t + \int_{0}^{T} \int_{\partial\Omega}\delta(t)(K(\sigma_{n}-\sigma_{\infty})\Phi){\mathrm{d}}\mathcal{H}^{d-1}{\mathrm{d}} t. \end{align} Furthermore, we multiply $\eqref{VISC_LIM_3}_1$ with $\delta\Phi$ and integrate over $Q$ to obtain \begin{equation} \label{VISC_LIM_14e}\int_{0}^{T}\int_\Omega \delta \textnormal{div}(\mathbf{v}_{n})\Phi{\mathrm{d}} x{\mathrm{d}} t = \int_{0}^{T}\int_\Omega \delta(t)\Gamma_{\mathbf{v},n}\Phi{\mathrm{d}} x{\mathrm{d}} t. \end{equation} \end{subequations} We now want to analyse each term individually. For \eqref{VISC_LIM_14c}-\eqref{VISC_LIM_14d}, we omit the details and refer to the arguments used in \cite[Sec. 5]{EbenbeckGarcke}, \cite[Sec. 5]{GarckeLam1}. \paragraph{Step 1 (\eqref{VISC_LIM_14e}):} Since $\varphi_{n}\to \varphi$ a.\,e. in $Q$ and due to the boundedness of $b_{\mathbf{v}}(\cdot),\, f_{\mathbf{v}}(\cdot)$, Lebesgue dominated convergence theorem implies \begin{equation*} \norm{\delta\Phi(b_{\mathbf{v}}(\varphi_{n})-b_{\mathbf{v}}(\varphi))}_{L^2(Q)}\to 0,\quad \norm{\delta\Phi(f_{\mathbf{v}}(\varphi_{n})-f_{\mathbf{v}}(\varphi))}_{L^2(Q)}\to 0 \end{equation*} as $n\to\infty$. Together with the weak convergence $\sigma_{n}\to \sigma$ in $L^2(Q)$ as $n\to\infty$, by the product of weak-strong convergence we obtain \begin{equation} \label{VISC_LIM_15} \int_{0}^{T}\int_\Omega \delta(t)\Gamma_{\mathbf{v},n}\Phi{\mathrm{d}} x{\mathrm{d}} t\to \int_{0}^{T}\int_\Omega \delta(t)\Gamma_{\mathbf{v}}(\varphi,\sigma)\Phi{\mathrm{d}} x{\mathrm{d}} t\quad\text{as }n\to\infty. 
\end{equation} Using \eqref{TRACE_GENERAL}, we see that \begin{equation*} \int_{0}^{T}\int_\Omega \delta\textnormal{div}(\mathbf{v}_{n})\Phi{\mathrm{d}} x{\mathrm{d}} t = \int_{0}^{T}\delta(t) \langle \mathbf{v}_{n}\cdot\mathbf n{,}\Phi\rangle_{H^{\frac{1}{2}}(\partial\Omega)}{\mathrm{d}} t - \int_{0}^{T}\int_\Omega \delta(t)\mathbf{v}_{n}\cdot\nabla\Phi{\mathrm{d}} x{\mathrm{d}} t. \end{equation*} Since $\mathbf{v}_{n}\cdot\mathbf n\to\mathbf{v}\cdot\mathbf n$ weakly in $L^2((H^{\frac{1}{2}}(\partial\Omega))^*)$, $\mathbf{v}_{n}\to\mathbf{v}$ weakly in $L^2(\mathbf L^2)$ and $\textnormal{div}(\mathbf{v}_{n})\to\xi$ weakly in $L^2(L^2)$ as $n\to\infty$, we can pass to the limit on both sides of this equation to obtain \begin{equation*} \int_{0}^{T}\int_\Omega \delta\xi \Phi{\mathrm{d}} x{\mathrm{d}} t = \int_{0}^{T}\delta(t) \langle \mathbf{v}\cdot\mathbf n{,}\Phi\rangle_{H^{\frac{1}{2}}(\partial\Omega)}{\mathrm{d}} t - \int_{0}^{T}\int_\Omega \delta(t)\mathbf{v}\cdot\nabla\Phi{\mathrm{d}} x{\mathrm{d}} t. \end{equation*} Since $\mathbf{v}\in L_{\textnormal{div}}^2(\Omega)$, we can again use \eqref{TRACE_GENERAL} to obtain \begin{equation*} \int_{0}^{T}\int_\Omega \delta\xi \Phi{\mathrm{d}} x{\mathrm{d}} t = \int_{0}^{T} \int_\Omega \delta\textnormal{div}(\mathbf{v})\Phi{\mathrm{d}} x{\mathrm{d}} t. \end{equation*} This in particular gives \begin{equation} \label{VISC_LIM_16}\textnormal{div}(\mathbf{v}) = \xi \quad \text{a.\,e. in }Q. \end{equation} From these considerations and recalling \eqref{VISC_LIM_15}, we can pass to the limit $n\to\infty$ in \eqref{VISC_LIM_14e} to obtain \begin{equation*} \int_{0}^{T}\int_\Omega \delta\textnormal{div}(\mathbf{v})\Phi{\mathrm{d}} x{\mathrm{d}} t = \int_{0}^{T}\int_\Omega \delta \Gamma_{\mathbf{v}}(\varphi,\sigma)\Phi{\mathrm{d}} x{\mathrm{d}} t, \end{equation*} which in particular implies \begin{equation} \label{VISC_LIM_17}\textnormal{div}(\mathbf{v}) = \Gamma_{\mathbf{v}}(\varphi,\sigma)\quad\text{a.\,e. in }Q.
\end{equation} \paragraph{Step 2 (\eqref{VISC_LIM_14b}):} Since $\delta\Phi\in C^{\infty}(H^1)$ and $\textnormal{div}(\varphi_{n}\mathbf{v}_{n})\to \tau$ weakly in $L^{\frac{8}{5}}((H^1)^*)$, we have \begin{equation} \label{VISC_LIM_18}\int_{0}^{T} \int_\Omega\delta\textnormal{div}(\varphi_{n}\mathbf{v}_{n})\Phi{\mathrm{d}} x{\mathrm{d}} t \to \int_{0}^{T} \delta(t)\langle\tau{,}\Phi\rangle_{H^1}{\mathrm{d}} t\quad \text{as }n\to\infty. \end{equation} Moreover, since $\nabla\varphi_{n}\to\nabla\varphi$ strongly in $L^2(\mathbf L^3)$ and due to the continuous embedding $H^1\hookrightarrow L^6$, we have \begin{align*} \int_{0}^{T}\int_\Omega |\delta|^2|\Phi|^2|\nabla\varphi_{n}-\nabla\varphi|^2{\mathrm{d}} x{\mathrm{d}} t&\leq \int_{0}^{T} |\delta|^2 \norml{6}{\Phi}^2\normL{3}{\nabla\varphi_{n}-\nabla\varphi}^2{\mathrm{d}} t\\ &\leq C\norm{\delta}_{L^{\infty}(0,T)}^2\normh{1}{\Phi}^2\norm{\nabla\varphi_{n}-\nabla\varphi}_{L^2(\mathbf L^3)}^2\to 0 \end{align*} as $n\to\infty$. This implies $\delta\Phi\nabla\varphi_{n}\to\delta\Phi\nabla\varphi$ strongly in $L^2(\mathbf L^2)$. Together with the weak convergence $\mathbf{v}_{n}\to\mathbf{v}$ in $L^2(\mathbf L^2)$ as $n\to\infty$, by the product of weak-strong convergence we get \begin{equation} \label{VISC_LIM_19}\int_{0}^{T}\int_\Omega \delta\Phi\nabla\varphi_{n}\cdot\mathbf{v}_{n}{\mathrm{d}} x{\mathrm{d}} t\to \int_{0}^{T}\int_\Omega \delta\Phi\nabla\varphi\cdot\mathbf{v}{\mathrm{d}} x{\mathrm{d}} t\quad\text{as }n\to\infty. \end{equation} Since $\varphi_{n}\to\varphi$ strongly in $L^2(L^3)$ and a.\,e.
in $Q$ as $n\to\infty$, the boundedness of $b_{\mathbf{v}}(\cdot),\,f_{\mathbf{v}}(\cdot)$ and Lebesgue dominated convergence theorem imply \begin{equation*} \norm{(b_{\mathbf{v}}(\varphi_{n})\varphi_{n}-b_{\mathbf{v}}(\varphi)\varphi)\delta\Phi}_{L^2(Q)}\to 0,\qquad \norm{(f_{\mathbf{v}}(\varphi_{n})\varphi_{n}-f_{\mathbf{v}}(\varphi)\varphi)\delta\Phi}_{L^2(Q)}\to 0 \end{equation*} as $n\to \infty$, where we used that $\Phi\in H^1\hookrightarrow L^6$. Together with the weak convergence $\sigma_{n}\to\sigma$ in $L^2(Q)$ as $n\to\infty$, this implies \begin{equation} \label{VISC_LIM_20}\int_{0}^{T}\int_\Omega \delta\Phi \Gamma_{\mathbf{v},n}\varphi_{n}{\mathrm{d}} x{\mathrm{d}} t\to \int_{0}^{T}\int_\Omega \delta\Phi \Gamma_{\mathbf{v}}(\varphi,\sigma)\varphi{\mathrm{d}} x{\mathrm{d}} t\quad \text{as }n\to\infty. \end{equation} Using $\eqref{VISC_LIM_3}_1$, we see that \begin{equation*} \int_{0}^{T} \int_\Omega \delta\textnormal{div}(\varphi_{n}\mathbf{v}_{n})\Phi{\mathrm{d}} x{\mathrm{d}} t = \int_{0}^{T}\int_\Omega \delta\Phi\nabla\varphi_{n}\cdot\mathbf{v}_{n}{\mathrm{d}} x{\mathrm{d}} t + \int_{0}^{T}\int_\Omega \delta\Phi \Gamma_{\mathbf{v},n}\varphi_{n}{\mathrm{d}} x{\mathrm{d}} t. \end{equation*} Passing to the limit $n\to\infty$ on both sides of this equation and using \eqref{VISC_LIM_19}-\eqref{VISC_LIM_20}, we obtain \begin{equation} \label{VISC_LIM_21}\int_{0}^{T} \delta(t)\langle\tau{,}\Phi\rangle_{H^1}{\mathrm{d}} t = \int_{0}^{T}\int_\Omega \delta\Phi\nabla\varphi\cdot\mathbf{v}{\mathrm{d}} x{\mathrm{d}} t + \int_{0}^{T}\int_\Omega \delta\Phi \Gamma_{\mathbf{v}}(\varphi,\sigma)\varphi{\mathrm{d}} x{\mathrm{d}} t. 
\end{equation} Together with \eqref{VISC_LIM_17}, this gives \begin{equation} \label{VISC_LIM_22}\int_{0}^{T} \delta(t)\langle\tau{,}\Phi\rangle_{H^1,(H^1)^*}{\mathrm{d}} t = \int_{0}^{T}\int_\Omega \delta(t)\textnormal{div}(\varphi\mathbf{v})\Phi{\mathrm{d}} x{\mathrm{d}} t, \end{equation} hence $\textnormal{div}(\varphi\mathbf{v}) = \tau$ in the sense of distributions. For the remaining terms in \eqref{VISC_LIM_14b}, we again refer to \cite[Sec. 5]{EbenbeckGarcke}, \cite[Sec. 5]{GarckeLam1}, where they used similar arguments. \paragraph{Step 3 (\eqref{VISC_LIM_14a}):} With exactly the same arguments as used for \eqref{VISC_LIM_19}, it follows that $\delta\boldsymbol{\Phi}\cdot\nabla\varphi_{n}\to \delta\boldsymbol{\Phi}\cdot\nabla\varphi$ strongly in $L^2(L^2)$ as $n\to\infty$. Then, recalling that $\mu_{n} +\chi\sigma_{n}\to \mu + \chi\sigma$ weakly in $L^2(L^2)$ as $n\to\infty$, by the product of weak-strong convergence we obtain \begin{equation} \label{VISC_LIM_23}\int_{0}^{T}\int_\Omega \delta(t)(\mu_{n}+\chi\sigma_{n})\nabla\varphi_{n}\cdot\boldsymbol{\Phi}{\mathrm{d}} x{\mathrm{d}} t\to \int_{0}^{T}\int_\Omega \delta(t)(\mu+\chi\sigma)\nabla\varphi\cdot\boldsymbol{\Phi}{\mathrm{d}} x{\mathrm{d}} t\quad\text{as }n\to\infty. \end{equation} Now, since $p_{n}\to p,\,\mathbf{v}_{n}\to\mathbf{v}$ weakly in $L^2(L^2)$ and $L^2(\mathbf L^2)$ respectively and observing \begin{equation*} \int_{0}^{T}\int_\Omega \delta p_{n}\mathbf{I}\colon\nabla\boldsymbol{\Phi}{\mathrm{d}} x{\mathrm{d}} t = \int_{0}^{T}\int_\Omega \delta p_{n}\textnormal{div}(\boldsymbol{\Phi}){\mathrm{d}} x{\mathrm{d}} t, \end{equation*} we obtain that \begin{equation} \label{VISC_LIM_24}\int_{0}^{T}\int_\Omega \delta (-p_{n}\textnormal{div}(\boldsymbol{\Phi}) +\nu\mathbf{v}_{n}\cdot\boldsymbol{\Phi}){\mathrm{d}} x{\mathrm{d}} t\to \int_{0}^{T}\int_\Omega \delta (-p\textnormal{div}(\boldsymbol{\Phi}) +\nu\mathbf{v}\cdot\boldsymbol{\Phi}){\mathrm{d}} x{\mathrm{d}} t\quad\text{as }n\to \infty. 
\end{equation} Finally, we recall that $\eta_n(\varphi_n) \to 0$ a.\,e. in $Q$ as $n\to\infty$. Consequently, applying \eqref{VISC_LIM_13} yields \begin{align} \nonumber \left|\int_{0}^{T}\int_\Omega \delta(t)2\eta_n(\varphi_n) {\mathbf{D}}\mathbf{v}_{n}\colon\nabla\boldsymbol{\Phi}{\mathrm{d}} x{\mathrm{d}} t\right| &\leq C\norm{\sqrt{\eta_n(\varphi_n)}{\mathbf{D}}\mathbf{v}_n}_{L^2(\mathbf L^2)}\norm{\sqrt{\eta_n(\varphi_n)}}_{L^{\infty}(Q)}\norm{\delta}_{L^{\infty}(0,T)}\normH{1}{\boldsymbol{\Phi}}\\ \nonumber &\leq C \norm{\sqrt{\eta_n(\varphi_n)}}_{L^{\infty}(Q)}\norm{\delta}_{L^{\infty}(0,T)}\normH{1}{\boldsymbol{\Phi}}\\ \label{VISC_LIM_25} &\to 0\quad\text{as }n \to \infty. \end{align} Using that $\lambda_n(\varphi_n) \to 0$ a.\,e. in $Q$ as $n\to\infty$ and applying \eqref{VISC_LIM_13}, it follows that \begin{align} \nonumber \left|\int_{0}^{T}\int_\Omega \delta(t)\lambda_n(\varphi_n) \textnormal{div}(\mathbf{v}_n)\mathbf{I}\colon\nabla\boldsymbol{\Phi}{\mathrm{d}} x{\mathrm{d}} t\right| &\leq C\norm{\textnormal{div}(\mathbf{v}_n)}_{L^2(L^2)}\norm{\lambda_n(\varphi_n)}_{L^{\infty}(Q)}\norm{\delta}_{L^{\infty}(0,T)}\normH{1}{\boldsymbol{\Phi}}\\ \nonumber &\leq C \norm{\lambda_n(\varphi_n)}_{L^{\infty}(Q)}\norm{\delta}_{L^{\infty}(0,T)}\normH{1}{\boldsymbol{\Phi}}\\ \label{VISC_LIM_26} &\to 0\quad\text{as }n \to \infty. 
\end{align} \paragraph{Step 4:} Due to \eqref{VISC_LIM_15}-\eqref{VISC_LIM_26}, we can pass to the limit $n\to\infty$ in \eqref{VISC_LIM_14} and obtain \begin{subequations}\label{VISC_LIM_27} \begin{align} \label{VISC_LIM_27a}0&=\int_{0}^{T}\delta(t)\left(\int_\Omega -p\textnormal{div}(\boldsymbol{\Phi}) +(\nu\mathbf{v}-(\mu+\chi\sigma)\nabla\varphi)\cdot\boldsymbol{\Phi}{\mathrm{d}} x\right){\mathrm{d}} t,\\ \label{VISC_LIM_27b} 0&=\int_{0}^{T} \delta(t)\left(\langle\partial_{t}\varphi{,}\Phi\rangle_{H^1} + \int_\Omega m(\varphi)\nabla\mu\cdot\nabla\Phi + (\nabla\varphi\cdot\mathbf{v} +\varphi\Gamma_{\mathbf{v}}(\varphi,\sigma)-\Gamma_{\varphi}(\varphi,\sigma))\Phi{\mathrm{d}} x\right){\mathrm{d}} t,\\ \label{VISC_LIM_27c}0&=\int_{0}^{T}\delta(t)\left(\int_\Omega (\mu-\epsilon^{-1}\Psi'(\varphi)+ \epsilon\Delta\varphi + \chi\sigma)\Phi{\mathrm{d}} x\right){\mathrm{d}} t,\\ \label{VISC_LIM_27d} 0& = \int_{0}^{T} \delta(t)\left(\int_\Omega \nabla\sigma\cdot\nabla\Phi + h(\varphi)\sigma\Phi{\mathrm{d}} x + \int_{\partial\Omega}K(\sigma-\sigma_{\infty})\Phi{\mathrm{d}}\mathcal{H}^{d-1}\right){\mathrm{d}} t, \end{align} \end{subequations} for all $\delta\in C_0^{\infty}(0,T)$ and \begin{subequations} \label{VISC_LIM_28} \begin{equation} \label{VISC_LIM_28a}\textnormal{div}(\mathbf{v})=\Gamma_{\mathbf{v}}(\varphi,\sigma)\quad \text{a.\,e. in }Q.
\end{equation} Since (\ref{VISC_LIM_27}) holds for all $\delta\in C_0^{\infty}(0,T)$, we deduce that \begin{align} \label{VISC_LIM_28b}0&=\int_\Omega -p\textnormal{div}(\boldsymbol{\Phi}) +(\nu\mathbf{v}-(\mu+\chi\sigma)\nabla\varphi)\cdot\boldsymbol{\Phi}{\mathrm{d}} x,\\ \label{VISC_LIM_28c} 0&= \langle\partial_{t}\varphi{,}\Phi\rangle_{H^1} +\int_\Omega m(\varphi)\nabla\mu\cdot\nabla\Phi + (\nabla\varphi\cdot\mathbf{v}+\varphi\Gamma_{\mathbf{v}}(\varphi,\sigma) -\Gamma_{\varphi}(\varphi,\sigma))\Phi{\mathrm{d}} x,\\ \label{VISC_LIM_28d}0&=\int_\Omega (\mu-\epsilon^{-1}\Psi'(\varphi)+ \epsilon\Delta\varphi + \chi\sigma)\Phi{\mathrm{d}} x,\\ \label{VISC_LIM_28e}0&= \int_\Omega \nabla\sigma\cdot\nabla\Phi + h(\varphi)\sigma\Phi{\mathrm{d}} x + \int_{\partial\Omega}K(\sigma-\sigma_{\infty})\Phi{\mathrm{d}}\mathcal{H}^{d-1} , \end{align} \end{subequations} holds for a.\,e. $t\in (0,T)$ and all $\Phi\in H^1,\,\boldsymbol{\Phi}\in\mathbf H^1$. The initial condition is satisfied since $\varphi_{n}(0) = \varphi_0$ a.\,e. in $\Omega$ and by the strong convergence $\varphi_{n}\to\varphi$ in $C^0(L^2)$ as $n\to\infty$. By the weak (weak-star) lower semi-continuity of norms and \eqref{VISC_LIM_13}, we obtain that $(\varphi,\mu,\sigma,\mathbf{v},p)$ satisfies \begin{align} \nonumber &\norm{\varphi}_{W^{1,\frac{8}{5}}((H^1)^*)\cap L^{\infty}(H^1)\cap L^4(H^2)\cap L^2(H^3)} + \norm{\mu}_{L^2(H^1)} + \norm{\sigma}_{L^2(H^1)} + \norm{\textnormal{div}(\varphi\mathbf{v})}_{L^{\frac{8}{5}}((H^1)^*)}\\ \label{VISC_LIM_29}&\quad + \norm{\textnormal{div}(\mathbf{v})}_{L^2(L^6)} + \norm{\mathbf{v}}_{L^2(\mathbf L_{\textnormal{div}}^2(\Omega))} + \norm{p}_{L^2(L^2)}\leq C. \end{align} \paragraph{Step 5:} Using \eqref{VISC_LIM_28b} and \eqref{VISC_LIM_29}, we obtain that $p$ has a weak derivative in $L^{\frac{8}{5}}(L^2)$ and it holds \begin{equation} \label{VISC_LIM_30}\nabla p = -\nu\mathbf{v} + (\mu+\chi\sigma)\nabla\varphi\quad\text{a.\,e. in }Q. 
\end{equation} By \eqref{LEM_GAGNIR_EST}, we have \begin{equation*} \normL{3}{\nabla\varphi}\leq C\normL{2}{\nabla\varphi}^{\frac{3}{4}}\normH{2}{\nabla\varphi}^{\frac{1}{4}}\leq C\normh{1}{\varphi}^{\frac{3}{4}}\normh{3}{\varphi}^{\frac{1}{4}}. \end{equation*} Using \eqref{VISC_LIM_29}, this implies \begin{equation*} \norm{\nabla\varphi}_{L^8(\mathbf L^3)}\leq C. \end{equation*} Then, using the continuous embedding $H^1\hookrightarrow L^6$ and \eqref{VISC_LIM_29} again, we obtain \begin{equation*} \norm{(\mu+\chi\sigma)\nabla\varphi}_{L^\frac{8}{5}(\mathbf L^2)}\leq C. \end{equation*} Since $\mathbf{v}\in L^2(\mathbf L^2)$ and $p\in L^2(L^2)$ with bounded norm, from \eqref{VISC_LIM_30} we obtain \begin{equation} \label{VISC_LIM_31}\norm{p}_{L^2(L^2)\cap L^\frac{8}{5}(H^1)}\leq C. \end{equation} Integrating \eqref{VISC_LIM_28b} by parts, we obtain \begin{equation*} -\int_{\partial\Omega}p\boldsymbol{\Phi}\cdot\mathbf n{\mathrm{d}}\mathcal{H}^{d-1} = \int_\Omega (-\nabla p + (\mu+\chi\sigma)\nabla\varphi -\nu\mathbf{v})\cdot\boldsymbol{\Phi}{\mathrm{d}} x \end{equation*} for all $\boldsymbol{\Phi}\in \mathbf H^1$ and a.\,e. $t\in (0,T)$. Because of \eqref{VISC_LIM_30}, this implies \begin{equation*} \int_{\partial\Omega}p\boldsymbol{\Phi}\cdot\mathbf n{\mathrm{d}}\mathcal{H}^{d-1} = 0 \end{equation*} for all $\boldsymbol{\Phi}\in \mathbf H^1$ and a.\,e. $t\in (0,T)$. Therefore, we obtain \begin{equation} \label{VISC_LIM_32}p = 0\quad\text{a.\,e. on }\Sigma. \end{equation} With similar arguments, it is straightforward to show that \begin{equation} \label{VISC_LIM_33}\mu = \epsilon^{-1}\psi'(\varphi)-\epsilon\Delta\varphi -\chi\sigma\quad\text{a.\,e. in }Q,\qquad \partial_\n\varphi = 0\quad\text{a.\,e. on }\Sigma, \end{equation} which completes the proof. \section{Continuous dependence} In the following, since it has no bearing on the further analysis, we set $\epsilon = 1$. 
Let $(\varphi_i,\mu_i,\sigma_i,\mathbf{v}_i,p_i)_{i=1,2}$ be two solutions of \eqref{MEQ}, \eqref{BIC} in the sense of Definition \ref{DEF_WSOL_1}. We denote $\Gamma_{\mathbf{v}}(\varphi_i,\sigma_i)\coloneqq \Gamma_{\mathbf{v},i},\,\Gamma_{\varphi}(\varphi_i,\sigma_i)\coloneqq \Gamma_{\varphi,i},\,i=1,2,$ and $\sigma_{\infty} \coloneqq \sigma_{1,\infty}-\sigma_{2,\infty}$. Then, the difference $f\coloneqq f_1-f_2$, $f_i\in\{\varphi_i,\mu_i,\sigma_i,\mathbf{v}_i,p_i\},\,i=1,2,$ satisfies \begin{equation*} \textnormal{div}(\mathbf{v}) = \Gamma_{\mathbf{v},1}-\Gamma_{\mathbf{v},2}\quad\text{ a.\,e. in }Q,\quad \varphi(0)=\varphi_{1,0}-\varphi_{2,0}\eqqcolon \varphi_0\quad\text{ a.\,e. in }\Omega, \end{equation*} and \begin{subequations} \label{DIFF} \begin{align} \nonumber 0 &=\int_\Omega (2\eta(\varphi_1){\mathbf{D}}\mathbf{v} + \lambda(\varphi_1)\textnormal{div}(\mathbf{v})\mathbf{I} - p\mathbf{I} )\colon \nabla\boldsymbol{\Phi}+\nu\mathbf{v}\cdot\boldsymbol{\Phi}\;\mathrm dx - \int_\Omega (\mu+\chi\sigma)\nabla\varphi_1\cdot\boldsymbol{\Phi} + (\mu_2+\chi\sigma_2)\nabla\varphi\cdot\boldsymbol{\Phi}\;\mathrm dx\\ \label{DIFF_1}&\quad +\int_\Omega (2(\eta(\varphi_1)-\eta(\varphi_2)){\mathbf{D}}\mathbf{v}_2 + (\lambda(\varphi_1)-\lambda(\varphi_2))\textnormal{div}(\mathbf{v}_2)\mathbf{I} )\colon \nabla\boldsymbol{\Phi} \;\mathrm dx, \\ \nonumber 0 &=\langle\partial_{t}\varphi{,}\Phi\rangle_{H^1,(H^1)^*} +\int_\Omega \nabla\mu\cdot\nabla\Phi + (\varphi_2(\Gamma_{\mathbf{v},1}-\Gamma_{\mathbf{v},2})-(\Gamma_{\varphi,1}-\Gamma_{\varphi,2}))\Phi\;\mathrm dx \\ \label{DIFF_2} &\quad +\int_\Omega (\nabla\varphi_1\cdot\mathbf{v} + \nabla\varphi\cdot\mathbf{v}_2) \Phi +\varphi\Gamma_{\mathbf{v},1}\Phi\;\mathrm dx ,\\ \label{DIFF_4} 0 &= \int_\Omega \nabla\sigma\cdot\nabla\Phi\;\mathrm dx + \int_\Omega (h(\varphi_1)\sigma + \sigma_2(h(\varphi_1)-h(\varphi_2)))\Phi\;\mathrm dx + \int_{\partial\Omega}K(\sigma -\sigma_{\infty})\Phi{\mathrm{d}}\mathcal{H}^{d-1}, \end{align}
\end{subequations} for a.\,e. $t\in(0,T)$ and for all $\boldsymbol{\Phi}\in \mathbf H^1,\,\Phi\in H^1$, where $\mu$ is given by \begin{equation} \label{DIFF_3} \mu = \psi'(\varphi_1)-\psi'(\varphi_2)-\Delta\varphi -\chi\sigma. \end{equation} \paragraph{Step 1:} Taking $\Phi = \sigma$ in \eqref{DIFF_4}, we obtain \begin{equation*} \int_\Omega |\nabla\sigma|^2\;\mathrm dx + K\int_{\partial\Omega}|\sigma|^2{\mathrm{d}}\mathcal{H}^{d-1} + \int_\Omega h(\varphi_1)|\sigma|^2\;\mathrm dx = -\int_\Omega \sigma_2(h(\varphi_1)-h(\varphi_2))\sigma\;\mathrm dx + K\int_{\partial\Omega}\sigma\sigma_{\infty}{\mathrm{d}}\mathcal{H}^{d-1}. \end{equation*} Using the non-negativity of $h(\cdot)$, we can drop the third term on the l.h.s. of this identity. Applying Hölder's and Young's inequalities, we therefore obtain \begin{equation} \label{UNIQUE_1}\int_\Omega |\nabla\sigma|^2\;\mathrm dx + K\int_{\partial\Omega}|\sigma|^2{\mathrm{d}}\mathcal{H}^{d-1} \leq \frac{L_h^2}{4\delta}\norml{3}{\sigma_2}^2\norml{2}{\varphi}^2 + \frac{K}{2}\left(\norm{\sigma}_{L^2(\partial\Omega)}^2 + \norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2\right) +\delta\norml{6}{\sigma}^2, \end{equation} with $\delta >0$ to be chosen and where we used (B2). Using Poincaré's inequality and the continuous embedding $H^1\hookrightarrow L^6$, we have \begin{equation} \label{UNIQUE_2}\norml{6}{\sigma}^2\leq 2C_P^2\left(\normL{2}{\nabla\sigma}^2 + \norm{\sigma}_{L^2(\partial\Omega)}^2\right). \end{equation} Choosing \begin{equation*} \delta = \frac{1}{8C_P^2}\min\{1,K\}, \end{equation*} and using \eqref{UNIQUE_2} to absorb the last term on the r.h.s. of \eqref{UNIQUE_1} into the l.h.s., we infer \begin{equation} \label{UNIQUE_3}\normh{1}{\sigma} \leq C(K,L_h,\Omega)\left(\norml{6}{\sigma_2}\norml{2}{\varphi}+\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}\right). 
\end{equation} \paragraph{Step 2:} Arguing as in Section 3, we deduce the existence of a solution $\mathbf{u} \in \mathbf H^1$ of the problem \begin{equation*} \textnormal{div}(\mathbf{u} ) = \Gamma_{\mathbf{v},1}-\Gamma_{\mathbf{v},2} \quad\text{in }\Omega,\quad \mathbf{u} = \frac{1}{|\partial\Omega|}\left(\int_\Omega\Gamma_{\mathbf{v},1}-\Gamma_{\mathbf{v},2}\;\mathrm dx \right)\mathbf n \quad\text{on }\partial\Omega, \end{equation*} satisfying the estimate \begin{equation} \label{UNIQUE_4}\normH{1}{\mathbf{u} }\leq c\norml{2}{\Gamma_{\mathbf{v},1}-\Gamma_{\mathbf{v},2}}, \end{equation} with a constant $c$ depending only on $\Omega$. Choosing $\boldsymbol{\Phi} = \mathbf{v}-\mathbf{u} $ in \eqref{DIFF_1} and $\Phi = \varphi -\Delta\varphi$ in \eqref{DIFF_2}, integrating by parts, using \eqref{DIFF_3} and summing the resulting identities, we obtain \begin{align} \nonumber &\frac{{\mathrm{d}}}{\;\mathrm dt }\frac{1}{2}(\norm{\nabla\varphi}_{\mathbf L^2}^2 + \norml{2}{\varphi}^2) + \int_\Omega 2\eta|{\mathbf{D}}\mathbf{v}|^2 + \nu|\mathbf{v}|^2 + |\Delta\varphi|^2+ |\nabla\Delta\varphi|^2\;\mathrm dx \\ \nonumber&= \int_\Omega \nabla(\psi'(\varphi_1)-\psi'(\varphi_2))\cdot(\nabla\Delta\varphi -\nabla\varphi) - ((\Gamma_{\varphi,1}-\Gamma_{\varphi,2})-\varphi_2(\Gamma_{\mathbf{v},1}-\Gamma_{\mathbf{v},2}))\Delta\varphi\;\mathrm dx \\ \nonumber&\quad +\int_\Omega (\nabla\varphi\cdot\mathbf{v}_2 +\varphi\Gamma_{\mathbf{v},1})\Delta\varphi + (\psi'(\varphi_1)-\psi'(\varphi_2))\nabla\varphi_1\cdot(\mathbf{v}-\mathbf{u} )\;\mathrm dx\\ \nonumber&\quad + \int_\Omega (\mu_2+\chi\sigma_2)\nabla\varphi\cdot(\mathbf{v}-\mathbf{u} ) + \Delta\varphi\nabla\varphi_1\cdot\mathbf{u} \;\mathrm dx +\int_\Omega \chi\nabla\sigma\cdot(\nabla\varphi-\nabla\Delta\varphi){\mathrm{d}} x \\ \nonumber&\quad + \int_\Omega ((\Gamma_{\varphi,1}-\Gamma_{\varphi,2})-\varphi_2(\Gamma_{\mathbf{v},1}-\Gamma_{\mathbf{v},2}))\varphi -(\nabla\varphi_1\cdot\mathbf{v} + 
\nabla\varphi\cdot\mathbf{v}_2) \varphi -\Gamma_{\mathbf{v},1}|\varphi|^2\;\mathrm dx \\ \label{UNIQUE_5}&\quad + \int_\Omega 2\eta(\varphi_1) {\mathbf{D}}\mathbf{v}\colon \nabla\mathbf{u} + \nu\mathbf{v}\cdot\mathbf{u} -2(\eta(\varphi_1)-\eta(\varphi_2)){\mathbf{D}}\mathbf{v}_2\colon \nabla(\mathbf{v}-\mathbf{u} )\;\mathrm dx. \end{align} \paragraph{Step 3:} We now estimate the terms on the r.h.s. of \eqref{UNIQUE_5}. In the following, we will frequently use Young's and Hölder's inequalities and Lemma \ref{LEM_GAGNIR}. First of all, using \textnormal{(A4)} and (B2), we obtain \begin{equation*} |\Gamma_{\mathbf{v},1}-\Gamma_{\mathbf{v},2}|\leq C(|\sigma| + |\sigma_2||\varphi| + |\varphi|),\quad |\Gamma_{\varphi,1}-\Gamma_{\varphi,2}|\leq C(|\sigma| + |\sigma_2||\varphi| + |\varphi|). \end{equation*} Hence, applying \eqref{UNIQUE_3} and the continuous embedding $H^1\hookrightarrow L^6$ yields \begin{equation} \label{UNIQUE_6}\norml{2}{\Gamma_{\mathbf{v},1}-\Gamma_{\mathbf{v},2}}+ \norml{2}{\Gamma_{\varphi,1}-\Gamma_{\varphi,2}}\leq C\left(1+\norml{6}{\sigma_2}\right) \normh{1}{\varphi}+C\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}. \end{equation} Using \eqref{THM_WSOL_1_EST_1}, \eqref{UNIQUE_6} and the continuous embedding $H^1\hookrightarrow L^6$, we obtain \begin{align} \nonumber&\left|\int_\Omega ((\Gamma_{\varphi,1}-\Gamma_{\varphi,2})-\varphi_2(\Gamma_{\mathbf{v},1}-\Gamma_{\mathbf{v},2}))\Delta\varphi\;\mathrm dx \right|\\ \nonumber &\quad\leq C\big(1+\norml{\infty}{\varphi_2}^2\big)\big((1+\norml{6}{\sigma_2}^2)\normh{1}{\varphi}^2 +\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2\big) + \frac{1}{8}\norml{2}{\Delta\varphi}^2\\ \label{UNIQUE_7} &\quad\leq C\left(1+\normh{2}{\varphi_2}+\normh{1}{\sigma_2}^4\right)\normh{1}{\varphi}^2+ C\big(1+\normh{2}{\varphi_2}\big)\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2 + \frac{1}{8}\norml{2}{\Delta\varphi}^2. 
\end{align} By the specific form of $\Gamma_{\mathbf{v}}$ and \textnormal{(A4)}, applying the continuous embedding $H^1\hookrightarrow L^6$ gives \begin{equation} \label{UNIQUE_8}\left|\int_\Omega \varphi\Gamma_{\mathbf{v},1}\Delta\varphi\;\mathrm dx \right|\leq C\left(1+\norml{3}{\sigma_1}^2\right)\normh{1}{\varphi}^2 + \frac{1}{8}\norml{2}{\Delta\varphi}^2. \end{equation} Hence, recalling the continuous embedding $\mathbf H^1\hookrightarrow \mathbf L^6$ and \eqref{ELLIPTIC_EST}, we calculate \begin{align} \nonumber \left|\int_\Omega \nabla\varphi\cdot\mathbf{v}_2\Delta\varphi\;\mathrm dx \right| &\leq \normL{3}{\nabla\varphi}\normL{6}{\mathbf{v}_2}\norml{2}{\Delta\varphi}\\ \nonumber &\leq C\normH{1}{\mathbf{v}_2}\normL{2}{\nabla\varphi}\normh{2}{\varphi}^\frac{1}{2}\normL{2}{\nabla\Delta\varphi}^{\frac{1}{2}}\\ \nonumber &\leq C\normH{1}{\mathbf{v}_2}\normh{1}{\varphi}\left(\norml{2}{\varphi} + \norml{2}{\Delta\varphi}+\normL{2}{\nabla\Delta\varphi}\right)\\ \label{UNIQUE_9}&\leq C\left(1+\normH{1}{\mathbf{v}_2}^2\right)\normh{1}{\varphi}^2 + \frac{1}{8}\norml{2}{\Delta\varphi}^2 + \frac{1}{4}\normL{2}{\nabla\Delta\varphi}^2. \end{align} Due to \textnormal{(B3)}, \eqref{LEM_GAGNIR_EST}, \eqref{THM_WSOL_1_EST_1} and the continuous embedding $H^1\hookrightarrow L^6$, we obtain \begin{equation} \label{UNIQUE_10} \norml{2}{\psi'(\varphi_1)-\psi'(\varphi_2)}^2\leq C\left(1+\normh{2}{\varphi_1}^2+\normh{2}{\varphi_2}^2\right)\normh{1}{\varphi}^2. 
\end{equation} By the continuous embedding $\mathbf H^1\hookrightarrow \mathbf L^6$ and \eqref{THM_WSOL_1_EST_1}, this gives \begin{align} \nonumber \left|\int_\Omega (\psi'(\varphi_2)-\psi'(\varphi_1))\nabla\varphi_1\cdot\mathbf{v}\;\mathrm dx \right|&\leq C\norml{2}{\psi'(\varphi_1)-\psi'(\varphi_2)}^2\normL{3}{\nabla\varphi_1}^2 + \delta_1\normL{6}{\mathbf{v}}^2\\ \label{UNIQUE_11}&\leq C\left(1+\normh{2}{\varphi_1}^4+\normh{2}{\varphi_2}^4\right)\normh{1}{\varphi}^2 + C\delta_1\normH{1}{\mathbf{v}}^2, \end{align} with $\delta_1>0$ to be chosen later. Recalling \eqref{UNIQUE_4} and \eqref{UNIQUE_6}, similar arguments yield \begin{align} \nonumber \left|\int_\Omega (\psi'(\varphi_2)-\psi'(\varphi_1))\nabla\varphi_1\cdot\mathbf{u} \;\mathrm dx \right|&\leq C\norml{2}{\psi'(\varphi_1)-\psi'(\varphi_2)}\normL{3}{\nabla\varphi_1}\normL{6}{\mathbf{u}}\\ \nonumber &\leq C\left(1+\norml{12}{\varphi_1}^4+\norml{12}{\varphi_2}^4\right)\normh{1}{\varphi}\normL{3}{\nabla\varphi_1}\normH{1}{\mathbf{u}}\\ \label{UNIQUE_12} &\leq C\left(1+\normh{2}{\varphi_1}^4+\normh{2}{\varphi_2}^4 + \norml{6}{\sigma_2}^4\right)\normh{1}{\varphi}^2 + C\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2. \end{align} Again using \eqref{UNIQUE_4} and \eqref{UNIQUE_6} and the continuous embeddings $H^1\hookrightarrow L^6,\,\mathbf H^1\hookrightarrow \mathbf L^6$, we obtain \begin{equation} \label{UNIQUE_13} \left|\int_\Omega (\mu_2+\chi\sigma_2)\nabla\varphi\cdot(\mathbf{v}-\mathbf{u} )\;\mathrm dx \right|\leq C\left(1+\normh{1}{\mu_2}^2+\normh{1}{\sigma_2}^2\right)\normh{1}{\varphi}^2 + C\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2 + \delta_2\normH{1}{\mathbf{v}}^2, \end{equation} with $\delta_2>0$ to be chosen later. 
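Since estimates of this type recur throughout with the constants $\delta_i$, let us record, for the reader's convenience, the elementary weighted form of Young's inequality that is invoked each time: for all $a,b\geq 0$ and $\delta>0$,
\begin{equation*}
ab\leq \delta a^2 + \frac{1}{4\delta}b^2,
\end{equation*}
which follows from expanding $0\leq \big(\sqrt{\delta}\,a - \tfrac{b}{2\sqrt{\delta}}\big)^2$.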
Once more using \eqref{THM_WSOL_1_EST_1},\eqref{UNIQUE_4} and \eqref{UNIQUE_6} and the continuous embedding $\mathbf H^1\hookrightarrow \mathbf L^6$, we deduce that \begin{align} \nonumber\left|\int_\Omega \Delta\varphi\nabla\varphi_1\cdot\mathbf{u} \;\mathrm dx \right|&\leq \norml{2}{\Delta\varphi}\normL{3}{\nabla\varphi_1}\normL{6}{\mathbf{u}}\\ &\nonumber \leq C\normL{3}{\nabla\varphi_1}^2\normH{1}{\mathbf{u}}^2 + \frac{1}{8}\norml{2}{\Delta\varphi}^2\\ \label{UNIQUE_14}&\leq C\left(1+\normh{2}{\varphi_1}^2 + \normh{1}{\sigma_2}^4\right)\normh{1}{\varphi}^2 + C\normh{2}{\varphi_1}\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2+ \frac{1}{8}\norml{2}{\Delta\varphi}^2. \end{align} Using the assumptions on $\eta(\cdot)$, we obtain \begin{align} \nonumber\left|\int_\Omega 2\eta(\varphi_1) {\mathbf{D}}\mathbf{v}\colon\nabla\mathbf{u} + \nu\mathbf{v}\cdot\mathbf{u} \;\mathrm dx \right|&\leq \frac{\eta_0}{2}\normL{2}{{\mathbf{D}}\mathbf{v}}^2 + \frac{\nu}{2}\normL{2}{\mathbf{v}}^2 + C\normH{1}{\mathbf{u}}^2\\ \nonumber&\leq \frac{\eta_0}{2}\normL{2}{{\mathbf{D}}\mathbf{v}}^2 + \frac{\nu}{2}\normL{2}{\mathbf{v}}^2 + C\left(1+\normh{1}{\sigma_2}^2\right)\normh{1}{\varphi}^2\\ \label{UNIQUE_15}&\quad +C\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2 . 
\end{align} Furthermore, using the boundedness of $\eta'(\cdot)$, elliptic regularity and \eqref{LEM_GAGNIR_EST}, \eqref{UNIQUE_4}, \eqref{UNIQUE_6}, we obtain \begin{align} \nonumber &\left|\int_\Omega 2(\eta(\varphi_1)-\eta(\varphi_2)){\mathbf{D}}\mathbf{v}_2\colon\nabla(\mathbf{v}-\mathbf{u} )\;\mathrm dx \right| \\ \nonumber &\leq C\norml{\infty}{\varphi}\normL{2}{{\mathbf{D}}\mathbf{v}_2}\left(\normL{2}{\nabla\mathbf{v}}+\normL{2}{\nabla\mathbf{u} }\right)\\ \nonumber &\leq C\normh{1}{\varphi}^{\frac{3}{4}}\normh{3}{\varphi}^{\frac{1}{4}}\normL{2}{{\mathbf{D}}\mathbf{v}_2}\left(\normL{2}{\nabla\mathbf{v}}+\normL{2}{\nabla\mathbf{u}}\right)\\ \nonumber &\leq \delta_3\normL{2}{\nabla\mathbf{v}}^2+C\normL{2}{\nabla\mathbf{u}}^2 + C\normh{1}{\varphi}^{\frac{3}{2}}\normh{3}{\varphi}^{\frac{1}{2}}\normL{2}{{\mathbf{D}}\mathbf{v}_2}^2\\ &\leq \delta_3\normL{2}{\nabla\mathbf{v}}^2+C\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2+ \frac{1}{4}\normL{2}{\nabla\Delta\varphi}^2 + C\left(1+\normL{2}{{\mathbf{D}}\mathbf{v}_2}^{\frac{8}{3}}+\normh{1}{\sigma_2}^2\right)\normh{1}{\varphi}^2, \end{align} with $\delta_3>0$ to be chosen later. Due to \eqref{UNIQUE_6} and since $\varphi_2\in L^{\infty}(H^1)$ with bounded norm, we get \begin{equation} \label{UNIQUE_16}\left|\int_\Omega ((\Gamma_{\varphi,1}-\Gamma_{\varphi,2})-\varphi_2(\Gamma_{\mathbf{v},1}-\Gamma_{\mathbf{v},2}))\varphi\;\mathrm dx \right|\leq C\left(1+\norml{6}{\sigma_2}\right)\normh{1}{\varphi}^2+C\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2. 
\end{equation} Applying \eqref{THM_WSOL_1_EST_1} and the continuous embeddings $H^1\hookrightarrow L^6,\,\mathbf H^1\hookrightarrow\mathbf L^6$ gives \begin{align} \nonumber \left|\int_\Omega (\nabla\varphi_1\cdot\mathbf{v} + \nabla\varphi\cdot\mathbf{v}_2)\varphi\;\mathrm dx \right| &\leq \norm{\nabla\varphi_1}_{\mathbf L^3}\norm{\mathbf{v}}_{\mathbf L^6}\norml{2}{\varphi} + \norm{\nabla\varphi}_{\mathbf L^2}\norm{\mathbf{v}_2}_{\mathbf L^6}\norm{\varphi}_{L^3}\\ \nonumber &\leq C\left(\norm{\nabla\varphi_1}_{\mathbf L^3}\norm{\mathbf{v}}_{\mathbf H^1}\normh{1}{\varphi} + \normh{1}{\varphi}^2\norm{\mathbf{v}_2}_{\mathbf H^1}\right)\\ \label{UNIQUE_17}&\leq C\left(\normh{2}{\varphi_1} + \norm{\mathbf{v}_2}_{\mathbf H^1}\right)\normh{1}{\varphi}^2 + \delta_4\norm{\mathbf{v}}_{\mathbf H^1}^2, \end{align} with $\delta_4>0$ to be chosen later. For the last term on the r.h.s. of \eqref{UNIQUE_5}, we use \textnormal{(A4)} and the continuous embedding $H^1\hookrightarrow L^6$ to obtain \begin{equation} \label{UNIQUE_18}\left|\int_\Omega \Gamma_{\mathbf{v},1}|\varphi|^2\;\mathrm dx \right|\leq C\left(1+\norml{6}{\sigma_2}\right)\normh{1}{\varphi}^2. \end{equation} We now estimate the first term on the r.h.s. of \eqref{UNIQUE_5}. To this end, we first observe that \begin{equation*} \nabla(\psi'(\varphi_1)-\psi'(\varphi_2)) = \psi''(\varphi_1)\nabla\varphi_1 - \psi''(\varphi_2)\nabla\varphi_2 = \psi''(\varphi_1)\nabla\varphi + \nabla\varphi_2(\psi''(\varphi_1)-\psi''(\varphi_2)). \end{equation*} Due to \eqref{ASS_PSI_2}-\eqref{ASS_PSI_3} and \eqref{THM_WSOL_1_EST_1}, we obtain \begin{equation*} \int_\Omega |\psi''(\varphi_1)\nabla\varphi|^2\;\mathrm dx \leq C\int_\Omega (1+|\varphi_1|^8)|\nabla\varphi|^2\;\mathrm dx \leq C\left(1+\norml{\infty}{\varphi_1}^8\right)\normL{2}{\nabla\varphi}^2\leq C\left(1+\normh{3}{\varphi_1}^2\right)\normh{1}{\varphi}^2. 
\end{equation*} Applying \eqref{THM_WSOL_1_EST_1} and \eqref{ASS_PSI_7}, we conclude that \begin{align*} \int_\Omega |\nabla\varphi_2(\psi''(\varphi_1)-\psi''(\varphi_2))|^2\;\mathrm dx &\leq C\int_\Omega (1+|\varphi_1|^6+|\varphi_2|^6)|\nabla\varphi_2|^2|\varphi|^2\;\mathrm dx \\ &\leq C\left(1+\norml{18}{\varphi_1}^6 +\norml{18}{\varphi_2}^6\right)\normL{6}{\nabla\varphi_2}^2\norml{6}{\varphi}^2\\ &\leq C\left(1+\normh{2}{\varphi_1}^4 +\normh{2}{\varphi_2}^4\right)\normh{1}{\varphi}^2. \end{align*} Combining the last two estimates, we obtain \begin{equation} \label{UNIQUE_19}\norm{\nabla(\psi'(\varphi_1)-\psi'(\varphi_2))}_{\mathbf L^2}^2\leq C\left(1+\normh{2}{\varphi_1}^4 +\normh{2}{\varphi_2}^4 + \normh{3}{\varphi_1}^2\right)\normh{1}{\varphi}^2. \end{equation} From this inequality, we deduce that \begin{align} \nonumber \left|\int_\Omega \nabla(\psi'(\varphi_1)-\psi'(\varphi_2))\cdot(\nabla\Delta\varphi-\nabla\varphi)\;\mathrm dx \right|&\leq C\left(1+\normh{2}{\varphi_1}^4 +\normh{2}{\varphi_2}^4 + \normh{3}{\varphi_1}^2\right)\normh{1}{\varphi}^2\\ \label{UNIQUE_20}&\quad + \frac{1}{8}\normL{2}{\nabla\Delta\varphi}^2. \end{align} Finally, using \eqref{UNIQUE_3}, it is easy to check that \begin{equation} \label{UNIQUE_20a}\left|\int_\Omega \chi\nabla\sigma\cdot(\nabla\varphi-\nabla\Delta\varphi){\mathrm{d}} x\right|\leq C\left(1+\norml{6}{\sigma_2}^2\right)\normh{1}{\varphi}^2 + C\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2 + \frac{1}{8}\normL{2}{\nabla\Delta\varphi}^2. 
\end{equation} Using \eqref{UNIQUE_7}-\eqref{UNIQUE_20a} in \eqref{UNIQUE_5} and choosing \begin{equation*} \delta_1 = \delta_2 = \delta_3 = \delta_4 = \frac{\min\{\frac{\nu}{2},\eta_0\}}{8C_K^2}\eqqcolon C_1, \end{equation*} where $C_K$ is Korn's constant, we end up with \begin{equation} \label{UNIQUE_21}\frac{{\mathrm{d}}}{\;\mathrm dt }\frac{1}{2}\left(\normL{2}{\nabla\varphi}^2 + \norml{2}{\varphi}^2\right) + 4C_1\normH{1}{\mathbf{v}}^2 + \frac{1}{2}\int_\Omega |\Delta\varphi|^2+ |\nabla\Delta\varphi|^2\;\mathrm dx \leq \alpha_1(t)\normh{1}{\varphi}^2+ \alpha_2(t)\norm{\sigma_{\infty}}_{L^2(\partial\Omega)}^2, \end{equation} where \begin{align*} \alpha_1(t)&\coloneqq C\left(1+\normh{2}{\varphi_1}^4 + \normh{2}{\varphi_2}^4 + \normh{3}{\varphi_1}^2+\normh{2}{\varphi_2}^2+\normH{1}{\mathbf{v}_2}^{\frac{8}{3}} + \normh{1}{\mu_2}^2+\normh{1}{\sigma_1}^2+\normh{1}{\sigma_2}^4\right)\\ \alpha_2(t)&\coloneqq C\left(1+\normh{2}{\varphi_1} + \normh{2}{\varphi_2}\right). \end{align*} Due to \eqref{THM_WSOL_1_EST_1}-\eqref{THM_WSOL_1_EST_2}, it follows that $\alpha_1\in L^1(0,T),\,\alpha_2\in L^4(0,T)$. We remark that $\alpha_1\in L^1(0,T)$ only holds provided $\sigma_2\in L^4(H^1)$ with bounded norm. Then, a Gronwall argument (see \cite[Lemma 3.1]{GarckeLam3}) applied to \eqref{UNIQUE_21} yields \begin{equation*} \sup_{s\in (0,T]}\normh{1}{\varphi(s)}^2 + \int_{0}^{T}\normH{1}{\mathbf{v}}^2{\mathrm{d}} s + \int_{0}^{T}\int_\Omega |\Delta\varphi|^2+ |\nabla\Delta\varphi|^2\;\mathrm dx {\mathrm{d}} s \leq C\left(\normh{1}{\varphi_0}^2 + \norm{\sigma_{\infty}}_{L^4(L^2(\partial\Omega))}^2\right). \end{equation*} Together with elliptic regularity theory, this gives \begin{equation} \label{UNIQUE_22}\norm{\varphi}_{L^{\infty}(H^1)\cap L^2(H^3)} + \norm{\mathbf{v}}_{L^2(\mathbf H^1)} \leq C\left( \normh{1}{\varphi_0} + \norm{\sigma_{\infty}}_{L^4(L^2(\partial\Omega))}\right). 
\end{equation} Now, from \eqref{UNIQUE_3} and \eqref{UNIQUE_22}, we immediately obtain \begin{equation} \label{UNIQUE_23}\norm{\sigma}_{L^2(H^1)}\leq C\left(\normh{1}{\varphi_0} + \norm{\sigma_{\infty}}_{L^4(L^2(\partial\Omega))}\right). \end{equation} Using \eqref{LEM_GAGNIR_EST}, \eqref{UNIQUE_10}, \eqref{UNIQUE_19}, \eqref{UNIQUE_22} and the boundedness of $\varphi_1,\varphi_2\in L^{\infty}(H^1)\cap L^4(H^2)\cap L^2(H^3)$, it is straightforward to check that \begin{equation*} \norm{\psi'(\varphi_1)-\psi'(\varphi_2)}_{L^2(H^1)}\leq C\left(\normh{1}{\varphi_0} + \norm{\sigma_{\infty}}_{L^4(L^2(\partial\Omega))}\right). \end{equation*} Together with \eqref{UNIQUE_22}-\eqref{UNIQUE_23}, a comparison argument in \eqref{DIFF_3} yields \begin{equation} \label{UNIQUE_24} \norm{\mu}_{L^2(H^1)} \leq C\left( \normh{1}{\varphi_0} + \norm{\sigma_{\infty}}_{L^4(L^2(\partial\Omega))}\right). \end{equation} Using \eqref{UNIQUE_22}-\eqref{UNIQUE_24}, a comparison argument in \eqref{DIFF_2} yields \begin{equation} \label{UNIQUE_25} \norm{\varphi}_{H^1((H^1)^*)} \leq C\left( \normh{1}{\varphi_0} + \norm{\sigma_{\infty}}_{L^4(L^2(\partial\Omega))}\right). \end{equation} Combining \eqref{UNIQUE_22}-\eqref{UNIQUE_25}, we deduce the estimate \begin{align} \nonumber&\norm{\varphi}_{H^1((H^1)^*)\cap L^{\infty}(H^1)\cap L^2(H^3)} + \norm{\mu}_{L^2(H^1)} + \norm{\sigma}_{L^2(H^1)} + \norm{\mathbf{v}}_{L^2(\mathbf H^1)}\\ \label{UNIQUE_26}&\quad \leq C\left( \normh{1}{\varphi_0} + \norm{\sigma_{\infty}}_{L^4(L^2(\partial\Omega))}\right). \end{align} \paragraph{Step 4:} It remains to get an estimate on the pressure. 
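We point out in passing that the auxiliary divergence problems employed in Step 2 and below admit solutions because the boundary data are chosen exactly so that the compatibility condition dictated by the divergence theorem holds; for the function $\mathbf{u}$ from Step 2, for instance, using $|\mathbf n| = 1$,
\begin{equation*}
\int_{\partial\Omega}\mathbf{u}\cdot\mathbf n\,{\mathrm{d}}\mathcal{H}^{d-1} = \frac{1}{|\partial\Omega|}\left(\int_\Omega\Gamma_{\mathbf{v},1}-\Gamma_{\mathbf{v},2}\;\mathrm dx\right)\int_{\partial\Omega}|\mathbf n|^2\,{\mathrm{d}}\mathcal{H}^{d-1} = \int_\Omega \textnormal{div}(\mathbf{u})\;\mathrm dx.
\end{equation*}
The same reasoning applies to the function constructed below.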
To this end, let ${\mathbf{q}}\in \mathbf H^1$ be a solution of \begin{equation*} \textnormal{div}({\mathbf{q}}) = p\quad\text{in } \Omega,\quad {\mathbf{q}} = \frac{1}{|\partial\Omega|}\left(\int_\Omega p\;\mathrm dx \right)\mathbf n \quad\text{on }\partial\Omega, \end{equation*} such that \begin{equation} \label{UNIQUE_27}\normH{1}{{\mathbf{q}}}\leq c\norml{2}{p}, \end{equation} with $c$ depending only on $\Omega$. Then, choosing $\boldsymbol{\Phi} = {\mathbf{q}}$ in \eqref{DIFF_1}, we obtain \begin{align} \nonumber \norml{2}{p}^2 &= \int_\Omega (2\eta(\varphi_1){\mathbf{D}}\mathbf{v} + \lambda(\varphi_1)\textnormal{div}(\mathbf{v})\mathbf{I})\colon \nabla{\mathbf{q}}+\nu\mathbf{v}\cdot{\mathbf{q}}\;\mathrm dx - \int_\Omega (\mu+\chi\sigma)\nabla\varphi_1\cdot{\mathbf{q}} + (\mu_2+\chi\sigma_2)\nabla\varphi\cdot{\mathbf{q}}\;\mathrm dx\\ \label{UNIQUE_28}&\quad +\int_\Omega (2(\eta(\varphi_1)-\eta(\varphi_2)){\mathbf{D}}\mathbf{v}_2 + (\lambda(\varphi_1)-\lambda(\varphi_2))\textnormal{div}(\mathbf{v}_2)\mathbf{I} )\colon \nabla{\mathbf{q}} \;\mathrm dx. \end{align} Using \eqref{THM_WSOL_1_EST_1}-\eqref{THM_WSOL_1_EST_2} and (A3), a straightforward calculation shows that \begin{align*} &\left|\int_\Omega (2\eta(\varphi_1){\mathbf{D}}\mathbf{v} + \lambda(\varphi_1)\textnormal{div}(\mathbf{v})\mathbf{I})\colon \nabla{\mathbf{q}}+\nu\mathbf{v}\cdot{\mathbf{q}}\;\mathrm dx - \int_\Omega (\mu+\chi\sigma)\nabla\varphi_1\cdot{\mathbf{q}} + (\mu_2+\chi\sigma_2)\nabla\varphi\cdot{\mathbf{q}}\;\mathrm dx\right|\\ &\leq C\left(\normH{1}{\mathbf{v}}^2 + \normh{1}{\mu+\chi\sigma}^2 + \normh{1}{\mu_2+\chi\sigma_2}^2\normh{1}{\varphi}^2\right) + \frac{1}{4}\norml{2}{p}^2. 
\end{align*} For the remaining terms, we use again \eqref{THM_WSOL_1_EST_1}-\eqref{THM_WSOL_1_EST_2} and (A3) to obtain \begin{equation*} \left|\int_\Omega (2(\eta(\varphi_1)-\eta(\varphi_2)){\mathbf{D}}\mathbf{v}_2 + (\lambda(\varphi_1)-\lambda(\varphi_2))\textnormal{div}(\mathbf{v}_2)\mathbf{I} )\colon \nabla{\mathbf{q}} \;\mathrm dx\right| \leq C\normH{1}{\mathbf{v}_2}^2\norml{\infty}{\varphi}^2 + \frac{1}{4}\norml{2}{p}^2. \end{equation*} Using the last two inequalities in \eqref{UNIQUE_28}, integrating the resulting estimate in time from $0$ to $T$ and using Young's generalised inequality, we deduce that \begin{equation*} \norm{p}_{L^2(L^2)}^2\leq C\left(\norm{\mathbf{v}}_{L^2(\mathbf H^1)}^2 + \norm{\mu+\chi\sigma}_{L^2(H^1)}^2 + \norm{\mu_2+\chi\sigma_2}_{L^2(H^1)}^2\norm{\varphi}_{L^{\infty}(H^1)}^2 + \norm{\mathbf{v}_2}_{L^{\frac{8}{3}}(\mathbf H^1)}^2\norm{\varphi}_{L^8(L^{\infty})}^2\right), \end{equation*} where the last product is integrable in time by Hölder's inequality with exponents $\frac{4}{3}$ and $4$. Therefore, using \eqref{THM_WSOL_1_EST_1}-\eqref{THM_WSOL_1_EST_2}, \eqref{UNIQUE_26} and the continuous embedding $L^{\infty}(H^1)\cap L^2(H^3)\hookrightarrow L^{8}(L^{\infty})$, the last inequality implies \begin{equation} \label{UNIQUE_29}\norm{p}_{L^2(L^2)}\leq C\left(\normh{1}{\varphi_0} + \norm{\sigma_{\infty}}_{L^4(L^2(\partial\Omega))}\right). \end{equation} Together with \eqref{UNIQUE_26}, we obtain \eqref{THM_CONTDEP_EST}, whence the proof is complete. \section{Existence of strong solutions} In this section, we will prove Theorem \ref{THM_SSOL}. The testing procedure can again be justified by a Galerkin scheme. In the following, we again set $\epsilon = 1$, as this has no further consequence for the analysis. 
Then, arguing as before, we arrive at \begin{align} \nonumber&\norm{\varphi}_{H^1((H^1)^*)\cap L^{\infty}(H^1)\cap L^4(H^2)\cap L^2(H^3) } + \norm{\sigma}_{L^4(H^1)} + \norm{\mu}_{L^2(H^1)\cap L^4(L^2)}\\ \label{APRI_SSOL_1}&\quad + \norm{\textnormal{div}(\varphi\mathbf{v})}_{L^2(L^2)} + \norm{\mathbf{v}}_{L^{\frac{8}{3}}(\mathbf H^1)}+ \norm{p}_{L^2(L^2)}\leq C. \end{align} We will now show the result in a series of higher order estimates. \paragraph{Step 1:} Observing that \eqref{WFORM_1d} is the weak formulation of \begin{alignat*}{3} -\Delta\sigma + h(\varphi)\sigma &= 0&&\quad\text{a.\,e. in }\Omega,\\ \partial_\n\sigma + K\sigma &= K\sigma_{\infty}&& \quad \text{a.\,e. on }\partial\Omega, \end{alignat*} due to the assumptions on $h(\cdot)$ and using \cite[Thm. 2.4.2.6]{Grisvard}, we deduce that \begin{equation*} \normh{2}{\sigma}\leq C\norm{K\sigma_{\infty}}_{H^{\frac{1}{2}}(\partial\Omega)}. \end{equation*} Therefore, using the continuous embedding $H^2\hookrightarrow L^{\infty}$ and the fact that $\sigma_{\infty}\in H^1(H^{\frac{1}{2}}(\partial\Omega))\hookrightarrow C^0(H^{\frac{1}{2}}(\partial\Omega))$, we have \begin{equation} \label{APRI_SSOL_3}\norm{\sigma}_{L^{\infty}(H^2)\cap L^{\infty}(L^{\infty})}\leq C. \end{equation} By \textnormal{(A3)}, this yields \begin{equation} \label{APRI_SSOL_4}\norm{\textnormal{div}(\mathbf{v})}_{L^{\infty}(L^{\infty})} + \norm{\Gamma_{\varphi}}_{L^{\infty}(L^{\infty})}\leq C. \end{equation} Now, for $h>0$ we introduce the difference quotient \begin{equation*} \partial_t^h u(t) = \frac{1}{h}(u(t+h)-u(t)). 
\end{equation*} Then, using \eqref{WFORM_1d}, we see that \begin{equation*} 0 = \int_\Omega \nabla \partial_t^h\sigma(t)\cdot\nabla\Phi + \big(\partial_t^h(h(\varphi(t)))\sigma(t+h) + \partial_t^h\sigma(t)h(\varphi(t)))\Phi\;\mathrm dx + \int_{\partial\Omega}K(\partial_t^h\sigma(t)-\partial_t^h\sigma_{\infty}(t))\Phi{\mathrm{d}}\mathcal{H}^{d-1} \end{equation*} holding for almost every $t\in (0,T-h]$. Testing this equation with $\Phi = \partial_t^h\sigma(t)$, integrating in time from $0$ to $T-h$ and using the non-negativity of $h(\cdot)$, we obtain \begin{align} \nonumber\int_{0}^{T-h}\normL{2}{\nabla\partial_t^h\sigma(t)}^2{\mathrm{d}} t + K\int_{0}^{T-h}\norm{\partial_t^h\sigma(t)}_{L^2(\partial\Omega)}^2{\mathrm{d}} t&\leq \int_{0}^{T-h}\int_\Omega \partial_t^h(h(\varphi(t)))\sigma(t+h)\partial_t^h\sigma(t){\mathrm{d}} x{\mathrm{d}} t\\ \label{APRI_SSOL_5} &\quad + \int_{0}^{T-h}\int_{\partial\Omega}K\partial_t^h\sigma(t)\partial_t^h\sigma_{\infty}(t){\mathrm{d}}\mathcal{H}^{d-1}{\mathrm{d}} t. \end{align} To estimate the r.h.s. of this equation, we use \eqref{APRI_SSOL_3} and the Lipschitz-continuity of $h(\cdot)$ to obtain \begin{align} \nonumber\left|\int_{0}^{T-h}\int_\Omega \partial_t^h(h(\varphi(t)))\sigma(t+h)\partial_t^h\sigma(t){\mathrm{d}} x{\mathrm{d}} t\right| &\leq C\int_{0}^{T-h}\int_\Omega |\partial_t^h\varphi(t)\partial_t^h\sigma(t)|{\mathrm{d}} x{\mathrm{d}} t\\ \nonumber&\leq C\norm{\partial_t^h\varphi}_{L^2(0,T-h;(H^1)^*)}\norm{\partial_t^h\sigma}_{L^2(0,T-h;H^1)}\\ \label{APRI_SSOL_6}&\leq C\norm{\partial_t\varphi}_{L^2(0,T;(H^1)^*)}\norm{\partial_t^h\sigma}_{L^2(0,T-h;H^1)}. 
\end{align} With similar arguments and using the trace theorem, the remaining term can be estimated by \begin{equation} \label{APRI_SSOL_6a}\left|\int_{0}^{T-h}\int_{\partial\Omega}K\partial_t^h\sigma(t)\partial_t^h\sigma_{\infty}(t){\mathrm{d}}\mathcal{H}^{d-1}{\mathrm{d}} t\right|\leq C\norm{\partial_t\sigma_{\infty}}_{L^2(0,T;H^{\frac{1}{2}}(\partial\Omega))}\norm{\partial_t^h\sigma}_{L^2(0,T-h;H^1)}. \end{equation} Using \eqref{APRI_SSOL_5}-\eqref{APRI_SSOL_6a} together with \eqref{APRI_SSOL_1} and \textnormal{(C2)}, an application of Poincaré's inequality implies \begin{equation} \label{APRI_SSOL_7}\norm{\partial_t^h\sigma}_{L^2(0,T-h;H^1)}\leq C\left(\norm{\partial_t\varphi}_{L^2(0,T;(H^1)^*)}+\norm{\partial_t\sigma_{\infty}}_{L^2(0,T;H^{\frac{1}{2}})}\right)\leq C. \end{equation} Since the constant $C$ is independent of $h>0$, this yields \begin{equation*} \norm{\partial_t\sigma}_{L^2(H^1)}\leq C. \end{equation*} Combining this inequality with \eqref{APRI_SSOL_3} and using the continuous embedding $H^1(H^1)\hookrightarrow C^0(H^1)$, we obtain \begin{equation} \label{APRI_SSOL_8}\norm{\sigma}_{H^1(H^1)\cap C^0(H^1)\cap L^{\infty}(H^2)}\leq C. \end{equation} \paragraph{Step 2:} Choosing $\Phi = \partial_t\varphi$ in \eqref{WFORM_1b} and $\Phi = \Delta\partial_t\varphi$ in \eqref{WFORM_1c} and integrating by parts, we obtain \begin{align} \nonumber \frac{{\mathrm{d}}}{\;\mathrm dt }\frac{1}{2}\int_\Omega |\Delta\varphi|^2\;\mathrm dx + \int_\Omega |\partial_t\varphi|^2 &= -\int_\Omega (\textnormal{div}(\varphi\mathbf{v}) -\Gamma_{\varphi})\partial_t\varphi \;\mathrm dx + \chi\int_\Omega \nabla\sigma\cdot\nabla\partial_t\varphi{\mathrm{d}} x \\ \label{APRI_SSOL_9}&\quad + \int_\Omega \psi'''(\varphi)|\nabla\varphi|^2\partial_t\varphi + \psi''(\varphi)\Delta\varphi\partial_t\varphi\;\mathrm dx . \end{align} We recall that $\Gamma_{\varphi},~\Gamma_{\mathbf{v}}\in L^2(L^2)$ with bounded norm. 
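Before estimating the terms in \eqref{APRI_SSOL_9}, let us note for completeness how the uniform bound \eqref{APRI_SSOL_7} in Step 1 yields the time derivative: since the bound is independent of $h$, there is a sequence $h_k\to 0$ such that
\begin{equation*}
\partial_t^{h_k}\sigma \rightharpoonup \partial_t\sigma\quad\text{weakly in } L^2(0,T';H^1)\text{ for every }T'\in(0,T),
\end{equation*}
where the weak limit is identified with the distributional time derivative by testing against smooth functions, and weak lower semicontinuity of the norm transfers the bound to $\partial_t\sigma$.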
Then, using Hölder's and Young's inequalities, we can estimate the first two terms on the r.h.s. of \eqref{APRI_SSOL_9} by \begin{equation} \label{APRI_SSOL_10}\left|\int_\Omega (\textnormal{div}(\varphi\mathbf{v}) -\Gamma_{\varphi} )\partial_{t}\varphi \;\mathrm dx \right|\leq C\left(\norml{2}{\textnormal{div}(\varphi\mathbf{v})}^2 +\norml{2}{\Gamma_{\varphi}}^2\right) +\frac{1}{4}\norml{2}{\partial_{t}\varphi}^2. \end{equation} For the last term on the r.h.s. of \eqref{APRI_SSOL_9}, we use Hölder's and Young's inequalities together with \eqref{LEM_GAGNIR_EST}, \eqref{ASS_PSI_2}-\eqref{ASS_PSI_3} and \eqref{APRI_SSOL_1} to obtain \begin{equation} \label{APRI_SSOL_11} \left|\int_\Omega \psi''(\varphi)\Delta\varphi\partial_{t}\varphi\;\mathrm dx \right|\leq C\left(1+\norml{\infty}{\varphi}^4\right)\norml{2}{\Delta\varphi}\norml{2}{\partial_{t}\varphi}\leq \frac{1}{4}\norml{2}{\partial_{t}\varphi}^2 + C\left(1+\normh{3}{\varphi}^2\right)\norml{2}{\Delta\varphi}^2. \end{equation} Now, using \textnormal{(C3)}, Hölder's and Young's inequalities, \eqref{ELLIPTIC_EST}, \eqref{LEM_GAGNIR_EST} and \eqref{APRI_SSOL_1}, we obtain \begin{align} \nonumber\left|\int_\Omega \psi'''(\varphi)|\nabla\varphi|^2\partial_{t}\varphi\;\mathrm dx\right|&\leq C\left(1+\norml{\infty}{\varphi}^3\right)\normL{4}{\nabla\varphi}^2\norml{2}{\partial_{t}\varphi}\\ \nonumber &\leq C\left(1+\normh{3}{\varphi}^{\frac{3}{4}}\right)\normh{2}{\varphi}^{\frac{3}{2}}\norml{2}{\partial_{t}\varphi}\\ \nonumber &\leq C\left(1+\normh{3}{\varphi}\right)\left(\norml{2}{\varphi}+\norml{2}{\Delta\varphi}\right)\norml{2}{\partial_{t}\varphi}\\ \label{APRI_SSOL_13}&\leq C\left(1+\normh{3}{\varphi}^2\right)\left(1+\norml{2}{\Delta\varphi}^2\right) +\frac{1}{4}\norml{2}{\partial_{t}\varphi}^2. \end{align} For the remaining term on the r.h.s. 
of \eqref{APRI_SSOL_9}, we observe that \begin{equation*} \chi\int_\Omega \nabla\sigma\cdot\nabla\partial_{t}\varphi{\mathrm{d}} x = \frac{{\mathrm{d}}}{{\mathrm{d}} t}\chi\int_\Omega\nabla\sigma\cdot\nabla\varphi{\mathrm{d}} x - \chi\int_\Omega \nabla\partial_{t}\sigma\cdot\nabla\varphi{\mathrm{d}} x \quad\text{for a.\,e. }t\in (0,T). \end{equation*} Collecting \eqref{APRI_SSOL_9}-\eqref{APRI_SSOL_11}, \eqref{APRI_SSOL_13} and using the last identity, we end up with \begin{align*} \frac{{\mathrm{d}}}{\;\mathrm dt}\frac{1}{2}\int_\Omega |\Delta\varphi|^2\;\mathrm dx + \frac{1}{4}\int_\Omega |\partial_{t}\varphi|^2 &\leq C\left(1+\norml{2}{\textnormal{div}(\varphi\mathbf{v})}^2 + \norml{2}{\Gamma_{\varphi}}^2 +\normh{3}{\varphi}^2\right)+C\left(1 + \normh{3}{\varphi}^2\right)\norml{2}{\Delta\varphi}^2\\ &\quad + \frac{{\mathrm{d}}}{{\mathrm{d}} t}\chi\int_\Omega\nabla\sigma\cdot\nabla\varphi{\mathrm{d}} x - \chi\int_\Omega \nabla\partial_{t}\sigma\cdot\nabla\varphi{\mathrm{d}} x. \end{align*} Integrating this inequality in time from $0$ to $s\in (0,T]$, we obtain \begin{align} \nonumber\frac{1}{2}\norml{2}{\Delta\varphi(s)}^2 + \frac{1}{4}\norm{\partial_{t}\varphi}_{L^2(0,s;L^2)}^2 &\leq \norml{2}{\Delta\varphi_0}^2 + \int_{0}^{s}\alpha_1(t) + \alpha_2(t)\norml{2}{\Delta\varphi(t)}^2{\mathrm{d}} t\\ \nonumber&\quad + \chi\int_\Omega\nabla\sigma(s)\cdot\nabla\varphi(s){\mathrm{d}} x - \chi\int_\Omega\nabla\sigma(0)\cdot\nabla\varphi_0{\mathrm{d}} x\\ \label{APRI_SSOL_15}&\quad - \chi\int_{0}^{s}\int_\Omega \nabla\partial_t\sigma\cdot\nabla\varphi{\mathrm{d}} x{\mathrm{d}} t, \end{align} where \begin{equation*} \alpha_1(t) \coloneqq C\left(1+\norml{2}{\textnormal{div}(\varphi\mathbf{v})}^2 + \norml{2}{\Gamma_{\varphi}}^2 + \normh{3}{\varphi}^2\right),\qquad \alpha_2(t)\coloneqq C\left(1+ \normh{3}{\varphi}^2\right). 
\end{equation*} Now, using \eqref{APRI_SSOL_1}, \eqref{APRI_SSOL_8} and $\varphi_0\in H_N^2$, we obtain \begin{align*} \left|\chi\int_\Omega\nabla\sigma(0)\cdot\nabla\varphi_0{\mathrm{d}} x \right| & = \left|\chi\int_\Omega \sigma(0)\Delta\varphi_0{\mathrm{d}} x\right|\leq C\left(1+\norml{2}{\Delta\varphi_0}^2\right),\\ \left|\chi\int_\Omega \nabla\sigma(s)\cdot\nabla\varphi(s){\mathrm{d}} x\right| &\leq \norm{\sigma}_{C^0(H^1)}\sup_{s\in (0,T]}\normL{2}{\nabla\varphi(s)}\leq C,\\ \left|\chi\int_{0}^{s}\int_\Omega \nabla\partial_t\sigma\cdot\nabla\varphi{\mathrm{d}} x{\mathrm{d}} t\right|&\leq C\int_{0}^{s}\normh{1}{\partial_t\sigma(t)}{\mathrm{d}} t. \end{align*} Together with \eqref{APRI_SSOL_15}, this implies \begin{align} \label{APRI_SSOL_16}\frac{1}{2}\norml{2}{\Delta\varphi(s)}^2 + \frac{1}{4}\norm{\partial_{t}\varphi}_{L^2(0,s;L^2)}^2 &\leq C\left(1+\norml{2}{\Delta\varphi_0}^2\right) + \int_{0}^{s}\beta_1(t) + \beta_2(t)\norml{2}{\Delta\varphi(t)}^2{\mathrm{d}} t, \end{align} where \begin{equation*} \beta_1(t) \coloneqq C\left(1+\normh{1}{\partial_t\sigma}+\norml{2}{\textnormal{div}(\varphi\mathbf{v})}^2 + \norml{2}{\Gamma_{\varphi}}^2 + \normh{3}{\varphi}^2\right),\qquad\beta_2(t)\coloneqq C\left(1 + \normh{3}{\varphi}^2\right). \end{equation*} Due to \eqref{APRI_SSOL_1}, \eqref{APRI_SSOL_4} and \eqref{APRI_SSOL_8}, we know that $\beta_1,\beta_2\in L^1(0,T)$. Together with the assumption $\varphi_0\in H_N^2$, an application of Gronwall's lemma in \eqref{APRI_SSOL_16} yields \begin{equation} \label{APRI_SSOL_17}\norm{\Delta\varphi}_{L^{\infty}(L^2)} + \norm{\partial_{t}\varphi}_{L^2(L^2)}\leq C. 
\end{equation} \paragraph{Step 3:} Combining \eqref{APRI_SSOL_1} and \eqref{APRI_SSOL_17}, from elliptic regularity theory we obtain \begin{align} \nonumber&\norm{\varphi}_{H^1(L^2)\cap L^{\infty}(H^2)\cap L^2(H^3)} + \norm{\sigma}_{H^1(H^1)\cap C^0(H^1)\cap L^{\infty}(H^2)} + \norm{\mu}_{L^2(H^1)\cap L^4(L^2)}\\ \label{APRI_SSOL_18}&\quad + \norm{\textnormal{div}(\varphi\mathbf{v})}_{L^2(L^2)} + \norm{\mathbf{v}}_{L^{\frac{8}{3}}(\mathbf H^1)}+ \norm{p}_{L^2(L^2)}\leq C. \end{align} Using \eqref{APRI_SSOL_18} and applying elliptic regularity theory in \eqref{WFORM_1b}, we obtain \begin{equation} \label{APRI_SSOL_19}\norm{\mu}_{L^2(H^2)}\leq C. \end{equation} Using a comparison argument in \eqref{WFORM_1c} and \eqref{APRI_SSOL_18}-\eqref{APRI_SSOL_19}, it follows that \begin{equation} \label{APRI_SSOL_20}\norm{\mu}_{L^{\infty}(L^2)}\leq C. \end{equation} \paragraph{Step 4:} We now want to apply Lemma \ref{LEM_STOKES} with $q=2$. Using the generalised chain rule for Sobolev functions, \eqref{APRI_SSOL_18} and the assumptions on $\Gamma_{\mathbf{v}}$, it is straightforward to check that \begin{equation} \label{APRI_SSOL_21}\norm{\Gamma_{\mathbf{v}}}_{L^{\infty}(H^1)}\leq C. \end{equation} Furthermore, since $\nabla\varphi\in L^4(\mathbf L^{\infty})$ and $\mu+\chi\sigma\in L^{\infty}(L^2)$ with bounded norms, we observe that \begin{equation} \label{APRI_SSOL_22}\norm{(\mu+\chi\sigma)\nabla\varphi}_{L^4(\mathbf L^2)}\leq C. \end{equation} Hence, using the assumptions on $\lambda(\cdot)$ and $\eta(\cdot)$, an application of \eqref{STOKES_EST} yields \begin{equation*} \normH{2}{\mathbf{v}} + \normh{1}{p}\leq C\big(\eta_0,\eta_1,\lambda_0,\norm{\varphi}_{W^{1,4}}\big)\big(\normL{2}{(\mu+\chi\sigma)\nabla\varphi} + \normh{1}{\Gamma_{\mathbf{v}}}\big).
\end{equation*} Taking this inequality to the fourth power and integrating in time from $0$ to $T$, using \eqref{APRI_SSOL_21}-\eqref{APRI_SSOL_22} and recalling $\varphi\in L^{\infty}(W^{1,4})$ (since $H^2\hookrightarrow W^{1,4}$), we obtain \begin{equation} \label{APRI_SSOL_23}\norm{\mathbf{v}}_{L^4(\mathbf H^2)} + \norm{p}_{L^4(H^1)}\leq C. \end{equation} \paragraph{Step 5:} Finally, due to the compact embedding $H^2\hookrightarrow C^0(\bar{\Omega})$ and because of \eqref{APRI_SSOL_18}, we obtain \begin{equation*} \norm{\varphi}_{C^0(\bar{Q})}\leq C. \end{equation*} For completeness, we summarize all the estimates deduced in this section: \begin{align} \nonumber&\norm{\varphi}_{ H^1(L^2)\cap C^0(\bar{Q})\cap L^{\infty}(H^2)\cap L^2(H^3)} + \norm{\sigma}_{H^1(H^1)\cap C^0(H^1)\cap L^{\infty}(H^2)} + \norm{\mu}_{L^{\infty}(L^2)\cap L^2(H^2)}\\ \label{APRI_SSOL_24}&\quad + \norm{\textnormal{div}(\varphi\mathbf{v})}_{L^2(L^2)} + \norm{\mathbf{v}}_{L^4(\mathbf H^2)}+ \norm{p}_{L^4(H^1)}\leq C. \end{align} These a priori estimates, together with a Galerkin scheme, suffice to pass to the limit in the weak formulation and show the existence of strong solutions. For the details of passing to the limit, we again refer to \cite{EbenbeckGarcke}, \cite{GarckeLam1}. \paragraph{Step 6:} Since \eqref{MEQ_4} holds a.\,e. in $Q$, we see that $\varphi$ is a solution of \begin{alignat*}{3} \Delta\varphi &= \psi'(\varphi) - \mu - \chi\sigma&&\quad\text{a.\,e. in }Q,\\ \nabla\varphi\cdot\mathbf n &= 0&&\quad\text{a.\,e. on }\Sigma. \end{alignat*} Since $\psi'(\varphi) - \mu - \chi\sigma\in L^2(H^2)$ with bounded norm, elliptic regularity theory implies \begin{equation} \label{APRI_SSOL_25}\norm{\varphi}_{L^2(H^4)}\leq C. \end{equation} Due to the continuous embedding $L^{\infty}(H^1)\cap L^2(H^3)\hookrightarrow L^8(L^{\infty})$ applied to $\nabla\varphi$, together with \eqref{APRI_SSOL_24}-\eqref{APRI_SSOL_25}, this implies $(\mu+\chi\sigma)\nabla\varphi\in L^8(\mathbf L^2)$ with bounded norm.
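The last claim can be made explicit. The following computation is only a sketch: it assumes the interpolation inequality $\norm{v}_{L^{\infty}}\leq C\norm{v}_{H^1}^{3/4}\norm{v}_{H^3}^{1/4}$, which is available, for instance, in two space dimensions. Applying it to $\nabla\varphi$ and using $\norm{\nabla\varphi}_{\mathbf H^k}\leq \norm{\varphi}_{H^{k+1}}$, we estimate
\begin{align*}
\int_0^T\norm{\nabla\varphi(t)}_{\mathbf L^{\infty}}^{8}{\mathrm{d}} t
&\leq C\int_0^T\normh{2}{\varphi(t)}^{6}\,\normh{4}{\varphi(t)}^{2}{\mathrm{d}} t
\leq C\,\norm{\varphi}_{L^{\infty}(H^2)}^{6}\,\norm{\varphi}_{L^2(H^4)}^{2}\leq C,
\end{align*}
whence $\norm{(\mu+\chi\sigma)\nabla\varphi}_{L^8(\mathbf L^2)}\leq \norm{\mu+\chi\sigma}_{L^{\infty}(L^2)}\norm{\nabla\varphi}_{L^8(\mathbf L^{\infty})}\leq C$.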
Consequently, with the same arguments as used for \eqref{APRI_SSOL_23}, we deduce that \begin{equation} \label{APRI_SSOL_26}\norm{\mathbf{v}}_{L^8(\mathbf H^2)} + \norm{p}_{L^8(H^1)}\leq C, \end{equation} which completes the proof. \end{document}
Dynamics of a discrete-time stoichiometric optimal foraging model

Ming Chen (School of Science, Dalian Maritime University, 1 Linghai Road, Dalian, Liaoning, 116026, China) and Hao Wang (Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta T6G 2G1, Canada; corresponding author, [email protected])

Received April 2020. Revised June 2020. Published August 2020.

Fund Project: Partially supported by NSFC-11801052, NSFLP-2019-ZD-1056, NSERC RGPIN-2020-03911 and NSERC RGPAS-2020-00090.

In this paper, we discretize and analyze a stoichiometric optimal foraging model in which the grazer's feeding effort depends on the producer's nutrient quality. We systematically compare the dynamical behaviors of the discrete-time model and the continuous-time model to study the robustness of model predictions to time discretization. When the maximum growth rate of the producer is low, both model types admit similar dynamics, including bistability and deterministic extinction of the grazer caused by low nutrient quality of the producer. In particular, the grazer benefits from optimal foraging in a similar way in both the discrete-time and continuous-time models. When the maximum growth rate of the producer is high, the dynamics of the discrete-time model are more complex and include chaos. A notable observation is that under extremely high light intensities, the grazer in the continuous-time model tends to perish due to poor food quality, whereas the grazer in the discrete-time model persists in regular or irregular oscillatory ways. This significant difference indicates the necessity of studying discrete-time models, which naturally incorporate species' generations and are thus more popular in theoretical biology.
Finally, we discuss how the shape of the quality-based feeding function regulates the beneficial or restraining effect of optimal foraging on the grazer population.

Keywords: Stoichiometry, food quality, optimal foraging, discrete-time model.

Mathematics Subject Classification: Primary: 92D25, 92D40; Secondary: 34C05, 34D20.

Citation: Ming Chen, Hao Wang. Dynamics of a discrete-time stoichiometric optimal foraging model. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1): 107-120. doi: 10.3934/dcdsb.2020264
Figure 1. Attractor of discrete-time optimal foraging model (4) in phase plane for different light intensities in two cases. Panels of ($ a_i $) describe the case when the producer's growth rate is low $ b = 1.2 $, $ i = 1,2,3 $. Panels of ($ b_i $) describe the case when the producer's growth rate is high $ b = 3 $, $ i = 1,2,3 $. The red dashed curves are defined by $ F(x,y) = 0 $, which denote the producer nullclines.
The blue dotted curves are defined by $ G(x,y) = 0 $, which denote the grazer nullclines. Solid bullets denote stable equilibria while circles represent unstable equilibria.

Figure 2. Bifurcation diagram of the population densities with respect to $ K $ (light intensity) for the discrete-time model (4) ($ a_i $), i = 1, 2, and the continuous-time model (1) ($ b_i $), i = 1, 2. Shaded regions with $ + $ represent the parameter regions of the optimal foraging behaviors benefiting the grazers. All parameters are provided in Table 1 with $ b = 1.2 $.

Figure 3. Solution curves for systems (4) and (1). ($ a_i $) and ($ b_i $) denote the dynamics of (4) and (1) with increasing $ K $, respectively. Producer and grazer densities ($ \mathrm{mg\; C/L} $) are plotted by dashed and solid lines, respectively. All parameters are provided in Table 1 with $ b = 1.2 $.

Figure 4. Solution curves for systems (4) and (1). ($ a_i $) and ($ b_i $) denote the dynamics of (4) and (1) with increasing $ K $, respectively. Producer and grazer densities ($ \mathrm{mg\; C/L} $) are described by dashed and solid lines, respectively. All parameters are provided in Table 1 with $ b = 3 $.

Figure 5. The bifurcation curves with respect to $ K $ for the discrete-time model ($ a_i $), i = 1, 2, and the continuous-time model ($ b_i $), i = 1, 2. All parameters are provided in Table 1 with $ b = 3 $.

Figure 6. Spectrum of the maximum Lyapunov exponent (MLE) with respect to $ K $ for the discrete-time model. All parameters are provided in Table 1 with $ b = 3 $.

Figure 7. A two-parameter bifurcation diagram for varying light level $ K $ and varying maximal growth rate of producer $ b $ for the discrete-time model (a) and continuous-time model (b). All other parameter values are listed in Table 1 and the initial point is $ x(0) = 0.2\; \mathrm{mg C L}^{-1} $ and $ y(0) = 0.2\; \mathrm{mg C L}^{-1} $. Discrete-time model (4) exhibits periodic oscillations in the blue region and chaotic behaviors in the red region.
Outside these regions, model (4) has stable equilibria.

Figure 8. Bifurcation diagram of the grazer densities with respect to $ K $ (light intensity) for the discrete-time model (4) ($ a_i $), i = 1, 2, and the continuous-time model (1) ($ b_i $), i = 1, 2. Specifically, ($ a_1 $) and ($ b_1 $) denote the case with the low growth rate of producer ($ b = 1.2 $); ($ a_2 $) and ($ b_2 $) denote the case with the high growth rate of producer ($ b = 3 $). Light ($ + $) and dark ($ - $) shaded regions represent the parameter regions of the optimal foraging behaviors benefiting and restraining the grazers, respectively. All parameters are provided in Table 1 except the parameter $ a_1 = 3.5 $.

Table 1. Parameters of model (4) with default values and units

Par. | Description | Value | Unit
$ P_T $ | Total phosphorus | $ 0.02 $ | $ \mathrm{mg P L}^{-1} $
$ K $ | Producer carrying capacity determined by light | $ 0-3.5 $ | $ \mathrm{mg C L}^{-1} $
$ b $ | Maximal growth rate of the producer | $ 1.2 $ or $ 3 $ | $ \mathrm{day}^{-1} $
$ \delta $ | Grazer loss rate | $ 0.12 $ | $ \mathrm{day}^{-1} $
$ \theta $ | Grazer constant $ \mathrm{P:C} $ | $ 0.03 $ | $ \mathrm{mgP/mgC} $
$ q $ | Producer minimal $ \mathrm{P:C} $ | $ 0.0038 $ | $ \mathrm{mgP/mgC} $
$ e $ | Maximal production efficiency in carbon terms for grazer | $ 0.8 $ |
$ \alpha $ | Phosphorus half saturation constant of the producer | $ 0.008 $ | $ \mathrm{mg C L}^{-1} $
$ \mu $ | Water cleared/mg C invested to generate filtering energy | $ 700 $ | $ \mathrm{L/mg C} $
$ \tau $ | Handling time (inverse of max feeding rate) | $ 1.23 $ | $ \mathrm{day} $
$ \xi(Q) $ | Feeding cost function for the optimal foraging model, $ \xi(Q)=\min\{a_0,a_1Q^2+a_2Q+a_3\} $ | $ a_0=0.01 $, $ a_1=5.17 $, $ a_2=-0.31 $, $ a_3=0.007 $ |
\begin{document} \begin{abstract} Let $S$ be a complex smooth projective surface of Kodaira dimension one. We show that the group $\mathrm{Aut}_s(S)$ of symplectic automorphisms acts trivially on the Albanese kernel $\mathrm{CH}_0(S)_\mathrm{alb}$ of the $0$-th Chow group $\mathrm{CH}_0(S)$, unless possibly when the geometric genus and the irregularity satisfy $p_g(S)=q(S)\in\{1,2\}$. In the exceptional cases, the image of the homomorphism $\mathrm{Aut}_s(S)\rightarrow \mathrm{Aut}(\mathrm{CH}_0(S)_\mathrm{alb})$ has order at most $3$. Our arguments in fact treat the group $\mathrm{Aut}_f(S)$ of fibration-preserving automorphisms of elliptic surfaces $f\colon S\rightarrow B$. We prove that, if $\sigma\in\mathrm{Aut}_f(S)$ induces the trivial action on $H^{i,0}(S)$ for $i>0$, then it induces the trivial action on $\mathrm{CH}_0(S)_\mathrm{alb}$. As a by-product we obtain that if $S$ is an elliptic K3 surface, then $\mathrm{Aut}_f(S)\cap \mathrm{Aut}_s(S)$ acts trivially on $\mathrm{CH}_0(S)_\mathrm{alb}$. \end{abstract} \title{On symplectic automorphisms of elliptic surfaces acting on $\mathrm{CH}_0$} \section{Introduction} We work over the complex numbers $\mathbb{C}$ in this paper. For a complex smooth projective variety $X$ of dimension $d$, its $k$-th Chow group $\mathrm{CH}_k(X)$ is defined to be $Z_k(X)/\sim_\mathrm{rat}$, where $Z_k(X)$ is the free abelian group on the $k$-dimensional closed subvarieties of $X$ and $\sim_\mathrm{rat}$ denotes rational equivalence. We have $\mathrm{CH}_d(X)=\mathbb{Z}[X]\cong\mathbb{Z}$ and $\mathrm{CH}_{d-1}(X) = \mathrm{Pic}(X)$. However, $\mathrm{CH}_{k}(X)$ becomes very hard to compute for $0\leq k\leq d-2$. For example, for a smooth projective surface $S$ with $p_g(S)>0$, the degree zero part $\mathrm{CH}_0(S)_{\mathrm{hom}}$ of $\mathrm{CH}_0(S)$ is infinite dimensional \cite{Mum69}.
\footnote{This means that the natural map $\mathrm{Sym}^n(S)\times \mathrm{Sym}^n(S)\to \mathrm{CH}_0(S)_\mathrm{hom}, (A,B)\mapsto [A-B]$ is not surjective for any natural number $n$, where $\mathrm{Sym}^n(S)$ denotes the $n$-th symmetric product of $S$.} The Bloch--Beilinson conjecture predicts the existence of a finite decreasing filtration on each Chow group $\mathrm{CH}_k(X)_\mathbb{Q}$ with rational coefficients whose graded pieces are, in terms of correspondence between smooth projective varieties, controlled by the Hodge decomposition of the cohomology groups; see \cite[Section 11.2.2]{Voi03} for precise statements. The philosophy behind this conjecture is that the topology (Hodge theory) determines the algebraic geometry of cycles. Specifically for the $0$-th Chow group, one defines the kernel of the degree map \[ \mathrm{CH}_0(X)_\mathrm{hom}:= \ker (\mathrm{CH}_0(X)\xrightarrow{\deg} \mathbb{Z}) \] and in turn the kernel of the Albanese map \[ \mathrm{CH}_0(X)_\mathrm{alb}:= \ker (\mathrm{CH}_0(X)_\mathrm{hom}\xrightarrow{\mathrm{alb}} \mathrm{Alb}(X)) \] Then $ \mathrm{CH}_0(X)\supset\mathrm{CH}_0(X)_\mathrm{hom}\supset\mathrm{CH}_0(X)_\mathrm{alb}$ are expected to be the first three terms of the Bloch--Beilinson filtration for $\mathrm{CH}_0(X)$, and if $X=S$ is a surface, then this should be the full filtration. As a consequence, one expects the following \begin{conjecture}\label{conj: sym cor} Let $S$ a smooth projective surface. Let $X$ be a smooth projective variety, and $\Gamma\in \mathrm{CH}^2(X\times S)$ a cycle of codimension $2$. If $[\Gamma]^{\ast}\colon H^{2,0}(S)\to H^{2,0}(X)$ vanishes, then $\Gamma_{\ast}: \mathrm{CH}_0(X)_{\mathrm{alb}}\to \mathrm{CH}_0(S)_{\mathrm{alb}}$ also vanishes. 
\end{conjecture} Taking $X=S$ and $\Gamma=\Gamma_\sigma-\Delta_S$, where $\Gamma_{\sigma}$ is the graph of an automorphism $\sigma\in \mathrm{Aut}(S)$ and $\Delta_S\subset S\times S$ is the diagonal, we obtain the following special and more tractable case of Conjecture~\ref{conj: sym cor}: \begin{conj}\label{conj: sym} Let $S$ be a smooth projective surface. Then any symplectic automorphism acts trivially on $\mathrm{CH}_0(S)_{\mathrm{alb}}$. \end{conj} Here an automorphism $\sigma\in\mathrm{Aut}(S)$ is called \emph{symplectic} if the induced map $\sigma^*: H^{2,0}(S)\rightarrow H^{2,0}(S)$ is the identity. We will use $\mathrm{Aut}_s(S)$ to denote the group of symplectic automorphisms of $S$. Surfaces with $\kappa(S)\leq 1$ and $p_g(S)=0$ have trivial $\mathrm{CH}_0(S)_\mathrm{alb}$ by \cite{BKL76}, so Conjecture~\ref{conj: sym} is automatically true in this case. The Bloch conjecture (\cite[Conjecture~1.8 and Proposition~1.11]{Blo10}), which is again a consequence of Conjecture~\ref{conj: sym cor} (take $\Gamma=\Delta_S$), asserts that surfaces of general type with $p_g(S)=0$ also have trivial $\mathrm{CH}_0(S)_\mathrm{alb}$. This has been verified in some special cases by various authors: surfaces with ``enough automorphisms'' such as Godeaux surfaces, Burniat-Inoue surfaces, Campedelli surfaces, and the like \cite{IM79, Bar85, Che13, BC13, Bau14, BF15}; surfaces with ``finite dimensional Chow motives'' such as surfaces rationally dominated by a product of curves \cite{Kim05}; surfaces with ``nice moduli spaces'' such as Catanese surfaces, Barlow surfaces \cite{Voi14} and some numerical Campedelli surfaces \cite{Lat20}; and so on. Since the complete classification of surfaces of general type with $p_g=0$ is still unknown, and no effective method avoiding such classification results has been established, the Bloch conjecture remains open (\cite{BCP11}).
For surfaces with $p_g(S)>0$, the Albanese kernel $\mathrm{CH}_0(S)_\mathrm{alb}$ is huge (\cite{Mum69}). Nevertheless, Conjecture~\ref{conj: sym} has been confirmed for abelian surfaces \cite{BKL76, Paw19}, as well as for finite-order symplectic automorphisms of K3 surfaces \cite{Voi12, Huy12}. For Kummer K3 surfaces, infinite-order symplectic automorphisms coming from the covering abelian surface are treated in \cite{Paw19}. The main result of this paper is the following. \begin{thm}\label{thm: main1} Conjecture~\ref{conj: sym} holds for surfaces $S$ with Kodaira dimension $\kappa(S)=1$, unless possibly $q(S)=p_g(S)\in\{1,2\}$. In these cases, the image of the homomorphism $\mathrm{Aut}_s(S)\rightarrow \mathrm{Aut}(\mathrm{CH}_0(S)_\mathrm{alb})$ has order at most $3$. \end{thm} Strengthening the condition of Conjecture~\ref{conj: sym} (compare \cite[Conjecture 1.1]{Voi12}), we have \begin{thm}\label{thm: main2} Let $S$ be a smooth projective surface with $\kappa(S)=1$. If an automorphism $\sigma\in \mathrm{Aut}(S)$ induces the trivial action on $H^{i,0}(S)$ for $i>0$, then it induces the trivial action on $\mathrm{CH}_0(S)_\mathrm{alb}$. \end{thm} The results are based on the following elementary observation about the zero cycles of an elliptic surface $f\colon S\rightarrow B$: the Albanese kernel $\mathrm{CH}_0(S)_\mathrm{alb}$ is contained in the so-called \emph{$f$-kernel} \[ \mathrm{CH}_0(S)_f:=\ker(\mathrm{CH}_0(S)\xrightarrow{f_*} \mathrm{CH}_0(B)). \] For any cycle class $\alpha\in \mathrm{CH}_0(S)_f$, one can find a positive integer $d$ and finitely many $\alpha_i\in \mathrm{CH}_0(S)_\mathrm{hom}$ such that \begin{equation}\label{eq: dec} d\alpha = \sum \alpha_i\in \mathrm{CH}_0(S), \end{equation} and the support $\mathrm{supp}(\alpha_i)$ lies on a single fiber for each $i$ (Lemma~\ref{lem: fib cycle}). Let $\mathrm{Aut}_f(S)$ be the subgroup of automorphisms of $S$ preserving the fibration structure $f$; for the precise definition, see Section~2.
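The inclusion $\mathrm{CH}_0(S)_\mathrm{alb}\subset \mathrm{CH}_0(S)_f$ follows from a standard functoriality argument: the fibration $f$ induces a morphism $\phi\colon \mathrm{Alb}(S)\rightarrow \mathrm{Jac}(B)$ compatible with the Albanese maps, so for $\alpha\in \mathrm{CH}_0(S)_\mathrm{alb}$ we have
\[
\mathrm{alb}_B(f_*\alpha)=\phi(\mathrm{alb}_S(\alpha))=0,
\]
and since the Abel--Jacobi map $\mathrm{CH}_0(B)_\mathrm{hom}\rightarrow \mathrm{Jac}(B)$ is an isomorphism on the curve $B$, this forces $f_*\alpha=0$.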
What we are dealing with in this paper is in fact the group $\mathrm{Aut}_f(S)\cap \mathrm{Aut}_s(S)$. Note that, if $\kappa(S) =1$, then there is a unique elliptic fibration structure on $S$ and hence $\mathrm{Aut}_f(S) = \mathrm{Aut}(S)$; it follows that $\mathrm{Aut}_f(S)\cap \mathrm{Aut}_s(S)$ is the whole $\mathrm{Aut}_s(S)$. In any case, we have an exact sequence \[ 1\rightarrow \mathrm{Aut}_B(S) \rightarrow \mathrm{Aut}_f(S) \rightarrow \mathrm{Aut}(B) \] where $\mathrm{Aut}_B(S):=\{\sigma\in \mathrm{Aut}_f(S) \mid f\circ \sigma =f\}$. By the decomposition~\eqref{eq: dec}, it is clear that $\sigma\in \mathrm{Aut}_B(S)$ induces the trivial action on $\alpha\in\mathrm{CH}_0(S)_\mathrm{alb}$ if the restriction $\sigma|_F$ to a general fiber $F$ of $f$ is a translation. The latter property is guaranteed if $p_g(S)>0$; see Lemma~\ref{lem: fib trans}. On the other hand, if $p_g(S) = 0$, then $\mathrm{CH}_0(S)_\mathrm{alb} =0$ by \cite{BKL76} and there is nothing to prove. The conclusion is that $\mathrm{Aut}_B(S)\cap \mathrm{Aut}_s(S)$ acts trivially on $\mathrm{CH}_0(S)_\mathrm{alb}$ (Proposition~\ref{prop: fib action}). Note that $\mathrm{Aut}_B(S)\cap \mathrm{Aut}_s(S)$ also acts trivially on the Jacobian $j\colon J\rightarrow B$. Using this fact, we can reduce the problem to one for finite order symplectic automorphisms of $J$ that fix the distinguished section. This is carried out in Section~\ref{sec: red to J}. Replacing the elliptic fibration $f\colon S\rightarrow B$ with its Jacobian $j\colon J\rightarrow B$ and $\sigma\in \mathrm{Aut}_f(S)\cap \mathrm{Aut}_s(S)$ with its induced automorphism $\sigma_J\in \mathrm{Aut}_j(J)\cap \mathrm{Aut}_s(J)$, we can assume that $S$ has a section, and then work only with those fibration-preserving symplectic automorphisms $\sigma$ of finite order such that $|\sigma|= |\sigma_B|$, where $\sigma_B\in \mathrm{Aut}(B)$ is induced by $\sigma$. 
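For later reference, the reason a fiberwise translation acts trivially on fiber-supported degree-zero cycles is the following standard computation. Let $E$ be a smooth fiber of $f$, endowed with its group law with origin $0$, and suppose $\sigma$ restricts on $E$ to the translation by some $a\in E$. By Abel's theorem, the map $E\rightarrow \mathrm{CH}_0(E)_\mathrm{hom}$, $x\mapsto [x]-[0]$, is a group homomorphism, so for all $p,q\in E$ we get
\[
\sigma_*\big([p]-[q]\big)=[p+a]-[q+a]=\big([p+a]-[0]\big)-\big([q+a]-[0]\big)=\big([p]-[0]\big)-\big([q]-[0]\big)=[p]-[q]
\]
in $\mathrm{CH}_0(E)$. Pushing forward to $S$, the translation thus fixes the classes of degree-zero cycles supported on $E$.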
Here we observe that, in case $p_g(S)>0$, the canonical map $\varphi_{S}\colon S\dashrightarrow \mathbb{P}^{p_g-1}$ factors through $f\colon S\rightarrow B$ by the canonical bundle formula for elliptic fibrations. Since $\sigma$ acts trivially on $H^0(S, K_S)$, $\varphi_{S}$ also factors through the quotient map $\pi\colon B\rightarrow B/\langle\sigma_B\rangle$, where $\sigma_B\in \mathrm{Aut}(B)$ is the automorphism induced by $\sigma$. Therefore, we have a commutative diagram \begin{center} \begin{tikzpicture} \node[name = S] at (0,1) {$S$}; \node[name = B] at (0,0) {$B$}; \node[name = BQ] at (2,0) {$B/\langle \sigma_B\rangle$}; \node[name = P] at (4,0) {$\mathbb{P}^{p_g-1}$}; \draw[->] (S)--node[left]{{\tiny$f$}} (B); \draw[->] (B)--node[above]{{\tiny$\pi$}} (BQ); \draw[->](BQ)--(P); \draw[->](S) to [out=15, in=120] node[above]{{\tiny$\varphi_S$}} (P); \draw[->] (B) to [out=30, in=150]node[above]{{\tiny$\varphi_B$}} (P); \end{tikzpicture} \end{center} where $\varphi_{B}$ is the morphism associated to the linear system $|K_B+L|$, with $L$ a line bundle of degree $\chi(\mathcal{O}_S)$ on $B$. If $p_g(S)\geq 2$, we obtain $|\sigma_B| \leq \deg \varphi_{B}$. If $\chi(\mathcal{O}_S)\geq 3$, then $\deg(K_B+L)=2g(B)-2 +\chi(\mathcal{O}_S) \geq 2g(B)+1$ and thus $\varphi_{B}$ is an embedding. Therefore, $\sigma_B=\mathrm{id}_B$, and it follows that $\sigma$ is also the identity (Theorem~\ref{thm: chi geq 3}). In case $\chi(\mathcal{O}_S)\in \{1,2\}$, we cannot conclude that $\sigma_B=\mathrm{id}_B$, but a similar consideration yields strong restrictions on what $\sigma_B$ can be. In fact, if we impose the additional condition that $\sigma$ acts trivially also on $H^1(S, \mathcal{O}_S)$, then $\sigma_B=\mathrm{id}_B$ holds, which implies Theorem~\ref{thm: main2} in this case.
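Spelled out, the degree count above uses only $\deg K_B=2g(B)-2$ together with the standard fact that a line bundle of degree at least $2g(B)+1$ on a smooth curve of genus $g(B)$ is very ample:
\[
\deg(K_B+L)=2g(B)-2+\chi(\mathcal{O}_S)\geq 2g(B)+1 \quad\text{whenever }\chi(\mathcal{O}_S)\geq 3,
\]
so $\varphi_B$ is an embedding, hence $\deg\varphi_B=1$, and the inequality $|\sigma_B|\leq \deg\varphi_B$ forces $\sigma_B=\mathrm{id}_B$.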
In most cases, we can show that $|\sigma| = |\sigma_B|\in\{1,2,3,4,6\}$; using the method of ``enough automorphisms'' \cite{IM79}, either $\sigma^2$ or $\sigma^3$ has order at most $2$, and thus acts trivially on $\mathrm{CH}_0(S)_{\mathrm{alb}}$; see Lemma~\ref{lem: inv}. In case $\chi(\mathcal{O}_S) = 0$, the relatively minimal model of the surface $S$ over $B$ is an elliptic quasi-bundle and hence is the quotient of a product of two curves. We can thus draw on the finite dimensionality of the Chow motives of such surfaces \cite{Kim05, KMP07}. Our arguments have a by-product concerning (possibly infinite-order) symplectic automorphisms of elliptic K3 surfaces: \begin{thm}[= Corollary~\ref{cor: K3}] Let $f\colon S\rightarrow B$ be an elliptic K3 surface. Then $\mathrm{Aut}_f(S)\cap \mathrm{Aut}_s(S)$ acts trivially on $\mathrm{CH}_0(S)_\mathrm{alb}$. \end{thm} \noindent\textbf{Acknowledgements}. We would like to thank Qizheng Yin, Renjie L\"u, Zhiyuan Li and Xun Yu for their interest in our project and for helpful discussions. This work was supported by the NSFC (No.~11971399 and No.~11771294). \section{Fibration-preserving automorphisms} Let $S$ be a smooth projective surface and $f\colon S\rightarrow B$ a fibration, that is, a morphism onto the smooth projective curve $B$ with connected fibers. We define the following subgroups of $\mathrm{Aut}(S)$: \begin{itemize}[leftmargin=*] \item the subgroup of \emph{fibration-preserving automorphisms} \[ \mathrm{Aut}_f(S):=\{\sigma\in \mathrm{Aut}(S) \mid \exists\, \sigma_B\in \mathrm{Aut}(B) \text{ such that } \sigma_B\circ f = f\circ \sigma\}. \] These automorphisms may permute the fibers of $f$. \item the subgroup of \emph{fiber-preserving automorphisms} \[ \mathrm{Aut}_B(S):=\{\sigma\in \mathrm{Aut}_f(S) \mid f = f\circ \sigma\}. \] These automorphisms preserve each fiber of $f$.
\end{itemize} There is an obvious exact sequence of groups: \begin{equation}\label{eq: fib} 1\rightarrow \mathrm{Aut}_B(S) \rightarrow \mathrm{Aut}_f(S) \xrightarrow{r} \mathrm{Aut}(B) \end{equation} where $r$ sends an automorphism $\sigma\in\mathrm{Aut}_f(S)$ to $\sigma_B\in \mathrm{Aut}(B)$ such that $\sigma_B\circ f = f\circ \sigma$. \begin{lem}\label{lem: finite} Let $S$ be a smooth projective surface and $f\colon S\rightarrow B$ a fibration. Suppose one of the following conditions holds: \begin{enumerate}[leftmargin=*] \item[(1)] $f$ is not isotrivial, that is, not all smooth fibers of $f$ are isomorphic to each other; \item[(2)] $g(B)\geq 2$; \item[(3)] $f$ has at least three singular fibers (resp.~one singular fiber) if $g(B)=0$ (resp.~$g(B)=1$). \end{enumerate} Then the image $\mathrm{Im}(r)$ of the homomorphism $r$ in \eqref{eq: fib} is finite. \end{lem} \begin{proof} (1) If $f$ is not isotrivial, then the rational map $\lambda\colon B\dashrightarrow M_g$ to the moduli space of curves of genus $g$ is generically finite, where $g\geq 1$ is the genus of the general fibers of $f$. On the other hand, for any $b\in B$, the fibers over the points in $\mathrm{Orbit}(b):=\{\sigma_B(b) \mid \sigma_B\in \mathrm{Im}(r)\}$ are isomorphic, so the points of $\mathrm{Orbit}(b)$ are all mapped to the same point by $\lambda$. Thus $\mathrm{Orbit}(b)$ is finite for general $b$, and it follows that $\mathrm{Im}(r)$ is finite. (2) If $g(B)\geq 2$ then $\mathrm{Aut}(B)$ is finite, so $\mathrm{Im}(r)\subset\mathrm{Aut}(B)$ is automatically finite. (3) Let $\Sigma=\{b\in B\mid f^*b \text{ is singular}\}$, which is a finite set. Then there is a natural homomorphism $\mathrm{Im}(r)\rightarrow \mathrm{Perm}(\Sigma)$ into the (finite) permutation group of $\Sigma$. Its kernel fixes $\Sigma$ pointwise, and is hence finite if either $g(B)=0$ and $\#\Sigma>2$ or $g(B)=1$ and $\#\Sigma>0$: an automorphism of $\mathbb{P}^1$ fixing three points is the identity, and the stabilizer of a point in the automorphism group of an elliptic curve is finite. The proof of the lemma is completed. \end{proof} We recall some facts about isotrivial fibrations (\cite[Section~1]{Ser96}).
Let $S$ be a smooth projective surface and $f\colon S\rightarrow B$ an isotrivial fibration whose smooth fibers are isomorphic to a fixed curve $F$ with $g(F)\geq 1$. Birationally, $f$ becomes a trivial fibration after a base change. More precisely, there exist a smooth projective curve $\tilde B$, a finite group $G$ acting on $\tilde B$ and $F$, such that $B\cong \tilde B/G$, and the following diagram is commutative: \begin{equation}\label{diag: isotrivial} \begin{tikzpicture}[baseline=(current bounding box.center)] \node[name = S] at (0,1) {$S$}; \node[name = B] at (0,0) {$B$}; \node[name = SQ] at (2,1) {$(\tilde B\times F)/G$}; \node[name = BQ] at (2,0) {$\tilde B/G$}; \draw[->, dashed](S) --(SQ); \draw[->] (S) -- node[left]{{\tiny $f$}} (B); \draw[->] (B) -- node[above]{$\tiny\cong$} (BQ); \draw[->] (SQ) -- node[right]{{\tiny $\pi$}} (BQ); \end{tikzpicture} \end{equation} where the horizontal dashed arrow is a birational map, $G$ acts diagonally on $\tilde B\times F$, and $\pi$ is induced by the projection $\tilde B\times F\rightarrow \tilde B$. It is easy to check that, if $b\in B$ is a branch point of the quotient map $\tilde B\rightarrow \tilde B/G\cong B$, then the fiber $f^*b$ of $f$ over $b$ is singular. \begin{lem}\label{lem: infinite} Let $S$ be a smooth projective surface with Kodaira dimension $\kappa(S)\geq 0$, and $f\colon S\rightarrow B$ a fibration. If the image $\mathrm{Im}(r)$ of the homomorphism in \eqref{eq: fib} is an infinite group, then $g(B)=1$ and $f$ is a fiber bundle. \end{lem} \begin{proof} Suppose that $\mathrm{Im}(r)$ is infinite. By Lemma~\ref{lem: finite}, $f$ is isotrivial and $g(B)\leq 1$. Moreover, if $g(B)=1$ then $f$ has no singular fibers, so it is necessarily a fiber bundle. Now suppose that $g(B)=0$. We will derive a contradiction by showing that $\mathrm{Im}(r)$ is finite in this case, and thus complete the proof. Since $f$ is isotrivial, we have a commutative diagram as in \eqref{diag: isotrivial}.
Since $\kappa(\tilde B\times F) \geq \kappa(S)\geq 0$, one has $g(\tilde B)\geq 1$. By the Riemann--Hurwitz formula, the quotient map $\tilde B\rightarrow \tilde B/G\cong B$ has at least three branch points. The fibers of $f$ over these branch points are necessarily singular, so $\mathrm{Im}(r)$ is finite by Lemma~\ref{lem: finite}(3). \end{proof} \begin{cor}\label{cor: finite} Let $S$ be a smooth projective surface with Kodaira dimension $\kappa(S)\in\{0,1\}$, and $f\colon S\rightarrow B$ an elliptic fibration. Then the image $\mathrm{Im}(r)$ of the homomorphism in \eqref{eq: fib} is finite, except possibly when $S$ is an abelian surface or a bi-elliptic surface. \end{cor} \begin{proof} If $\mathrm{Im}(r)$ is infinite then $f$ is an elliptic bundle over an elliptic curve by Lemma~\ref{lem: infinite}. According to \cite[Section V.5]{BHPV04}, this can only happen when $S$ is an abelian surface or a bi-elliptic surface. \end{proof} Restricting \eqref{eq: fib} to the group of symplectic automorphisms, we obtain another exact sequence \begin{equation}\label{eq: fib sym} 1\rightarrow \mathrm{Aut}_B(S)\cap \mathrm{Aut}_s(S) \rightarrow \mathrm{Aut}_f(S)\cap \mathrm{Aut}_s(S) \rightarrow \mathrm{Aut}(B) \end{equation} Suppose that $S$ is a surface with Kodaira dimension $\kappa(S)=1$. Then the Iitaka fibration $f\colon S\rightarrow B$, defined by the pluri-canonical systems, is the unique elliptic fibration on $S$. Therefore, any automorphism of $S$ preserves $f$, that is, $\mathrm{Aut}(S)=\mathrm{Aut}_f(S)$.
In this case, the exact sequences \eqref{eq: fib} and \eqref{eq: fib sym} can be rewritten as \begin{equation}\label{eq: fib k1} 1\rightarrow \mathrm{Aut}_B(S) \rightarrow \mathrm{Aut}(S) \rightarrow \mathrm{Aut}(B) \end{equation} and \begin{equation}\label{eq: fib sym k1} 1\rightarrow \mathrm{Aut}_B(S)\cap \mathrm{Aut}_s(S) \rightarrow \mathrm{Aut}_s(S) \rightarrow \mathrm{Aut}(B) \end{equation} The image of $\mathrm{Aut}(S) \rightarrow \mathrm{Aut}(B)$ is finite by Corollary~\ref{cor: finite} (see also \cite[ Proposition~1.2]{PS20}). The following fact about elliptic fibrations will be used later on. \begin{lem}\label{lem: q} Let $f\colon S\rightarrow B$ be a relatively minimal elliptic fibration. Then one has \[ q(S)\leq g(B)+1, \] and equality holds if and only if there exist a smooth projective curve $\tilde B$, an elliptic curve $F$, and a finite group $G$ acting on $\tilde B$ and $F$ such that $S\cong (\tilde B\times F)/G$, $B\cong \tilde B/G$, the action of $G$ on $F$ is by translations, and the diagram \begin{equation}\label{eq: quasi bundle} \begin{tikzpicture}[baseline=(current bounding box.center)] \node[name = S] at (0,1) {$S$}; \node[name = B] at (0,0) {$B$}; \node[name = SQ] at (2,1) {$(\tilde B\times F)/G$}; \node[name = BQ] at (2,0) {$\tilde B/G$}; \draw[->](S) --node[above]{{\tiny $\cong$}}(SQ); \draw[->] (S) -- node[left]{{\tiny $f$}} (B); \draw[->] (B) -- node[above]{$\tiny\cong$} (BQ); \draw[->] (SQ) -- (BQ); \end{tikzpicture} \end{equation} commutes, where the right vertical arrow is the natural projection. \end{lem} \begin{proof} By \cite[Lemma~1.6]{Ser91} (see also \cite[Lemme]{Bea82}), one has the inequality $q(S)\leq g(B)+1$. Moreover, in the equality case, one has $\chi(\mathcal{O}_S)=0$ and thus $f$ is an elliptic quasi-bundle, that is, the possible singular fibers of $f$ are multiples of smooth elliptic curves \cite[Lemma~1.5]{Ser91}. Then the existence of the commutative diagram \eqref{eq: quasi bundle} follows (\cite{Ser96}). 
We remark that the action of $G$ on $F$ is necessarily by translations, since $g(B) +1= q(S) = g(B) + g(F/G)$ and hence $g(F/G)=1=g(F)$. \end{proof} Now we state the main result of this section. \begin{prop}\label{prop: fib action} Let $f\colon S\rightarrow B$ be an elliptic fibration. Then the group $\mathrm{Aut}_B(S)\cap \mathrm{Aut}_s(S)$ of fiber-preserving symplectic automorphisms acts trivially on $\mathrm{CH}_0(S)_\mathrm{alb}$. \end{prop} We need some preparations before giving the proof of Proposition~\ref{prop: fib action} at the end of this section. First, define \[ \mathrm{CH}_0(S)_f:=\ker (\mathrm{CH}_0(S)\xrightarrow{f_*} \mathrm{CH}_0(B)) \] and call it the \emph{$f$-kernel} of $\mathrm{CH}_0(S)$. The following elementary observation about $\mathrm{CH}_0(S)_f$ is the basis of further arguments. \begin{lem}[cf.~\cite{BKL76}]\label{lem: fib cycle} For any $\alpha \in \mathrm{CH}_0(S)_f$, there is a positive integer $d$, such that \[ d\alpha = \sum_i \alpha_i \in \mathrm{CH}_0(S) \] where $\deg \alpha_i=0$ and $\mathrm{supp}(\alpha_i)$ is contained in a single smooth fiber of $f$ for each $i$. \end{lem} \begin{proof} By \cite[Fact~3.3]{Voi12}, we can assume that $\mathrm{supp}(\alpha)$ is contained in some union of smooth fibers of $f$. Take an ample smooth curve $C\subset S$, and denote by $d$ the degree of $f|_C\colon C\rightarrow B$. Write $\alpha = \sum_i n_i [p_i]$. Since $\alpha\in \ker(f_*)$, we have \[ f_*\alpha = \sum_i n_i [f(p_i)] = 0\in \mathrm{CH}_0(B). \] It follows that \[ \sum_i n_i F_{p_i} = f^*\left(\sum_i n_i [f(p_i)]\right) = 0 \in \mathrm{Pic}(S), \] and hence \[ \left(\sum_i n_i F_{p_i}\right)\cdot C = 0 \in \mathrm{CH}_0(S), \] where $F_{p_i}$ denotes the fiber containing $p_i$.
Now we can write \begin{align*} d\alpha & = \sum_i n_id[p_i] - \left(\sum_i n_i F_{p_i}\right)\cdot C \\ &= \sum_i n_i (d[p_i] - [F_{p_i}\cdot C]) \end{align*} Taking $\alpha_i = n_i (d[p_i] - [F_{p_i}\cdot C])$, we have $\deg \alpha_i=0$ and $\mathrm{supp}(\alpha_i)\subset F_{p_i}$. The proof of the lemma is completed. \end{proof} By the universal property of Albanese maps, we have a commutative diagram \begin{equation}\label{eq: fib alb} \begin{tikzpicture}[baseline=(current bounding box.center)] \node[name = S] at (0,1.5) {$\mathrm{CH}_0(S)_\mathrm{hom}$}; \node[name = AS] at (0,0) {$\mathrm{Alb}(S)$}; \node[name = B] at (3,1.5) {$\mathrm{CH}_0(B)_\mathrm{hom}$}; \node[name = AB] at (3,0) {$\mathrm{Alb}(B)$}; \draw[->](S) --node[above]{{\tiny $f_*$}}(B); \draw[->] (S) -- node[left]{{\tiny $\mathrm{alb}_S$}} (AS); \draw[->] (B) -- node[right]{{\tiny $\mathrm{alb}_B$}} (AB); \draw[->] (AS) -- (AB); \end{tikzpicture} \end{equation} By the Abel--Jacobi theorem, $\mathrm{alb}_B\colon \mathrm{CH}_0(B)_\mathrm{hom} \rightarrow \mathrm{Alb}(B)$ is an isomorphism. It follows from \eqref{eq: fib alb} that \[ \mathrm{CH}_0(S)_\mathrm{alb} = \ker(\mathrm{alb}_S) \subset \ker(f_*) = \mathrm{CH}_0(S)_f, \] and $\mathrm{CH}_0(S)_\mathrm{alb} = \mathrm{CH}_0(S)_f$ if and only if the induced map $\mathrm{Alb}(S)\rightarrow \mathrm{Alb}(B)$ is an isomorphism. We recall how the translations of a smooth elliptic fiber $F$ of $f$ act on its cycle classes and holomorphic one-forms. The universal cover of the (complex) elliptic curve $F$ is $\mathbb{C}$, and $F\cong \mathbb{C}/\Gamma$ for some lattice $\Gamma\subset\mathbb{C}$. Any $\bar c\in F$ determines an automorphism $\tau_{\bar c}\colon \bar z\mapsto \bar z+\bar c$ of $F$, called the \emph{translation} by $\bar c$; it is descended from the usual translation $\tau_c\colon z\mapsto z+c$ on $\mathbb{C}$. Note that the translation $\tau_{\bar c}$ induces the trivial action on $H^0(F, K_F)$. 
In fact, a basis element $\xi$ of the one-dimensional vector space $H^0(F, K_F)$ is descended from the one-form $dz$ on the universal cover $\mathbb{C}$. Since $\tau_c^*dz = d(z+c)=dz$ on $\mathbb{C}$, we also have $\tau_{\bar c}^*\xi = \xi$ on $F$. Fixing a point $\bar e\in F$ as the origin, one has the identifications \[ F\cong \mathrm{Pic}^0(F) \cong \mathrm{CH}_0(F)_{\hom} \] by sending $\bar z\in F$ to $\mathcal{O}_F([\bar z] -[\bar e])\in \mathrm{Pic}^0(F)$ and to $[\bar z] -[\bar e] \in \mathrm{CH}_0(F)_{\hom}$ respectively. In $\mathrm{CH}_0(F)_{\hom}$, we have by the Abel--Jacobi theorem \[ \tau_{\bar c*}([\bar z] -[\bar e]) = [\bar z + \bar c] -[\bar e + \bar c] = [\bar z] -[\bar e], \] so $\tau_{\bar c}$ induces the trivial action on $\mathrm{CH}_0(F)_{\hom}$. \begin{lem}\label{lem: trans id f kernel} Let $f\colon S\rightarrow B$ be an elliptic fibration. Then for any $\sigma\in\mathrm{Aut}_B(S)$ such that its restriction $\sigma_F$ to a general fiber $F$ is a translation, the induced homomorphism $\sigma_*\colon \mathrm{CH}_0(S)_{f,\mathbb{Q}} \rightarrow \mathrm{CH}_0(S)_{f,\mathbb{Q}}$ is the identity. \end{lem} \begin{proof} Take any $\alpha\in \mathrm{CH}_0(S)_{f}$. By Lemma~\ref{lem: fib cycle}, we can write $d\alpha = \sum \alpha_i \in \mathrm{CH}_0(S)$, where $d$ is a positive integer, and for each $i$, $\deg \alpha_i=0$ and $\mathrm{supp}(\alpha_i)$ is contained in a single smooth fiber, say $F_{i}$. Since $\sigma_{F_i}$ is a translation of $F_i$ and $\deg \alpha_i=0$, we have $\sigma_{F_i*}(\alpha_i) = \alpha_i$, viewed as elements in $\mathrm{CH}_0(F_i)$. Pushing the equality to $S$ by the inclusion map $F_i\hookrightarrow S$, we obtain $\sigma_*(\alpha_i) =\alpha_i\in \mathrm{CH}_0(S)$. Therefore, we have \[ \sigma_*(d\alpha) = \sum_i \sigma_*(\alpha_i) = \sum_i \alpha_i = d\alpha. \] \end{proof} \begin{cor}\label{cor: trans id alb kernel} Let $f\colon S\rightarrow B$ be an elliptic fibration.
Then for any $\sigma\in\mathrm{Aut}_B(S)$ that induces a translation on a general fiber $F$, its action on the Albanese kernel $\sigma_*\colon \mathrm{CH}_0(S)_\mathrm{alb} \rightarrow \mathrm{CH}_0(S)_\mathrm{alb}$ is the identity. \end{cor} \begin{proof} By Lemma~\ref{lem: trans id f kernel}, $\sigma_*\colon \mathrm{CH}_0(S)_{\mathrm{alb},\mathbb{Q}} \rightarrow \mathrm{CH}_0(S)_{\mathrm{alb}, \mathbb{Q}}$ is the identity. Since $\mathrm{CH}_0(S)_\mathrm{alb}$ is torsion free by \cite{Roj80}, we infer that $\sigma_*$ is the identity on $\mathrm{CH}_0(S)_\mathrm{alb}$. \end{proof} Let $f\colon S\rightarrow B$ be a relatively minimal elliptic fibration. In order to describe the fiber-preserving symplectic automorphisms of $S$, we take a closer look at the canonical bundle formula (see \cite[Chapter V, Theorem 12.1]{BHPV04} and its proof, and ultimately \cite[Theorem~12.1]{Kod63}) \begin{equation}\label{eq: cbf} \omega_S = f^*(f_*\omega_{S/B} \otimes \omega_B) \otimes \mathcal{O}_S(\sum_i (m_i -1)F_i) \end{equation} where $\omega_S=\mathcal{O}_S(K_S)$ and $\omega_B=\mathcal{O}_B(K_B)$ are the canonical sheaves of $S$ and $B$ respectively, $\omega_{S/B} = \omega_S\otimes f^*\omega_B^{-1}$ is the relative canonical sheaf of $f$, and the $m_iF_i$'s are the multiple fibers of $f$. There is a natural inclusion of invertible sheaves \[ f^*(f_*\omega_{S/B} \otimes \omega_B) \hookrightarrow \omega_S \] which is an isomorphism over $B^0:=\{b\in B \mid f^*b \text{ is smooth}\}$ and which induces an isomorphism of global sections \begin{equation}\label{eq: K iso} f^*\colon H^0(B, f_*\omega_{S/B} \otimes \omega_B) \xrightarrow{\cong} H^0(S, \omega_S).
\end{equation} Analytically locally around each $b\in B^0$, there is a small coordinate disk $b\in \Delta\cong \{t\in \mathbb{C}\mid |t|<\epsilon\}$ such that the sections of $f_*\omega_{S/B} \otimes \omega_B$ have the form \begin{equation*} \xi_t \otimes h(t) dt, \end{equation*} where $h(t)$ is a holomorphic function on $\Delta$ and $\xi_t$ is a basis of $H^0(F_t, K_{F_t})$, varying holomorphically in $t\in \Delta$. Pulling it back to $S$, we obtain a description, which is local in $B$, of the global sections $\omega\in H^0(S, \omega_S)$: \begin{equation}\label{eq: local 2-form} \omega = h(t) \xi_t \wedge dt. \end{equation} This is used in the proof of the next lemma. \begin{lem}\label{lem: fib trans} Let $f\colon S\rightarrow B$ be an elliptic fibration with $p_g(S)>0$. Then a fiber-preserving automorphism $\sigma\in\mathrm{Aut}_B(S)$ is symplectic if and only if it induces translations on the smooth fibers. \end{lem} \begin{proof} Let $\sigma\in\mathrm{Aut}_B(S)$ be a nontrivial fiber-preserving automorphism and $\sigma_{F}:=\sigma|_{F}$ its restriction to a smooth fiber $F$ of $f$. (i) Suppose that $\sigma\in\mathrm{Aut}_B(S)$ is symplectic. We want to show that $\sigma|_{F}$ is a translation, that is, it acts freely on $F$. Suppose on the contrary that $\sigma_{F}$ has a fixed point $p$. Then $\sigma_{F}$ is of finite order, and $F/\langle\sigma_{F}\rangle$ has genus 0 by the Riemann--Hurwitz formula. It follows that $\sigma$ is of finite order, and a resolution $X$ of the quotient surface $S/\langle\sigma\rangle$ is a $\mathbb{P}^1$-fibration over $B$. But then \[ H^0(S, K_S) = H^0(S, K_S)^\sigma \cong H^0(X, K_X) =0. \] This contradicts the assumption that $p_g(S)>0$. (ii) Now suppose that $\sigma_{F}\in \mathrm{Aut}(F)$ is a translation. Then $\sigma_F^*\xi = \xi$ for $\xi\in H^0(F, K_F)$; see the discussion above Lemma~\ref{lem: trans id f kernel}.
Taking a local coordinate $t$ around $f(F)$ in $B$, we can write locally $\omega = h(t)\xi_t\wedge dt$ as in \eqref{eq: local 2-form}, where $\xi_t\in H^0(F_t, K_{F_t})$ is a basis element, so \[ \sigma^*\omega = \mathrm{id}_B^*h(t)\sigma_{F_t}^*\xi_t \wedge\mathrm{id}_B^*dt = h(t)\xi_t\wedge dt = \omega \] It follows that $\sigma$ is a symplectic automorphism of $S$. \end{proof} \begin{lem}\label{lem: trans id 1-forms} Let $f\colon S\rightarrow B$ be an elliptic surface. Suppose that a fiber-preserving automorphism $\sigma\in\mathrm{Aut}_B(S)$ induces translations on smooth fibers. Then it induces trivial action on $H^{1,0}(S) = H^0(S, \Omega_S^1)$. \end{lem} \begin{proof} Since $\sigma\in\mathrm{Aut}_B(S)$ descends to a fiber-preserving automorphism of the relatively minimal model of $S$, we can assume without loss of generality that $f$ is already relatively minimal. By Lemma~\ref{lem: q}, we have $q(S)\leq g(B)+1$. If $q(S)=g(B)$, then $H^0(S, \Omega_S^1) = f^*H^0(B, K_B)$. Since $\sigma$ induces trivial action on $B$, it induces trivial action on $H^0(B, K_B)$ and hence also on $H^0(S, \Omega^1_S)$. If $q(S) = g(B) +1$, then $S\cong (\tilde B\times F)/G$ as in \eqref{eq: quasi bundle}. Since $\sigma$ induces $\mathrm{id}_B$ on $B$ and a translation on the fibers $F$, one sees that it induces trivial action on $H^0(S, \Omega_S^1)$. \end{proof} Finally, we give the proof of Proposition~\ref{prop: fib action}. \begin{proof}[Proof of Proposition~\ref{prop: fib action}] If $p_g(S)=0$ then $\mathrm{CH}_0(S)_\mathrm{alb} = 0$ by \cite{BKL76} and there is nothing to prove. In case $p_g(S)>0$, it suffices to combine Lemma~\ref{lem: fib trans} and Corollary~\ref{cor: trans id alb kernel}. \end{proof} \section{Elliptic surfaces with $\chi(\mathcal{O}_S)\geq 3$} In this section, we prove the following theorem. \begin{thm}\label{thm: chi geq 3} Let $S$ be a smooth projective surface with $\kappa(S)=1$ and $\chi(\mathcal{O}_S)\geq 3$. 
Then $\mathrm{Aut}_s(S)$ acts trivially on $\mathrm{CH}_0(S)_\mathrm{alb}$. \end{thm} \begin{proof} Since $\kappa(S)=1$, there is a unique elliptic fibration $f\colon S\rightarrow B$, and any automorphism of $S$ preserves the fibration $f$. By Proposition~\ref{prop: fib action}, it suffices to show that $\mathrm{Aut}_s(S)=\mathrm{Aut}_B(S)\cap \mathrm{Aut}_s(S)$, that is, any symplectic automorphism of $S$ preserves the fibers of $f$. By the canonical bundle formula \eqref{eq: cbf}, we have \[ |K_S| = f^* |K_B+L|+ \sum_i (m_i -1)F_i, \] where $L=f_*\omega_{S/B}$ is a line bundle of degree $\chi(\mathcal{O}_S)$ on $B$. Thus the canonical map $\varphi_S$ of $S$, induced by the linear system $|K_S|$, factors as \[ \varphi_S\colon S \xrightarrow{f} B \xrightarrow{\varphi_{B}} \mathbb{P}^{p_g-1} , \] where $\varphi_{B}$ is the map defined by the linear system $|K_B+L|$. Now, since $\chi(\mathcal{O}_S)\geq 3$, we have \[ \deg(K_B+L) = 2g(B) -2 + \deg L = 2g(B) -2 +\chi(\mathcal{O}_S) \geq 2g(B)+1. \] It follows that $K_B+L$ is very ample and hence $\varphi_{B}$ is an embedding. A symplectic automorphism $\sigma$ of $S$ induces an automorphism $\sigma_B\in \mathrm{Aut}(B)$ and, since it acts trivially on $H^0(S, K_S)$, the identity on $\mathbb{P}^{p_g-1}$; these actions are compatible with the morphism $\varphi_{B}$. Since $\varphi_{B}$ is an embedding, it can only happen that $\sigma_B=\mathrm{id}_B$. In other words, $\sigma$ preserves each fiber of $f$, which is what we wanted to prove. \end{proof} \section{Reduction to the Jacobian fibration}\label{sec: red to J} Given an elliptic fibration $f\colon S\rightarrow B$, a natural idea is to reduce the problem at hand to the Jacobian fibration $j\colon J\rightarrow B$. The following construction has been used in \cite{BKL76} for proving the vanishing of $\mathrm{CH}_0(S)_\mathrm{alb}$ for surfaces with $p_g(S)=0$ and $\kappa(S)\leq 1$. We will apply it to deal with the fibration-preserving automorphisms of $S$.
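Before giving the construction, it may help to recall the fiberwise description of the Jacobian fibration that is used below; over the locus of smooth fibers this is the standard one.

```latex
% Over a point b \in B whose fiber F_b = f^*b is smooth, the fiber of the
% Jacobian fibration j \colon J \rightarrow B is the elliptic curve
\[
j^*b \;=\; \mathrm{Pic}^0(F_b),
\]
% with distinguished origin o_b = [\mathcal{O}_{F_b}]; in particular, j
% admits a section (the o-section) sending b to o_b.
```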
For any irreducible curve $C\subset S$, horizontal with respect to $f$, one can define a rational dominant map $\phi_C\colon S\dashrightarrow J$ over $B$ as follows: to a point $p$ on a smooth fiber $F_b$ over $b\in B$, we associate \[ \phi_C(p):=d[p] - C|_{F_b} \in j^*b=\mathrm{Pic}^0(F_b), \] where $d$ is the degree of the finite morphism $f|_C\colon C\rightarrow B$. On a smooth fiber, $\phi_C$ is the composition of the multiplication-by-$d$ map of the elliptic curve with a translation, so $\deg\phi_C=d^2$. \begin{lem}[cf.~{\cite[page 138]{BKL76}} and {\cite[Proof of Theorem~11.10]{Voi03}}]\label{lem: iso CH} For a smooth ample curve $C\subset S$, the induced homomorphism \[ \phi_{C*}\colon \mathrm{CH}_0(S)_{f,\mathbb{Q}} \rightarrow \mathrm{CH}_0(J)_{j, \mathbb{Q}} \] is an isomorphism, which restricts to an isomorphism between the Albanese kernels $\phi_{C*}\colon \mathrm{CH}_0(S)_{\mathrm{alb},\mathbb{Q}} \rightarrow \mathrm{CH}_0(J)_{\mathrm{alb},\mathbb{Q}}$. \end{lem} \begin{proof} We define a homomorphism $\lambda\colon \mathrm{CH}_0(J)_{j,\mathbb{Q}}\rightarrow \mathrm{CH}_0(S)_{f,\mathbb{Q}}$ as follows: for any $\gamma\in \mathrm{CH}_0(J)_{j, \mathbb{Q}}$, we can assume that $\mathrm{supp}(\gamma)$ is on a smooth fiber $j^*b$ of $j$ and $\gamma = [\gamma']-[o_b]$, where $o_b$ denotes the origin of $j^*b=\mathrm{Pic}^0(f^*b)$. Then we set \[ \lambda(\gamma) =\frac{1}{d^2} \left(([p_1']+\cdots+[p_{d}']) - ([p_1]+\cdots+[p_{d}])\right) \] where $[p_1]+\cdots+[p_{d}] = C|_{f^*b}$ and, for each $1\leq i\leq d$, $p_i'$ is the unique point of $f^*b$ such that $[p_i']-[p_i] = \gamma'\in \mathrm{Pic}^0(f^*b)$. Then it is straightforward to check that $\lambda$ is the inverse of $\phi_{C*}$. Note that the irregularities of $S$ and $J$ are the same by Lemma~\ref{lem: inv J} below, and thus $\phi_{C}$ induces an isomorphism $\phi_{C*}\colon \mathrm{Alb}(S)_\mathbb{Q} \cong \mathrm{Alb}(J)_\mathbb{Q}$.
In view of the following commutative diagram, where the rows are exact, we infer that the left vertical map $\phi_{C*}\colon \mathrm{CH}_0(S)_{\mathrm{alb},\mathbb{Q}}\rightarrow \mathrm{CH}_0(J)_{\mathrm{alb},\mathbb{Q}}$ is an isomorphism: \begin{equation*} \begin{tikzpicture}[baseline=(current bounding box.center)] \node[name = O1] at (-2, 1.5) {$0$}; \node[name = O2] at (-2, 0) {$0$}; \node[name = SA] at (0,1.5) {$\mathrm{CH}_0(S)_{\mathrm{alb},\mathbb{Q}}$}; \node[name = Sf] at (3,1.5) {$\mathrm{CH}_0(S)_{f,\mathbb{Q}}$}; \node[name = AS] at (6,1.5) {$\mathrm{Alb}(S)_\mathbb{Q}$}; \node[name = JA] at (0,0) {$\mathrm{CH}_0(J)_{\mathrm{alb},\mathbb{Q}}$}; \node[name = Jj] at (3,0) {$\mathrm{CH}_0(J)_{j,\mathbb{Q}} $}; \node[name = AJ] at (6,0) {$ \mathrm{Alb}(J)_\mathbb{Q}$}; \draw[->](O1) -- (SA); \draw[->] (SA) --(Sf); \draw[->] (Sf) --(AS); \draw[->](O2) -- (JA); \draw[->] (JA) -- (Jj); \draw[->] (Jj) -- (AJ); \draw[->] (SA) -- node[right]{\tiny $\phi_{C*}$} (JA); \draw[->] (Sf) -- node[right]{\tiny $\phi_{C*}$} node[left]{\tiny $\cong$} (Jj); \draw[->] (AS) -- node[right]{\tiny $\phi_{C*}$}node[left]{\tiny $\cong$} (AJ); \end{tikzpicture} \end{equation*} \end{proof} Many of the numerical invariants of an elliptic fibration and its Jacobian fibration turn out to be the same. We give a proof of this fact for lack of an adequate reference. \begin{lem}[cf.~{\cite[Proposition~5.3.6 on page 308, Corollaries 5.3.4 and 5.3.5 on page 310]{CD89}}]\label{lem: inv J} The following equalities hold: \[ \chi(\mathcal{O}_S) = \chi(\mathcal{O}_J),\, p_g(S) = p_g(J),\, q(S) = q(J). \] If $f$ is relatively minimal, then for each $b\in B$, $e(f^*b) = e(j^*b)$, where $e(\cdot)$ denotes the Euler characteristic of a topological space. \end{lem} \begin{proof} Replacing $f\colon S\rightarrow B$ with the relatively minimal elliptic fibration does not change the invariants $\chi(\mathcal{O}_S)$, $p_g(S)$ and $q(S)$, nor the Jacobian fibration.
Thus we can assume that $f$ is relatively minimal. Then we have $K_S^2=0$, and hence by the Noether formula \begin{equation}\label{eq: noether S} 12\chi(\mathcal{O}_S) = e(S). \end{equation} On the other hand, $j\colon J\rightarrow B$ is a relatively minimal fibration such that for each $b\in B$, one has $e(f^*b) = e(j^*b)$. It follows that \begin{equation}\label{eq: noether J} e(J) = e(S) \text{ and }12\chi(\mathcal{O}_J) = e(J). \end{equation} Combining \eqref{eq: noether S} and \eqref{eq: noether J}, we obtain $\chi(\mathcal{O}_S) = \chi(\mathcal{O}_J)$. Since $\chi(\mathcal{O}_J) = 1-q(J) + p_g(J)$, it remains to show $q(S) = q(J)$. First, we have the easy inequalities \[ g(B) \leq q(J)\leq q(S) \leq g(B)+1, \] where the second inequality holds because of the existence of dominant maps such as $\phi_C$ from $S$ to $J$, and the last inequality is given by Lemma~\ref{lem: q}. Thus, if $q(S) = g(B)$, then $q(J)= q(S) = g(B)$. Suppose now $q(S) = g(B)+1$. Then $S\cong (\tilde B\times F)/G$ as in \eqref{eq: quasi bundle}, and it is straightforward to check that $J= B\times F$. Therefore, \[ q(J) = g(B)+g(F) = g(B)+1= q(S). \] \end{proof} The induced maps $\phi_{C*}$ on zero-cycles and holomorphic forms actually do not depend on the choice of the curve $C$, as long as the degree of $C$ over $B$ is fixed; this is the content of the following lemma. \begin{lem}\label{lem: CC' same} Let $f\colon S\rightarrow B$ be an elliptic surface, and let $C$ and $C'$ be two smooth ample curves on $S$. Suppose that $\deg f|_{C'} = \deg f|_C$. Then the rational maps $\phi_{C}$ and $\phi_{C'}$ from $S$ to $J$ induce the same maps between the $0$-th Chow groups and the spaces of holomorphic forms, that is, \begin{enumerate}[leftmargin=*] \item $\phi_{C*} = \phi_{C'*}\colon \mathrm{CH}_0(S)_{f,\mathbb{Q}} \rightarrow \mathrm{CH}_0(J)_{j,\mathbb{Q}}$, and \item $\phi_C^* = \phi_{C'}^*\colon H^{i,0}(J) \rightarrow H^{i,0}(S)$ for any $i$. \end{enumerate} \end{lem} \begin{proof} Let $d$ be the common degree $\deg f|_{C'} = \deg f|_C$.
Then \[ \phi_{C'}(p)=d[p] - C'|_{F_b} = d[p] - C|_{F_b} + (C-C')|_{F_b} = \phi_{C}(p)+(C-C')|_{F_b}\in j^*b, \] so $\phi_{C'}(p)$ and $\phi_{C}(p)$ differ by the translation of $j^*b$ by the degree-zero class $(C-C')|_{F_b}$. These translations along the fibers glue to an automorphism of $J$ over $B$, which we denote by $\phi_{C-C'}$. In other words, we have a commutative diagram \begin{equation}\label{eq: CC'} \begin{tikzpicture}[baseline=(current bounding box.center)] \node[name = S] at (1.5, 1) {$S$}; \node[name = J1] at (0, 0) {$J$}; \node[name = J2] at (3, 0) {$J$}; \draw[->, dashed] (S)--node[above]{\tiny $\phi_C$}(J1); \draw[->, dashed] (S)--node[above]{\tiny $\phi_{C'}$}(J2); \draw[->] (J1)--node[above]{\tiny $\phi_{C-C'}$}(J2); \end{tikzpicture} \end{equation} where $\phi_{C-C'}\in \mathrm{Aut}_B(J)$ induces translations on general fibers of $j\colon J\rightarrow B$. By Lemmas~\ref{lem: trans id f kernel}, \ref{lem: trans id 1-forms} and \ref{lem: fib trans}, $\phi_{C-C'}$ induces the identity map on $ \mathrm{CH}_0(J)_{j,\mathbb{Q}}$ as well as on $ H^{i,0}(J)$. In view of \eqref{eq: CC'}, the desired equalities $\phi_{C*} = \phi_{C'*}$ and $\phi_C^* = \phi_{C'}^*$ follow.
\end{proof} By the universal property of $J$, any automorphism $\sigma\in \mathrm{Aut}_f(S)$ preserving the fibration $f$ induces an automorphism $\sigma_J\in \mathrm{Aut}_j(J)$ such that they induce the same automorphism $\sigma_B\in \mathrm{Aut}(B)$ on the base curve $B$ and the following diagram is commutative: \begin{equation}\label{diag: ind aut J} \begin{tikzpicture}[baseline=(current bounding box.center), scale=.8] \node[name=S1] at (0, 1.5) {$S$}; \node[name=S2] at (0,0) {$S$}; \node[name=J1] at (2, 1.5){$J$}; \node[name=J2] at (2, 0){$J$}; \draw[->](S1)--node[left]{\tiny $\sigma$}(S2); \draw[->, dashed](S1)--node[above]{\tiny $\phi_C$}(J1); \draw[->](J1)--node[right]{\tiny $\sigma_J$}(J2); \draw[->, dashed](S2)--node[above]{\tiny $\phi_{\sigma(C)}$}(J2); \end{tikzpicture} \end{equation} This defines a group homomorphism $\mathrm{Aut}_f(S) \rightarrow \mathrm{Aut}_j(J)$, $\sigma\mapsto\sigma_J$. For a point $b\in B$ such that the fiber $F_b:=f^*b$ is smooth, we have $j^*b = \mathrm{Pic}^0(F_b)$, and for any $\alpha\in \mathrm{Pic}^0(F_b)$, \[ \sigma_J(\alpha) = (\sigma^{-1})^*(\alpha) \in \mathrm{Pic}^0(\sigma(F_b)) = j^*(\sigma_B(b)). \] \begin{lem}\label{lem: red to J} Let $f\colon S\rightarrow B$ be an elliptic fibration and let $j\colon J\rightarrow B$ be the Jacobian of $f$. Then an automorphism $\sigma\in \mathrm{Aut}_f(S)$ acts as the identity on $\mathrm{CH}_0(S)_{f,\mathbb{Q}}$ (resp.~$\mathrm{CH}_0(S)_{\mathrm{alb},\mathbb{Q}}$, resp.~$H^{2,0}(S)$, resp.~$H^{1,0}(S)$) if and only if so does the induced automorphism $\sigma_J\in\mathrm{Aut}_j(J)$ on $\mathrm{CH}_0(J)_{j,\mathbb{Q}}$ (resp.~$\mathrm{CH}_0(J)_{\mathrm{alb},\mathbb{Q}}$, resp.~$H^{2,0}(J)$, resp.~$H^{1,0}(J)$). \end{lem} \begin{proof} It follows from Lemma~\ref{lem: CC' same} that $\phi_C$ and $\phi_{\sigma(C)}$ in \eqref{diag: ind aut J} induce the same maps on the Chow groups as well as on the spaces $H^{i,0}$, and they are all isomorphisms by Lemmas~\ref{lem: iso CH} and \ref{lem: inv J}.
In view of \eqref{diag: ind aut J}, the assertion of the lemma follows. \end{proof} \begin{cor}\label{cor: red to J} Let $f\colon S\rightarrow B$ be an elliptic fibration and let $j\colon J\rightarrow B$ be the Jacobian of $f$. Then $\sigma\in \mathrm{Aut}_f(S)$ acts trivially on $\mathrm{CH}_0(S)_{\mathrm{alb}}$ if and only if $\sigma_J\in \mathrm{Aut}_j(J)$ acts trivially on $\mathrm{CH}_0(J)_\mathrm{alb}$. \end{cor} \begin{proof} The natural maps $\mathrm{CH}_0(S)_\mathrm{alb}\rightarrow \mathrm{CH}_0(S)_{\mathrm{alb},\mathbb{Q}}$ and $\mathrm{CH}_0(J)_\mathrm{alb}\rightarrow \mathrm{CH}_0(J)_{\mathrm{alb},\mathbb{Q}}$ are injective by \cite{Roj80}, and hence the corollary follows from Lemma~\ref{lem: red to J}. \end{proof} The following lemma on the orders of the induced automorphisms $\sigma_J$ and $\sigma_B$ will be used in Section~\ref{sec: chi leq 2}. \begin{lem}\label{lem: same order} Let $f\colon S\rightarrow B$ be an elliptic fibration such that $p_g(S)>0$, and let $j\colon J\rightarrow B$ be the Jacobian of $f$. Let $\sigma\in\mathrm{Aut}_f(S)\cap\mathrm{Aut}_s(S)$ be a symplectic fibration-preserving automorphism. Then the induced automorphisms $\sigma_J\in\mathrm{Aut}_j(J)$ and $\sigma_B\in\mathrm{Aut}(B)$ have the same order. \end{lem} \begin{proof} For any integer $n$, we have \begin{equation}\label{eq: order J B} j\circ \sigma_J^n = \sigma_B^n\circ j. \end{equation} If the order $|\sigma_B|$ is infinite, then $\sigma_J$ necessarily has infinite order. Now suppose that $\sigma_B$ has finite order $m$. Then $\sigma^m$ lies in $\mathrm{Aut}_B(S)\cap \mathrm{Aut}_s(S)$. By Lemma~\ref{lem: fib trans}, $\sigma^m$ induces translations on the smooth fibers $F_b$. It follows that $\sigma_J^m$ induces the identity on $j^*b=\mathrm{Pic}^0(F_b)$ and is itself the identity. Therefore, $|\sigma_J|$ is finite, with order dividing $m=|\sigma_B|$. On the other hand, $m$ divides $|\sigma_J|$ by \eqref{eq: order J B}. We infer that $|\sigma_J| = |\sigma_B|$. \end{proof} \section{Elliptic surfaces with $\chi(\mathcal{O}_S)\leq 2$}\label{sec: chi leq 2} In this section we deal with elliptic surfaces $f\colon S\rightarrow B$ with $\chi(\mathcal{O}_S)\leq 2$. We need a lemma for the action of the Klein group $(\mathbb{Z}/2\mathbb{Z})^2$ on $\mathrm{CH}_0(S)$. It is based on the idea of ``enough automorphisms'' of \cite{IM79}. \begin{lem}\label{lem: Klein} Let $S$ be a smooth projective surface. Let $G=\langle \sigma, \tau \rangle\cong (\mathbb{Z}/2\mathbb{Z})^2$ be a subgroup of $\mathrm{Aut}(S)$ such that the smooth models of the quotient surfaces $S/\langle \tau\rangle$ and $S/\langle \sigma\tau\rangle$ are not of general type and have vanishing geometric genera. Then $\sigma$ induces the identity on $\mathrm{CH}_0(S)_{\mathrm{alb}}$. \end{lem} \begin{proof} Let $X_1$ and $X_2$ be the smooth models of $S/\langle \tau\rangle$ and $S/\langle \sigma\tau\rangle$ respectively. Then, by \cite{BKL76}, we have \[ \mathrm{CH}_0(S)_{\mathrm{alb}, \mathbb{Q}}^\tau = \mathrm{CH}_0(X_1)_{\mathrm{alb},\mathbb{Q}} = 0,\, \mathrm{CH}_0(S)_{\mathrm{alb}, \mathbb{Q}}^{\sigma\tau} = \mathrm{CH}_0(X_2)_{\mathrm{alb},\mathbb{Q}} = 0. \] Since $\tau$ and $\sigma\tau$ are involutions whose invariant parts in $\mathrm{CH}_0(S)_{\mathrm{alb}, \mathbb{Q}}$ vanish, they act as $-\mathrm{id}$ on $\mathrm{CH}_0(S)_{\mathrm{alb}, \mathbb{Q}}$. It follows that $\sigma = (\sigma\tau)\tau$ acts trivially on $\mathrm{CH}_0(S)_{\mathrm{alb}, \mathbb{Q}}$. Since $\mathrm{CH}_0(S)_{\mathrm{alb}}$ has no torsion by \cite{Roj80}, the lemma follows. \end{proof} \begin{lem}\label{lem: inv} Let $f\colon S\rightarrow B$ be a smooth projective elliptic surface. Then any symplectic involution of $S$ acts trivially on $\mathrm{CH}_0(S)_{\mathrm{alb}}$. \end{lem} \begin{proof} If $\kappa(S)\leq 0$, this is a consequence of \cite{BKL76} and \cite{Voi12}. In the following we assume that $\kappa(S)=1$. Let $\sigma\in \mathrm{Aut}_s(S)$ be a symplectic involution.
By Lemma~\ref{lem: red to J}, it suffices to prove that the induced automorphism $\sigma_J\in \mathrm{Aut}_s(J)$ acts trivially on $\mathrm{CH}_0(J)_\mathrm{alb}$, where $j\colon J\rightarrow B$ is the Jacobian fibration of $f$. Note that $|\sigma_J|\leq |\sigma|= 2$. If $\sigma_J=\mathrm{id}_J$, then there is nothing to prove. We can thus assume that $\sigma_J$ is also an involution. Note that the $o$-section of $j$ is preserved by $\sigma_J$. Let $\tau\in \mathrm{Aut}_B(J)$ be the involution that restricts to $-\mathrm{id}_F$ on a general fiber $F$ of $j$. Then the subgroup $G=\langle \sigma_J, \tau \rangle<\mathrm{Aut}_j(J)$, generated by $\sigma_J$ and $\tau$, is isomorphic to $(\mathbb{Z}/2\mathbb{Z})^2$. It is easy to see that the smooth models of the quotient surfaces $J/\langle\tau\rangle$ and $J/\langle \tau\sigma_J\rangle$ have vanishing geometric genera. By Lemma~\ref{lem: Klein}, $\sigma_{J}$ acts as the identity on $\mathrm{CH}_0(J)_\mathrm{alb}$. \end{proof} \begin{thm}\label{thm: chi eq 2} Let $f\colon S\rightarrow B$ be an elliptic fibration with $\chi(\mathcal{O}_S)=2$. Then $\mathrm{Aut}_f(S)\cap \mathrm{Aut}_s(S)$ acts trivially on $\mathrm{CH}_0(S)_{f,\mathbb{Q}}$. As a consequence, $\mathrm{Aut}_f(S)\cap \mathrm{Aut}_s(S)$ acts trivially on $\mathrm{CH}_0(S)_{\mathrm{alb}}$. \end{thm} \begin{proof} Since $\chi(\mathcal{O}_S)=2$, one has $p_g(S)=q(S)+1\geq 1$. Let $j\colon J\to B$ be the Jacobian fibration of $f$. By Lemma \ref{lem: red to J}, it suffices to show that $\sigma_J$ acts trivially on $\mathrm{CH}_0(J)_{j,\mathbb{Q}}$, where $\sigma_J\in \mathrm{Aut}_j(J)\cap \mathrm{Aut}_s(J)$ is the automorphism induced by $\sigma$. First we assume that $q(J)=0$. In this case $J$ is an elliptic $K3$ surface. Recall that the induced automorphisms $\sigma_J\in\mathrm{Aut}(J)$ and $\sigma_B\in\mathrm{Aut}(B)$ have the same order by Lemma~\ref{lem: same order}, which is finite by Corollary~\ref{cor: finite}. 
Now the triviality of the action of $\sigma_J$ on $\mathrm{CH}_0(J)_{j,\mathbb{Q}}$ follows from the results of Voisin \cite{Voi12} and Huybrechts \cite{Huy12}. Now we can assume that $q(J)>0$. By the canonical bundle formula, we have $|K_J|= j^* |K_B+L|$, where $L$ is a line bundle of degree $\chi(\mathcal{O}_J)=\chi(\mathcal{O}_S)=2$. Since $\deg(K_B+L)=2g(B)$, the linear system $|K_{B}+L| $ is base point free and the map $\varphi_{B}$ defined by $|K_{B}+L|$ is a morphism. It follows that the canonical map $\varphi_{J}$ of $J$ is a morphism which factors as \[ \varphi_{J} \colon J \xrightarrow{j} B \xrightarrow{\varphi_{B}} \mathbb{P}^{p_g-1} \] where $p_g:=p_g(J) = p_g(S)$. On the one hand, $\deg(K_B+L)=2g(B)=2(p_g-1)$ by Riemann--Roch. On the other hand, we know that $\deg(K_B+L)=\deg(\varphi_{B})\cdot \deg(\mathrm{Im}(\varphi_{B}))$ and $\deg(\mathrm{Im}(\varphi_{B}))\geq p_g-1$. So $\deg(\varphi_{B})\leq 2$. Since $\sigma_J$ acts trivially on $H^0(J, K_J)$, the morphism $\varphi_{B}$ factors through the quotient map $B\rightarrow B/\sigma_B$. Therefore, one has \[ |\sigma_J| = |\sigma_B| \leq \deg\varphi_{B} \leq 2. \] We are done by Lemma~\ref{lem: inv}. \end{proof} We have the following two immediate corollaries. \begin{cor} Let $S$ be a smooth projective surface with $\kappa(S)=1$ and $\chi(\mathcal{O}_S)=2$. Then $\mathrm{Aut}_s(S)$ acts trivially on $\mathrm{CH}_0(S)_{\mathrm{alb}}$. \end{cor} \begin{cor}\label{cor: K3} Let $f: S\to B$ be an elliptic $K3$ surface. Then $\mathrm{Aut}_f(S)\cap \mathrm{Aut}_s(S)$ acts trivially on $\mathrm{CH}_0(S)_{\mathrm{alb}}$. \end{cor} Next we treat the case where $\chi(\mathcal{O}_{S})=1$. \begin{thm}\label{thm: chi eq 1} Let $f:S\to B$ be a smooth projective elliptic surface with $\chi(\mathcal{O}_S)=1$. Then $\mathrm{Aut}_s(S)\cap\mathrm{Aut}_f(S)$ acts trivially on $\mathrm{CH}_0(S)_{\mathrm{alb}}$ if $p_g(S)=q(S)\notin\{1,2\}$. 
Otherwise, the image of the homomorphism $\mathrm{Aut}_s(S)\cap \mathrm{Aut}_f(S)\to \mathrm{Aut}(\mathrm{CH}_0(S)_{\mathrm{alb}})$ has order at most $3$. \end{thm}
\begin{proof} If $p_g(S)=q(S)=0$, then $\mathrm{CH}_0(S)_{\mathrm{alb}}=0$ by \cite{BKL76}, and there is nothing to prove. So we may assume that $p_g(S)=q(S)>0$. Let $j\colon J\rightarrow B$ be the Jacobian fibration of $f$. Let $\sigma_J\in \mathrm{Aut}_j(J)\cap\mathrm{Aut}_s(J)$ and $\sigma_B\in\mathrm{Aut}(B)$ be the automorphisms induced by $\sigma$. By Corollary~\ref{cor: red to J}, it suffices to show that $\sigma_J$ acts trivially on $\mathrm{CH}_0(J)_{\mathrm{alb}}$. Since $\chi(\mathcal{O}_S)=1$, the surface cannot be abelian or bielliptic, so $\sigma_B$ is of finite order by Corollary~\ref{cor: finite}. Also, $\sigma_J$ has the same finite order as $\sigma_B$ by Lemma~\ref{lem: same order}. By Lemma~\ref{lem: inv J}, $\chi(\mathcal{O}_J)=\chi(\mathcal{O}_{S})=1$ and thus $q(J) = p_g(J)$. Since $\chi(\mathcal{O}_J)>0$, the fibration $j\colon J\rightarrow B$ cannot be an elliptic quasi-bundle and it follows that $g(B) = q(J)$. By the canonical bundle formula, we have $|K_J|=j^{\ast}|K_B+L|$, where $L$ is an invertible sheaf on $B$ of degree $\chi(\mathcal{O}_J)=1$. Then the canonical map of $J$ factors through $j$ followed by the map $\varphi_{B}$ induced by the linear system $|K_B+L|$ as in the proof of Theorem~\ref{thm: chi eq 2}. If $g(B)=q(J)\geq 3$, then \[ |\sigma_J|=|\sigma_B|\leq \deg (\varphi_{B})\leq 2. \] In this case, $\sigma_J$ acts trivially on $\mathrm{CH}_0(J)_{\mathrm{alb}}$ by Lemma~\ref{lem: inv}. If $g(B)=q(J)=2$, then \[ |\sigma_J|\leq \deg (\varphi_{B})\leq 3. \] If $g(B)=q(J)=1$, then $\deg(K_B+L)=1$ and $|K_B+L|$ consists of a unique element, say $p\in B$, which is necessarily fixed by $\sigma_B$. In these last two cases, the order $|\sigma_J|$ is one of $\{1,2,3,4,6\}$.
It follows that either $\sigma_J^2$ or $\sigma_J^3$ has order at most $2$, and thus acts trivially on $\mathrm{CH}_0(J)_{\mathrm{alb}}$ by Lemma~\ref{lem: inv}. This completes the proof. \end{proof} Now we deal with the case where $\chi(\mathcal{O}_{S})=0$. \begin{thm}\label{chi eq 0} Let $S$ be a smooth projective surface with $\chi(\mathcal{O}_S)=0$. Then $\mathrm{Aut}_s(S)$ acts trivially on $\mathrm{CH}_0(S)_{\mathrm{alb}}$. \end{thm} \begin{proof} Since $\chi(\mathcal{O}_S)=0$, we have $\kappa(S)\leq 1$. If $\kappa(S)\leq 0$ or $p_g(S)=0$, then the assertion follows from \cite{BKL76}. Therefore, we can assume that $\kappa(S)=1$ and $p_g(S)>0$. We can also assume that $S$ is minimal. Let $f\colon S\to B$ be the Iitaka fibration of $S$ and $j\colon J\rightarrow B$ its Jacobian. For any $\sigma\in \mathrm{Aut}_s(S)$, the induced automorphism $\sigma_J\in \mathrm{Aut}_s(J)\cap \mathrm{Aut}_j(J)$ has finite order by Lemma~\ref{lem: same order} and Corollary~\ref{cor: finite}. By Lemma~\ref{lem: red to J}, it suffices to show that $\sigma_J$ induces the trivial action on $\mathrm{CH}_0(J)_{\mathrm{alb}, \mathbb{Q}}$. Since $\chi(\mathcal{O}_J)=\chi(\mathcal{O}_S) = 0$, $j$ is a quasi-bundle and hence $J$ is isogenous to a product of curves. It follows that the Chow motive $h(J)$ is finite dimensional in the sense of Kimura, and the assertion follows from Lemma \ref{lem:fdtoSBC}. \end{proof} \begin{rmk} It would be interesting to give a direct proof of Theorem~\ref{chi eq 0} without involving the theory of Chow motives. \end{rmk} The following lemma should be well-known to experts. We write down a proof for lack of an adequate reference. \begin{lemma}\label{lem:fdtoSBC} Let $S$ be a smooth projective surface with $p_g(S)>0$. Assume that the Chow motive $h(S)$ of $S$ is finite dimensional in the sense of Kimura \cite{Kim05}. Then any symplectic automorphism $\sigma\in \mathrm{Aut}_s(S)$ of finite order acts as the identity on $\mathrm{CH}_0(S)_{\mathrm{alb}}$. 
\end{lemma} \begin{proof} The Chow motive of $S$ has a Chow-K\"unneth decomposition (\cite[Proposition 7.2.1]{KMP07}): \[ h(S) = h_0(S) \oplus h_1(S) \oplus h_2(S) \oplus h_3(S) \oplus h_4(S) \] in the category of Chow motives with rational coefficients. There is a further decomposition for $h_2(S)$ (\cite[Proposition 7.2.3]{KMP07}): \[ h_2(S) = h_2^\mathrm{alg}(S) \oplus t_2(S), \] where $h_2^\mathrm{alg}(S)$ denotes the algebraic part and $t_2(S)$ the transcendental part. We have \[ \mathrm{CH}_0(t_2(S)) = \mathrm{CH}_0(S)_{\mathrm{alb},\mathbb{Q}}\text{ and } H^2(t_2(S))=H^2_{\mathrm{tr}}(S,\mathbb{Q}), \] where $H^2_{\mathrm{tr}}(S,\mathbb{Q})$ denotes the transcendental part of $H^2(S,\mathbb{Q})$. Since the motive $h(S)$ is finite dimensional in the sense of Kimura \cite{Kim05}, its direct summand $t_2(S)$ is also finite dimensional. For any symplectic automorphism $\sigma\in \mathrm{Aut}_s(S)$, it acts trivially on $H^2_{\mathrm{tr}}(S,\mathbb{Q}) = H^2(t_2(S))$. Therefore, $(\Gamma_\sigma -\Delta_S)_*\colon t_2(S)\rightarrow t_2(S)$ is a numerically trivial morphism, where $\Gamma_\sigma$ is the graph of $\sigma$ and $\Delta_S\subset S\times S$ is the diagonal. Then, by Kimura's nilpotence theorem \cite[Proposition 7.5]{Kim05}, $(\Gamma_\sigma -\Delta_S)_*$ is nilpotent as an endomorphism of $t_2(S)$. It follows that the action of $\sigma$ on $\mathrm{CH}_0(t_2(S)) = \mathrm{CH}_0(S)_{\mathrm{alb},\mathbb{Q}}$ is unipotent. Since $\sigma$ is of finite order, we infer that $\sigma$ acts as the identity on $ \mathrm{CH}_0(S)_{\mathrm{alb},\mathbb{Q}}$. It acts trivially also on $\mathrm{CH}_0(S)_{\mathrm{alb}}$ because $\mathrm{CH}_0(S)_{\mathrm{alb}}$ has no torsion by \cite{Roj80}. \end{proof} Finally, strengthening the hypothesis in Conjecture \ref{conj: sym}, we obtain \begin{thm} Let $S$ be a smooth projective surface with $\kappa(S)=1$. 
If an automorphism $\sigma\in \mathrm{Aut}(S)$ induces the trivial action on $H^{i,0}(S)$ for $i>0$, then it induces the trivial action on $\mathrm{CH}_0(S)_\mathrm{alb}$. \end{thm} \begin{proof} Let $f:S\to B$ be the Iitaka fibration of $S$ and $\sigma$ an automorphism of $S$ acting trivially on $H^{i,0}(S)$ for $i>0$. By Theorems~\ref{thm: chi geq 3}, \ref{thm: chi eq 2}, \ref{thm: chi eq 1} and \ref{chi eq 0}, it is enough to show the result for surfaces with $q(S)=p_g(S)\in\{1,2\}$. Let $\sigma_{B}\in \mathrm{Aut}(B)$ be the automorphism induced by $\sigma$. It suffices to show that $\sigma_B=\mathrm{id}_B$, since then $\sigma\in \mathrm{Aut}_B(S)\cap \mathrm{Aut}_s(S)$ and we can conclude by Proposition~\ref{prop: fib action}. Observe that, since $\sigma$ acts trivially on $H^{1,0}(S)$ and $f^{\ast}\colon H^{1,0}(B)\rightarrow H^{1,0}(S)$ is injective, $\sigma_{B}$ acts trivially on $H^{1,0}(B)$. It follows that $\sigma_B =\mathrm{id}_B$ if $g(B)= 2$. In case $g(B)=1$, the automorphism $\sigma_B$ is necessarily a translation. Since $\chi(\mathcal{O}_S)>0$, by the holomorphic Lefschetz fixed point formula, the fixed locus $S^{\sigma}$ is non-empty. Thus the translation $\sigma_B$ fixes a nonempty subset $f(S^{\sigma})\subset B$, and we infer that $\sigma_{B}=\mathrm{id}_{B}$. \end{proof} \end{document}
Multiple ergodic averages for three polynomials and applications Author: Nikos Frantzikinakis MSC (2000): Primary 37A45; Secondary 37A30, 28D05 DOI: https://doi.org/10.1090/S0002-9947-08-04591-1 Published electronically: April 25, 2008 Abstract: We find the smallest characteristic factor and a limit formula for the multiple ergodic averages associated to any family of three polynomials and polynomial families of the form $\{l_1p,l_2p,\ldots ,l_kp\}$. We then derive several multiple recurrence results and combinatorial implications, including an answer to a question of Brown, Graham, and Landman, and a generalization of the Polynomial Szemerédi Theorem of Bergelson and Leibman for families of three polynomials with not necessarily zero constant term. We also simplify and generalize a recent result of Bergelson, Host, and Kra, showing that for all $\varepsilon >0$ and every subset of the integers $\Lambda$ the set \[ \big \{n\in \mathbb {N}\colon d^*\big (\Lambda \cap (\Lambda +p_1(n))\cap (\Lambda +p_2(n))\cap (\Lambda + p_3(n))\big )>(d^*(\Lambda ))^4-\varepsilon \big \} \] has bounded gaps for "most" choices of integer polynomials $p_1,p_2,p_3$. L. M. Abramov, Metric automorphisms with quasi-discrete spectrum, Izv. Akad. Nauk SSSR Ser. Mat. 26 (1962), 513–530 (Russian). MR 0143040 L. Auslander, L. Green, F. Hahn. Flows on homogeneous spaces, With the assistance of L. Markus and W. Massey, and an appendix by L. Greenberg, Annals of Mathematics Studies, 53, Princeton University Press, Princeton, N.J. (1963). F. A. Behrend, On sets of integers which contain no three terms in arithmetical progression, Proc. Nat. Acad. Sci. U.S.A. 32 (1946), 331–332. MR 18694, DOI https://doi.org/10.1073/pnas.32.12.331 Daniel Berend and Yuri Bilu, Polynomials with roots modulo every integer, Proc. Amer. Math. Soc. 124 (1996), no. 6, 1663–1671. MR 1307495, DOI https://doi.org/10.1090/S0002-9939-96-03210-8 V. Bergelson, Weakly mixing PET, Ergodic Theory Dynam. Systems 7 (1987), no. 
3, 337–349. MR 912373, DOI https://doi.org/10.1017/S0143385700004090 Vitaly Bergelson, Bernard Host, and Bryna Kra, Multiple recurrence and nilsequences, Invent. Math. 160 (2005), no. 2, 261–303. With an appendix by Imre Ruzsa. MR 2138068, DOI https://doi.org/10.1007/s00222-004-0428-6 V. Bergelson and A. Leibman, Polynomial extensions of van der Waerden's and Szemerédi's theorems, J. Amer. Math. Soc. 9 (1996), no. 3, 725–753. MR 1325795, DOI https://doi.org/10.1090/S0894-0347-96-00194-4 V. Bergelson, A. Leibman, E. Lesigne. Weyl complexity of a system of polynomials and constructions in combinatorial number theory, to appear in J. Analyse Math. Tom C. Brown, Ronald L. Graham, and Bruce M. Landman, On the set of common differences in van der Waerden's theorem on arithmetic progressions, Canad. Math. Bull. 42 (1999), no. 1, 25–36. MR 1695890, DOI https://doi.org/10.4153/CMB-1999-003-9 Jean-Pierre Conze and Emmanuel Lesigne, Théorèmes ergodiques pour des mesures diagonales, Bull. Soc. Math. France 112 (1984), no. 2, 143–175 (French, with English summary). MR 788966 Jean-Pierre Conze and Emmanuel Lesigne, Sur un théorème ergodique pour des mesures diagonales, Probabilités, Publ. Inst. Rech. Math. Rennes, vol. 1987, Univ. Rennes I, Rennes, 1988, pp. 1–31 (French). MR 989141 Harry Furstenberg, Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic progressions, J. Analyse Math. 31 (1977), 204–256. MR 498471, DOI https://doi.org/10.1007/BF02813304 H. Furstenberg, Recurrence in ergodic theory and combinatorial number theory, Princeton University Press, Princeton, N.J., 1981. M. B. Porter Lectures. MR 603625 H. Furstenberg and Y. Katznelson, An ergodic Szemerédi theorem for commuting transformations, J. Analyse Math. 34 (1978), 275–291 (1979). MR 531279, DOI https://doi.org/10.1007/BF02790016 Nikos Frantzikinakis, Uniformity in the polynomial Wiener-Wintner theorem, Ergodic Theory Dynam. Systems 26 (2006), no. 4, 1061–1071. 
MR 2246591, DOI https://doi.org/10.1017/S0143385706000204 Nikos Frantzikinakis and Bryna Kra, Polynomial averages converge to the product of integrals, Israel J. Math. 148 (2005), 267–276. Probability in mathematics. MR 2191231, DOI https://doi.org/10.1007/BF02775439 Nikos Frantzikinakis and Bryna Kra, Ergodic averages for independent polynomials and applications, J. London Math. Soc. (2) 74 (2006), no. 1, 131–142. MR 2254556, DOI https://doi.org/10.1112/S0024610706023374 B. Green, A Szemerédi-type regularity lemma in abelian groups, with applications, Geom. Funct. Anal. 15 (2005), no. 2, 340–376. MR 2153903, DOI https://doi.org/10.1007/s00039-005-0509-8 Bernard Host and Bryna Kra, Nonconventional ergodic averages and nilmanifolds, Ann. of Math. (2) 161 (2005), no. 1, 397–488. MR 2150389, DOI https://doi.org/10.4007/annals.2005.161.397 Bernard Host and Bryna Kra, Convergence of polynomial ergodic averages, Israel J. Math. 149 (2005), 1–19. Probability in mathematics. MR 2191208, DOI https://doi.org/10.1007/BF02772534 P. Koester. An extension of a method of Behrend in additive combinatorics, Online Journal of Analytic Combinatorics, (2008), no. 3. A. Leibman, Polynomial mappings of groups, Israel J. Math. 129 (2002), 29–60. MR 1910931, DOI https://doi.org/10.1007/BF02773152 A. Leibman, Pointwise convergence of ergodic averages for polynomial sequences of translations on a nilmanifold, Ergodic Theory Dynam. Systems 25 (2005), no. 1, 201–213. MR 2122919, DOI https://doi.org/10.1017/S0143385704000215 A. Leibman, Pointwise convergence of ergodic averages for polynomial actions of ${\Bbb Z}^d$ by translations on a nilmanifold, Ergodic Theory Dynam. Systems 25 (2005), no. 1, 215–225. MR 2122920, DOI https://doi.org/10.1017/S0143385704000227 A. Leibman, Convergence of multiple ergodic averages along polynomials of several variables, Israel J. Math. 146 (2005), 303–315. MR 2151605, DOI https://doi.org/10.1007/BF02773538 A. Leibman. 
Host-Kra and Ziegler factors and convergence of multiple averages, Handbook of Dynamical Systems, vol. 1B, Elsevier, (2005), 841–853. A. Leibman. Orbit of the diagonal of a power of a nilmanifold, Preprint, Available at http://www.math.ohio-state.edu/leibman/preprints/OrbDiag.pdf Emmanuel Lesigne, Sur une nil-variété, les parties minimales associées à une translation sont uniquement ergodiques, Ergodic Theory Dynam. Systems 11 (1991), no. 2, 379–391 (French, with English summary). MR 1116647, DOI https://doi.org/10.1017/S0143385700006209 William Parry, Ergodic properties of affine transformations and flows on nilmanifolds, Amer. J. Math. 91 (1969), 757–771. MR 260975, DOI https://doi.org/10.2307/2373350 Karl Petersen, Ergodic theory, Cambridge Studies in Advanced Mathematics, vol. 2, Cambridge University Press, Cambridge, 1989. Corrected reprint of the 1983 original. MR 1073173 Imre Z. Ruzsa, Solving a linear equation in a set of integers. I, Acta Arith. 65 (1993), no. 3, 259–282. MR 1254961, DOI https://doi.org/10.4064/aa-65-3-259-282 E. Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, Acta Arith. 27 (1975), 199–245. MR 369312, DOI https://doi.org/10.4064/aa-27-1-199-245 Peter Walters, An introduction to ergodic theory, Graduate Texts in Mathematics, vol. 79, Springer-Verlag, New York-Berlin, 1982. MR 648108 T. Ziegler, A non-conventional ergodic theorem for a nilsystem, Ergodic Theory Dynam. Systems 25 (2005), no. 4, 1357–1370. MR 2158410, DOI https://doi.org/10.1017/S0143385703000518 Tamar Ziegler, Universal characteristic factors and Furstenberg averages, J. Amer. Math. Soc. 20 (2007), no. 1, 53–97. MR 2257397, DOI https://doi.org/10.1090/S0894-0347-06-00532-7
Journal: Transactions of the American Mathematical Society
Nikos Frantzikinakis
Affiliation: Department of Mathematics, University of Memphis, Memphis, Tennessee 38152-3240
Email: [email protected]
Keywords: Characteristic factor, multiple ergodic averages, multiple recurrence, polynomial Szemerédi.
Received by editor(s): October 17, 2006
Additional Notes: The author was partially supported by NSF grant DMS-0111298.
The copyright for this article reverts to public domain 28 years after publication.
# M.Sc. MATHEMATICS

## MAL-523 METHODS OF APPLIED MATHEMATICS

DIRECTORATE OF DISTANCE EDUCATION, GURU JAMBHESHWAR UNIVERSITY OF SCIENCE AND TECHNOLOGY, HISAR-125001

## Contents

1 Fourier Transforms
2 Applications of Fourier Transforms
3 Curvilinear Co-ordinates
4 Random Variable and Mathematical Expectation
5 Moments and Moment Generating Functions
6 Theoretical Discrete Distributions
7 Theoretical Continuous Distributions
8 Multiple and Partial Correlation

Written by: Prof. Kuldip Bansal, Department of Mathematics, G.J. University of Science & Technology, Hisar.

## LESSON-1

Integral Transform: The integral transform of a function $f(t)$ is defined by the equation $g(s)=\int_{a}^{b} f(t)\, k(s, t)\, dt$, where $k(s, t)$ is a known function of $s$ and $t$, called the kernel of the transform; $s$ is called the parameter of the transform, and $f(t)$ is called the inverse transform of $g(s)$.
Some of the well known transforms are given as under:

## (1) Laplace transform:-

When $k(s, t)=e^{-st}$, we have the Laplace transform of $f(t)$ as:
$$ g(s)=\int_{0}^{\infty} f(t)\, e^{-st}\, dt $$
We can also write
$$ L[f(t)]=g(s) \text{ or } F(s) \text{ or } \bar{f}(s) $$

## (2) Fourier transform:-

When $k(s, t)=\frac{1}{\sqrt{2 \pi}} e^{-ist}$, we have the Fourier transform of $f(t)$ as
$$ F[f(t)]=g(s)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t)\, e^{-ist}\, dt $$

## (3) Mellin transform:-

When $k(s, t)=t^{s-1}$, we have the Mellin transform of $f(t)$ as:
$$ M[f(t)]=g(s)=\int_{0}^{\infty} f(t)\, t^{s-1}\, dt $$

## (4) Hankel transform (Fourier–Bessel):-

When $k(s, t)=t J_{n}(st)$, we have the Hankel transform of $f(t)$ as:
$$ g(s)=\int_{0}^{\infty} f(t)\, t J_{n}(st)\, dt $$

Fourier Transform:- Let $f(t)$ be a function defined on $(-\infty, \infty)$, piecewise continuous on every finite interval and absolutely integrable on $(-\infty, \infty)$, i.e., such that
$$ \int_{-\infty}^{\infty}|f(t)|\, dt<\infty. $$
Then the function
$$ \bar{f}(s)=F[f(t)]=F(s)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t)\, e^{-ist}\, dt $$
is called the Fourier transform (F.T.) of $f(t)$, and
$$ F^{-1}[F(s)]=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} F(s)\, e^{ist}\, ds=f(t) $$
is called the inverse Fourier transform of $F(s)$.
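To make the general definition $g(s)=\int_{a}^{b} f(t)\,k(s,t)\,dt$ concrete, here is a small numerical sketch (not part of the lesson; the function names, grid size, and truncation point are my own choices). It approximates a Laplace transform by a midpoint rule and checks it against the known closed form $L[e^{-2t}](s)=1/(s+2)$.

```python
import math

def integral_transform(f, kernel, a, b, s, n=100_000):
    """Midpoint-rule approximation of g(s) = integral of f(t) * k(s, t) over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) * kernel(s, a + (i + 0.5) * h)
                   for i in range(n))

# Laplace kernel k(s, t) = e^{-s t}; with f(t) = e^{-2t} the exact
# transform is g(s) = 1/(s + 2), so g(1) should be very close to 1/3.
# The infinite upper limit is truncated at t = 40, where e^{-3t} is negligible.
g1 = integral_transform(lambda t: math.exp(-2 * t),
                        lambda s, t: math.exp(-s * t),
                        0.0, 40.0, s=1.0)
print(g1)  # ≈ 0.33333
```

The same helper evaluates any of the four kernels above by swapping the `kernel` argument.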
## Remarks:

(i) If $F(s)=\frac{1}{2 \pi} \int_{-\infty}^{\infty} f(t)\, e^{-ist}\, dt$, then
$$ F^{-1}[F(s)]=\int_{-\infty}^{\infty} F(s)\, e^{ist}\, ds $$

(ii) If $F(s)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t)\, e^{-ist}\, dt$, then
$$ F^{-1}[F(s)]=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} F(s)\, e^{ist}\, ds $$

(iii) If $F(s)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t)\, e^{ist}\, dt$, then
$$ f(t)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} F(s)\, e^{-ist}\, ds $$

## Fourier cosine and sine transform

$$ F_{c}(s)=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \cos st\, dt $$
is called the Fourier cosine transform (FCT) and
$$ F_{s}(s)=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \sin st\, dt $$
is called the Fourier sine transform (FST) of $f(t)$. The functions
$$ f_{c}(t)=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} F_{c}(s) \cos st\, ds, \qquad f_{s}(t)=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} F_{s}(s) \sin st\, ds $$
are called the inverse Fourier cosine and inverse Fourier sine transforms of $F_{c}(s)$ and $F_{s}(s)$, respectively.
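The cosine-transform definition above can be checked numerically against a standard closed form: for $f(t)=e^{-t}$ one has $F_c(s)=\sqrt{2/\pi}\,/(1+s^2)$. The sketch below (my own helper, with an assumed truncation of the infinite range) verifies this at one value of $s$.

```python
import math

def fourier_cosine_transform(f, s, upper=60.0, n=120_000):
    """Midpoint-rule approximation of F_c(s) = sqrt(2/pi) * integral_0^inf f(t) cos(st) dt.
    The infinite range is truncated at `upper`, valid when f decays rapidly."""
    h = upper / n
    total = sum(f((i + 0.5) * h) * math.cos(s * (i + 0.5) * h) for i in range(n))
    return math.sqrt(2 / math.pi) * h * total

# Closed form for f(t) = e^{-t}: F_c(s) = sqrt(2/pi) / (1 + s^2).
s = 2.0
numeric = fourier_cosine_transform(lambda t: math.exp(-t), s)
exact = math.sqrt(2 / math.pi) / (1 + s * s)
print(numeric, exact)  # the two values agree to several decimal places
```

Replacing `math.cos` with `math.sin` gives the corresponding FST approximation.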
## Properties of Fourier Transform

## (1) Linearity Property

If $f_{1}(t)$ and $f_{2}(t)$ are functions with Fourier transforms $F_{1}(s)$ and $F_{2}(s)$, respectively, and $c_{1}$, $c_{2}$ are constants, then the Fourier transform of $c_{1} f_{1}(t)+c_{2} f_{2}(t)$ is
$$ F\left[c_{1} f_{1}(t)+c_{2} f_{2}(t)\right]=c_{1} F\left[f_{1}(t)\right]+c_{2} F\left[f_{2}(t)\right]=c_{1} F_{1}(s)+c_{2} F_{2}(s) $$

Proof: By definition,
$$ \begin{aligned} \text{L.H.S.} &=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-ist}\left[c_{1} f_{1}(t)+c_{2} f_{2}(t)\right] dt \\ &=\frac{c_{1}}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-ist} f_{1}(t)\, dt+\frac{c_{2}}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-ist} f_{2}(t)\, dt \\ &=c_{1} F\left[f_{1}(t)\right]+c_{2} F\left[f_{2}(t)\right]=c_{1} F_{1}(s)+c_{2} F_{2}(s) \end{aligned} $$

## (2) Change of scale property or similarity theorem

If $a \neq 0$ is a real constant and $F(s)=F[f(t)]$, then
$$ F[f(at)]=\frac{1}{|a|} F\left(\frac{s}{a}\right) $$

Proof: For $a>0$,
$$ F[f(at)]=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(at)\, e^{-ist}\, dt $$
Put $at=x \Rightarrow dt=\frac{1}{a}\, dx$. Therefore
$$ F[f(at)]=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(x)\, e^{-i\left(\frac{s}{a}\right) x} \frac{1}{a}\, dx=\frac{1}{a} \cdot \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t)\, e^{-i\left(\frac{s}{a}\right) t}\, dt=\frac{1}{a} F\left(\frac{s}{a}\right) $$

For $a<0$:
$$ F[f(at)]=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(at)\, e^{-ist}\, dt $$
Put $at=x \Rightarrow dt=\frac{dx}{a}$. Since $a<0$, the limits of integration are reversed:
$$ F[f(at)]=\frac{1}{a} \cdot \frac{1}{\sqrt{2 \pi}} \int_{\infty}^{-\infty} f(x)\, e^{-i\left(\frac{s}{a}\right) x}\, dx=\frac{-1}{a} \cdot \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t)\, e^{-i\left(\frac{s}{a}\right) t}\, dt $$
$$ \Rightarrow \quad F[f(at)]=\frac{-1}{a} F\left(\frac{s}{a}\right) $$
Hence, $F[f(at)]=\frac{1}{|a|} F\left(\frac{s}{a}\right)$.

Particular case:- If $a=-1$, then
$$ F[f(-t)]=F(-s) $$

## (3) First shifting Property

If $F[f(t)]=F(s)$, then
$$ F[f(t-u)]=e^{-ius} F[f(t)]=e^{-ius} F(s) $$

Proof:- $F[f(t-u)]=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t-u)\, e^{-ist}\, dt$. Put $t-u=v \Rightarrow dt=dv$. Therefore
$$ F[f(t-u)]=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(v)\, e^{-is(v+u)}\, dv=e^{-isu} \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(v)\, e^{-isv}\, dv=e^{-isu} F[f(t)]. $$

(4).
Second shifting property $$ \text { If } \mathrm{F}(\mathrm{s})=\mathrm{F}[\mathrm{f}(\mathrm{t})] \text {, then } $$ $$ \mathrm{F}\left[\mathrm{f}(\mathrm{t}) \mathrm{e}^{\mathrm{iat}}\right]=\mathrm{F}(\mathrm{s}-\mathrm{a}) $$ Proof :- $F\left[f(t) e^{i a t}\right]=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t) e^{i a t} e^{-i s t} d t$ $$ =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t) e^{-i(s-a) t} d t=F(s-a) $$ ## (5) Symmetry property $$ \text { If } F(s)=F[f(t)] \text {, then } F[F(t)]=f(-s) $$ Proof : We know that $$ \sqrt{2 \pi} f(t)=\int_{-\infty}^{\infty} F(s) e^{i s t} d s $$ changing $t$ to $-t$, we have $$ \sqrt{2 \pi} \mathrm{f}(-\mathrm{t})=\int_{-\infty}^{\infty} \mathrm{F}(\mathrm{s}) \mathrm{e}^{-\mathrm{ist}} \mathrm{ds} $$ Interchanging $\mathrm{t} \& \mathrm{~s}$, we get $$ \begin{aligned} & \sqrt{2 \pi} f(-s)=\int_{-\infty}^{\infty} F(t) e^{-i s t} d t \\ & \Rightarrow \quad \mathrm{f}(-\mathrm{s})=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \mathrm{F}(\mathrm{t}) \mathrm{e}^{-\mathrm{ist}} \mathrm{dt} \\ & \Rightarrow \quad \mathrm{f}(-\mathrm{s})=\mathrm{F}[\mathrm{F}(\mathrm{t})] \end{aligned} $$ Example. 1. 
Find Fourier transform of $f(t)=\left\{\begin{array}{cl}e^{-\alpha t}, & t \geq 0, \alpha>0 \\ 0, & t<0\end{array}\right.$

Solution :- By definition, $F(s)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t) e^{-i s t} d t$

$$
\begin{aligned}
\Rightarrow \quad F(s) & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{0} f(t) e^{-i s t} d t+\frac{1}{\sqrt{2 \pi}} \int_{0}^{\infty} e^{-\alpha t} e^{-i s t} d t \\
& =\frac{1}{\sqrt{2 \pi}} \int_{0}^{\infty} e^{-(\alpha+i s) t} d t \quad[\text { using }(1)] \\
\Rightarrow \quad F(s) & =\frac{1}{\sqrt{2 \pi}}\left[\frac{e^{-(\alpha+i s) t}}{-(\alpha+i s)}\right]_{0}^{\infty} \\
& =\frac{1}{\sqrt{2 \pi}}\left[0+\frac{1}{\alpha+i s}\right]=\frac{\alpha-i s}{\sqrt{2 \pi}\left(\alpha^{2}+s^{2}\right)}
\end{aligned}
$$

Example. 2. Find the Fourier transform of $f(t)=\left\{\begin{array}{ll}1, & |t| \leq a \\ 0, & |t|>a\end{array}\right.$

Solution :- $F(s)=\frac{1}{\sqrt{2 \pi}}\left[\int_{-\infty}^{-a} f(t) e^{-i s t} d t+\int_{-a}^{a} f(t) e^{-i s t} d t+\int_{a}^{\infty} f(t) e^{-i s t} d t\right]=I_{1}+I_{2}+I_{3}$

Let $I_{1}=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{-a} f(t) e^{-i s t} d t$. Put $t=-u \Rightarrow d t=-d u$

$$
\therefore \quad I_{1}=\frac{1}{\sqrt{2 \pi}} \int_{\infty}^{a} f(-u) e^{i s u}(-d u)=\frac{1}{\sqrt{2 \pi}} \int_{a}^{\infty} f(-u) e^{i s u} d u=0 \quad[\because f(-u)=0 \text { for } u>a]
$$

Similarly $I_{3}=\frac{1}{\sqrt{2 \pi}} \int_{a}^{\infty} f(t) e^{-i s t} d t=0$

$$
\begin{aligned}
\therefore \quad I_{2} & =\frac{1}{\sqrt{2 \pi}} \int_{-a}^{a} f(t) e^{-i s t} d t=\frac{1}{\sqrt{2 \pi}} \int_{-a}^{a} 1 \cdot e^{-i s t} d t \quad[\because f(t)=1] \\
& =\frac{1}{\sqrt{2 \pi}}\left[\frac{e^{-i s t}}{-i s}\right]_{-a}^{a}=\frac{-1}{\sqrt{2 \pi}\, i s}\left[e^{-i a s}-e^{i a s}\right] \\
\Rightarrow \quad I_{2} & =\frac{1}{\sqrt{2 \pi}}\left(\frac{e^{i a s}-e^{-i a s}}{i s}\right) \\
& =\frac{1}{\sqrt{2 \pi}} \cdot \frac{2 \sin a s}{s}=\sqrt{\frac{2}{\pi}}\left(\frac{\sin a s}{s}\right), \quad s \neq 0
\end{aligned}
$$

When $s=0$, by L'Hospital's rule,

$$
F(0)=\lim _{s \rightarrow 0} \sqrt{\frac{2}{\pi}} \frac{\sin a s}{s}=\lim _{s \rightarrow 0} \sqrt{\frac{2}{\pi}} \frac{a \cos a s}{1}=\sqrt{\frac{2}{\pi}}\, a .
$$
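As a sanity check, the closed forms of Examples 1 and 2 can be compared against direct numerical quadrature of the defining integral. The sketch below assumes NumPy; the values $\alpha=2$, $a=1$, $s=1.3$ are arbitrary test choices.

```python
import numpy as np

# Approximate F(s) = (1/sqrt(2 pi)) * integral f(t) e^{-ist} dt by a grid sum.
def ft_num(fvals, t, s):
    dt = t[1] - t[0]
    return np.sum(fvals * np.exp(-1j * s * t)) * dt / np.sqrt(2.0 * np.pi)

s = 1.3

# Example 1: f(t) = e^{-alpha t} for t >= 0 (integrand vanishes for t < 0)
alpha = 2.0
t1 = np.linspace(0.0, 40.0, 400001)
ex1_num = ft_num(np.exp(-alpha * t1), t1, s)
ex1_exact = (alpha - 1j * s) / (np.sqrt(2.0 * np.pi) * (alpha**2 + s**2))

# Example 2: f(t) = 1 for |t| <= a (integrand vanishes outside [-a, a])
a = 1.0
t2 = np.linspace(-a, a, 200001)
ex2_num = ft_num(np.ones_like(t2), t2, s)
ex2_exact = np.sqrt(2.0 / np.pi) * np.sin(a * s) / s
```

In both cases the quadrature restricts to the support of $f$, so the only error is the grid-sum error, which is far below the tolerance used here.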
Example. 3. $f(t)=\left\{\begin{array}{ll}t, & |t| \leq a \\ 0, & |t|>a\end{array}\right.$

Solution :- $F(s)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t) e^{-i s t} d t$

$$
\begin{aligned}
& =\frac{1}{\sqrt{2 \pi}}\left[\int_{-\infty}^{-a} f(t) e^{-i s t} d t+\int_{-a}^{a} f(t) e^{-i s t} d t+\int_{a}^{\infty} f(t) e^{-i s t} d t\right] \\
& =\frac{1}{\sqrt{2 \pi}} \int_{-a}^{a} t e^{-i s t} d t \quad[\text { using }(1)] \\
& =\frac{1}{\sqrt{2 \pi}}\left[\left(t \frac{e^{-i s t}}{-i s}\right)_{-a}^{a}-\int_{-a}^{a}(1) \frac{e^{-i s t}}{-i s} d t\right] \\
& =\frac{1}{\sqrt{2 \pi}}\left[\frac{a}{-i s} e^{-i a s}+\frac{a}{-i s} e^{i a s}+\frac{1}{i s(-i s)}\left(e^{-i a s}-e^{i a s}\right)\right] \\
& =\frac{1}{\sqrt{2 \pi}}\left[\frac{a}{-i s}\left(e^{i a s}+e^{-i a s}\right)-\frac{1}{s^{2}}\left(e^{i a s}-e^{-i a s}\right)\right] \\
\Rightarrow \quad F(s) & =\frac{1}{\sqrt{2 \pi}}\left[\frac{2 a \cos a s}{-i s}-\frac{2 i \sin a s}{s^{2}}\right] \\
& =\frac{2 i}{s^{2} \sqrt{2 \pi}}[a s \cos a s-\sin a s] \\
\Rightarrow \quad F(s) & =\sqrt{\frac{2}{\pi}} \frac{i}{s^{2}}[a s \cos a s-\sin a s]
\end{aligned}
$$

Example. 4. If $F(s)=F[f(t)]$, find the F. T.
of $\mathrm{f}(\mathrm{t}) \cos$ at Solution :- We know that $$ \cos a t=\frac{1}{2}\left(e^{i a t}+e^{-i a t}\right) $$ Then $F[f(t) \cos a t]=\frac{1}{2} F\left[f(t) e^{i a t}\right]+\frac{1}{2} F\left[f(t) e^{-i a t}\right]$ or prove by definition [By linearity property] $\Rightarrow \quad \mathrm{F}[\mathrm{f}(\mathrm{t}) \cos \mathrm{at}]=\frac{1}{2} \mathrm{~F}(\mathrm{~s}-\mathrm{a})+\frac{1}{2} \mathrm{~F}(\mathrm{~s}+\mathrm{a})$ [By using shifting property] Example. 5. If $F_{s}(s)$ and $F_{c}(s)$ are $F S T$ and $F C T$ of $f(t)$ respectively, then $$ \begin{aligned} & F_{s}[f(t) \cos a t]=\frac{1}{2}\left[F_{s}(s+a)+F_{s}(s-a)\right] \\ & F_{c}[f(t) \cos a t]=\frac{1}{2}\left[F_{c}(s+a)+F_{c}(s-a)\right] \\ & F_{s}[f(t) \sin a t]=\frac{1}{2}\left[F_{c}(s-a)-F_{c}(s+a)\right] \\ & F_{c}[f(t) \sin a t]=\frac{1}{2}\left[F_{s}(s+a)-F_{s}(s-a)\right] \end{aligned} $$ Solution :- (i) By definition of FST, $$ \begin{aligned} \mathrm{F}_{\mathrm{S}}[\mathrm{f}(\mathrm{t}) \cos \mathrm{at}] & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \mathrm{f}(\mathrm{t}) \cos \mathrm{at} \sin \mathrm{st} d \mathrm{t} \\ & =\frac{1}{2} \cdot \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \mathrm{f}(\mathrm{t}) 2 \cos \text { at } \sin \mathrm{st} d \mathrm{t} \end{aligned} $$ Using $\quad 2 \sin A \cos B=\sin (A+B)+\sin (A-B)$, $\Rightarrow \quad \mathrm{F}_{\mathrm{s}}[\mathrm{f}(\mathrm{t}) \cos \mathrm{at}]=\frac{1}{\sqrt{2 \pi}} \int_{0}^{\infty} \mathrm{f}(\mathrm{t})[\sin (\mathrm{s}+\mathrm{a}) \mathrm{t}+\sin (\mathrm{s}-\mathrm{a}) \mathrm{t}] \mathrm{dt}$ $$ \begin{aligned} & =\frac{1}{2}\left[\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \sin (s+a) t d t+\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \sin (s-a) t d t\right] \\ & =\frac{1}{2}\left[F_{S}(s+a)+F_{S}(s-a)\right] \end{aligned} $$ (ii) By definition FCT, $$ \begin{aligned} F_{c}[f(t) \cos a t] & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \cos a t \cos s t d t \\ & =\frac{1}{2} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} 
f(t)[\cos (s+a) t+\cos (s-a) t] d t
\end{aligned}
$$

$$
\begin{aligned}
& =\frac{1}{2}\left[\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \cos (s+a) t \, d t+\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \cos (s-a) t \, d t\right] \\
& =\frac{1}{2}\left[F_{c}(s+a)+F_{c}(s-a)\right]
\end{aligned}
$$

(iii) By definition of FST,

$$
\begin{aligned}
F_{s}[f(t) \sin a t] & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \sin a t \sin s t \, d t \\
& =\frac{1}{2} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t)[\cos (s-a) t-\cos (s+a) t] d t \\
& \quad[\because 2 \sin A \sin B=\cos (A-B)-\cos (A+B)] \\
& =\frac{1}{2}\left[\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \cos (s-a) t \, d t-\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \cos (s+a) t \, d t\right] \\
& =\frac{1}{2}\left[F_{c}(s-a)-F_{c}(s+a)\right]
\end{aligned}
$$

(iv) By definition of FCT,

$$
\begin{aligned}
F_{c}[f(t) \sin a t] & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \sin a t \cos s t \, d t \\
& =\frac{1}{2} \cdot \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t)[\sin (s+a) t-\sin (s-a) t] d t \\
& \quad[\text { using } 2 \cos A \sin B=\sin (A+B)-\sin (A-B)] \\
& =\frac{1}{2}\left[\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \sin (s+a) t \, d t-\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \sin (s-a) t \, d t\right] \\
& =\frac{1}{2}\left[F_{s}(s+a)-F_{s}(s-a)\right] .
\end{aligned}
$$

Example. 6. Find Fourier sine \& cosine transforms of $e^{-a t}, a>0$.
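Identity (i) of Example 5 can be spot-checked numerically before moving on. The sketch below (assuming NumPy) takes $f(t)=e^{-t}$ and, since $F_{s}[e^{-t}]=\sqrt{2/\pi}\, s/(1+s^{2})$ (Example 6 below with $a=1$), also compares against the closed form; the parameters $a=2$, $s=3$ are arbitrary.

```python
import numpy as np

# Fourier sine transform sqrt(2/pi) * integral_0^inf g(t) sin(st) dt by a grid sum.
t = np.linspace(0.0, 50.0, 500001)
dt = t[1] - t[0]

def fst(g, s):
    return np.sqrt(2.0 / np.pi) * np.sum(g * np.sin(s * t)) * dt

f = np.exp(-t)                                 # test function f(t) = e^{-t}
a, s = 2.0, 3.0                                # arbitrary test parameters

lhs = fst(f * np.cos(a * t), s)                # F_s[f(t) cos at](s)
rhs = 0.5 * (fst(f, s + a) + fst(f, s - a))    # (1/2)[F_s(s+a) + F_s(s-a)]
exact = 0.5 * np.sqrt(2.0 / np.pi) * (
    (s + a) / (1 + (s + a)**2) + (s - a) / (1 + (s - a)**2)
)
```

The first comparison holds almost exactly (it is the product-to-sum identity applied pointwise on the grid); the second verifies the quadrature against the analytic transform.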
Solution :- Let $\int_{0}^{\infty} e^{-a t} \cos s t \, d t=I_{1}$

$$
\int_{0}^{\infty} e^{-a t} \sin s t \, d t=I_{2}
$$

Integrating (1) by parts,

$$
\begin{aligned}
I_{1} & =\left[-\frac{1}{a} e^{-a t} \cos s t\right]_{0}^{\infty}-\frac{s}{a} \int_{0}^{\infty} e^{-a t} \sin s t \, d t \\
& =\frac{1}{a}-\frac{s}{a} I_{2}
\end{aligned}
$$

Integrating (2) by parts, we have

$$
I_{2}=\frac{s}{a} I_{1}
$$

Solving (3) and (4) for $I_{1}$ \& $I_{2}$,

$$
\begin{aligned}
& I_{1}=\frac{1}{a}-\frac{s}{a} \cdot \frac{s}{a} I_{1} \\
\Rightarrow \quad & I_{1}\left(1+\frac{s^{2}}{a^{2}}\right)=\frac{1}{a} \Rightarrow I_{1}=\frac{1}{a} \cdot \frac{a^{2}}{s^{2}+a^{2}} \\
\Rightarrow \quad & I_{1}=\frac{a}{s^{2}+a^{2}}
\end{aligned}
$$

and $I_{2}=\frac{s}{a} \cdot \frac{a}{s^{2}+a^{2}}=\frac{s}{s^{2}+a^{2}}$

Hence $F_{c}\left[e^{-a t}\right]=\sqrt{\frac{2}{\pi}} I_{1}=\sqrt{\frac{2}{\pi}} \frac{a}{s^{2}+a^{2}} \quad(5)$

and $F_{s}\left[e^{-a t}\right]=\sqrt{\frac{2}{\pi}} I_{2}=\sqrt{\frac{2}{\pi}} \frac{s}{s^{2}+a^{2}} \quad(6)$

Extensions :- Differentiating both sides of (5) w.r.t.
'$a$', we find

$$
F_{c}\left[-t e^{-a t}\right]=\sqrt{\frac{2}{\pi}}\left[\frac{\left(s^{2}+a^{2}\right) \cdot 1-a \cdot 2 a}{\left(s^{2}+a^{2}\right)^{2}}\right]
$$

$\Rightarrow \quad-F_{c}\left[t e^{-a t}\right]=\sqrt{\frac{2}{\pi}}\left[\frac{s^{2}-a^{2}}{\left(s^{2}+a^{2}\right)^{2}}\right]$

$\Rightarrow \quad F_{c}\left[t e^{-a t}\right]=\sqrt{\frac{2}{\pi}}\left[\frac{a^{2}-s^{2}}{\left(s^{2}+a^{2}\right)^{2}}\right]$

Differentiating both sides of (6) w.r.t. '$a$', we get

$$
\begin{aligned}
& F_{s}\left[-t e^{-a t}\right]=\sqrt{\frac{2}{\pi}}\left[\frac{\left(s^{2}+a^{2}\right) \cdot 0-s \cdot 2 a}{\left(s^{2}+a^{2}\right)^{2}}\right] \\
\Rightarrow \quad-F_{s}\left[t e^{-a t}\right] & =-\sqrt{\frac{2}{\pi}} \frac{2 a s}{\left(s^{2}+a^{2}\right)^{2}} \\
\Rightarrow \quad F_{s}\left[t e^{-a t}\right] & =\sqrt{\frac{2}{\pi}} \frac{2 a s}{\left(s^{2}+a^{2}\right)^{2}} .
\end{aligned}
$$

Put $a=1$ in (5), (6); we get

$$
\begin{aligned}
& F_{s}\left[e^{-t}\right]=\sqrt{\frac{2}{\pi}} \frac{s}{s^{2}+1} \\
& F_{c}\left[e^{-t}\right]=\sqrt{\frac{2}{\pi}} \cdot \frac{1}{s^{2}+1}
\end{aligned}
$$

Results :

$$
\begin{aligned}
& \int e^{\alpha t} \cos \beta t \, d t=\frac{e^{\alpha t}}{\alpha^{2}+\beta^{2}}(\alpha \cos \beta t+\beta \sin \beta t) \\
& \int e^{\alpha t} \sin \beta t \, d t=\frac{e^{\alpha t}}{\alpha^{2}+\beta^{2}}(\alpha \sin \beta t-\beta \cos \beta t)
\end{aligned}
$$

Example. 7 :- Find F. T. of $f(t)=e^{-t^{2} / 2}$

Solution :- By definition, $F\left[e^{-\frac{t^{2}}{2}}\right]=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-\frac{t^{2}}{2}} e^{-i s t} d t$

$$
\begin{aligned}
\Rightarrow \quad F\left[e^{-t^{2} / 2}\right] & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2}(t+i s)^{2}} \cdot e^{\frac{-s^{2}}{2}} d t \\
& =\frac{e^{-s^{2} / 2}}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2}(t+i s)^{2}} d t
\end{aligned}
$$

Put $\frac{1}{\sqrt{2}}(t+i s)=y \Rightarrow d t=\sqrt{2} \, d y$

$\Rightarrow \quad F\left[e^{-t^{2} / 2}\right]=\frac{e^{-s^{2} / 2}}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-y^{2}} \cdot \sqrt{2} \, d y$

$$
=\frac{e^{-s^{2} / 2}}{\sqrt{\pi}} \int_{-\infty}^{\infty} e^{-y^{2}} d y=\frac{e^{-s^{2} / 2}}{\sqrt{\pi}} \cdot \sqrt{\pi}
$$

$\Rightarrow \quad F\left[e^{-t^{2} /
2}\right]=e^{-s^{2} / 2}, \quad F_{c}\left[e^{-t^{2} / 2}\right]=e^{-s^{2} / 2}$

Example. 8. Find Fourier cosine transform of $f(t)=e^{-t^{2}}$

Solution :- $F_{c}\left[e^{-t^{2}}\right]=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} e^{-t^{2}} \cos s t \, d t=I$

Differentiating w.r.t. '$s$', we get

$$
\begin{aligned}
\frac{d I}{d s} & =-\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} t e^{-t^{2}} \sin s t \, d t \\
& =\frac{1}{2} \cdot \sqrt{\frac{2}{\pi}} \int_{0}^{\infty}\left(-2 t e^{-t^{2}}\right) \sin s t \, d t
\end{aligned}
$$

Integrating by parts,

$$
\begin{aligned}
\frac{d I}{d s} & =\frac{1}{2} \cdot \sqrt{\frac{2}{\pi}}\left[\left.\sin s t \, e^{-t^{2}}\right|_{0} ^{\infty}-s \int_{0}^{\infty} \cos s t \cdot e^{-t^{2}} d t\right] \\
& =-\frac{s}{2} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} e^{-t^{2}} \cos s t \, d t \\
\Rightarrow \quad \frac{d I}{d s} & =-\frac{s}{2} I \quad[\text { using }(1)] \\
\Rightarrow \quad \frac{d I}{I} & =\frac{-s}{2} d s
\end{aligned}
$$

Integrating, $\log I=-\frac{s^{2}}{4}+\log A$

$\Rightarrow \quad I=A e^{-s^{2} / 4}$

When $s=0$, from (1), $I=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} e^{-t^{2}} d t=\sqrt{\frac{2}{\pi}} \cdot \frac{\sqrt{\pi}}{2}=\frac{1}{\sqrt{2}}$

Also when $s=0$, from (2) $\Rightarrow I=A$

$\therefore \quad$ from (3) \& (4), we get $A=\frac{1}{\sqrt{2}}$

$\therefore$ (2) gives $I=\frac{1}{\sqrt{2}} e^{-s^{2} / 4}$

Extension :-
$F_{c}\left[e^{-a^{2} t^{2}}\right]=\frac{1}{\sqrt{2}} \cdot \frac{1}{a} e^{-\frac{1}{4}\left(\frac{s}{a}\right)^{2}}$

$\Rightarrow \quad F_{c}\left[e^{-a^{2} t^{2}}\right]=\frac{1}{a \sqrt{2}} e^{\frac{-s^{2}}{4 a^{2}}} \quad\left[\text { using the change of scale property } F[f(a t)]=\frac{1}{|a|} F\left(\frac{s}{a}\right)\right]$

If $a=\frac{1}{\sqrt{2}}$, we have

$$
F_{c}\left[e^{-t^{2} / 2}\right]=e^{-s^{2} / 2}
$$

## Self-reciprocal function:

A function $f(t)$ with the property that $F[f(t)]=f(s)$ is said to be self-reciprocal under the Fourier transform; e.g. the function $e^{-t^{2} / 2}$ is self-reciprocal under the F.T. The function $e^{-t^{2} / 2}$ is also self-reciprocal under the F.C.T., since

$$
F_{c}\left[e^{-t^{2} / 2}\right]=e^{-s^{2} / 2}
$$

$\Rightarrow \quad \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} e^{-t^{2} / 2} \cos s t \, d t=e^{-s^{2} / 2}$

Differentiating w.r.t. '$s$' on both sides, we get

$$
F_{s}\left[t e^{-t^{2} / 2}\right]=s e^{-s^{2} / 2}
$$

Hence the function $t e^{-t^{2} / 2}$ is self-reciprocal under the Fourier sine transform.

Example. :- We know that if $f(t)= \begin{cases}1, & |t| \leq a \\ 0, & |t|>a\end{cases}$

Then $F(s)=\sqrt{\frac{2}{\pi}} \frac{\sin s a}{s}, \quad s \neq 0 \quad(1)$

By definition of F.
T., $F(s)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t) e^{-i s t} d t$, and by the inversion formula $f(t)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} F(s) e^{i s t} d s$

Putting $F(s)$ from (1), we get

$$
\begin{aligned}
& f(t)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \sqrt{\frac{2}{\pi}} \frac{\sin s a}{s} e^{i s t} d s \\
& \Rightarrow \quad f(t)=\frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\sin s a}{s} e^{i s t} d s= \begin{cases}1, & |t|<a \\ 0, & |t|>a\end{cases}
\end{aligned}
$$

L.H.S. $=\frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\sin s a}{s}[\cos s t+i \sin s t] d s$

$$
=\frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\sin s a}{s} \cos s t \, d s+\frac{i}{\pi} \int_{-\infty}^{\infty} \frac{\sin s a}{s} \sin s t \, d s
$$

Since the integrand in the second integral is an odd function of $s$, that integral is zero

$\Rightarrow \quad$ L.H.S. $=\frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\sin s a}{s} \cos s t \, d s$

$\Rightarrow \quad \int_{-\infty}^{\infty} \frac{\sin s a}{s} \cos s t \, d s=\left\{\begin{array}{lll}\pi & , & |t|<a \\ 0 & , & |t|>a \\ \pi / 2 & , & |t|=a\end{array}\right.$

or $\quad \int_{0}^{\infty} \frac{\sin s a \cos s t}{s} d s=\left\{\begin{array}{lll}\pi / 2 & , & |t|<a \\ 0 & , & |t|>a \\ \pi / 4 & , & |t|=a\end{array}\right. \quad(2)$

Evaluate $\int_{0}^{\infty} \frac{\sin s}{s} d s$

Solution : Put $t=0, a=1$ in (2) (so that $|t|<a$); we get

$$
\int_{0}^{\infty} \frac{\sin s}{s} d s=\frac{\pi}{2} .
$$

## Relation between Laplace Transform and Fourier Transform

Consider the function

$$
f(t)= \begin{cases}e^{-x t} \phi(t), & t>0 \\ 0, & t<0\end{cases}
$$

Taking the F.T. of $f(t)$,

$$
\begin{aligned}
F[f(t)]=F(s) & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-i s t} f(t) d t \\
& =\frac{1}{\sqrt{2 \pi}} \int_{0}^{\infty} e^{-i s t} e^{-x t} \phi(t) d t \\
& =\frac{1}{\sqrt{2 \pi}} \int_{0}^{\infty} e^{-(x+i s) t} \phi(t) d t \\
& =\frac{1}{\sqrt{2 \pi}} \int_{0}^{\infty} e^{-p t} \phi(t) d t, \quad \text { where } p=x+i s \\
\Rightarrow \quad F[f(t)] & =\frac{1}{\sqrt{2 \pi}} L[\phi(t)]
\end{aligned}
$$

## F.T. of Derivatives

If $F[f(t)]=F(s)$ \& $f(t) \rightarrow 0$ as $t \rightarrow \pm \infty$, then

$$
F\left[f^{\prime}(t)\right]=i s \, F[f(t)]=i s \, F(s)
$$

Proof :- By definition,

$$
\begin{aligned}
F\left[f^{\prime}(t)\right] & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f^{\prime}(t) e^{-i s t} d t \\
& =\frac{1}{\sqrt{2 \pi}}\left[e^{-i s t} f(t)\right]_{-\infty}^{\infty}-\frac{1}{\sqrt{2 \pi}}(-i s) \int_{-\infty}^{\infty} f(t) e^{-i s t} d t \\
& =i s \cdot \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t) e^{-i s t} d t \quad[\because f(t) \rightarrow 0 \text { as } t \rightarrow \pm \infty] \\
\Rightarrow \quad F\left[f^{\prime}(t)\right] & =i s \, F[f(t)]=i s \, F(s)
\end{aligned}
$$

Now

$$
F\left[f^{\prime
\prime}(t)\right]=i s \, F\left[f^{\prime}(t)\right]=(i s)^{2} F[f(t)]
$$

In general, we have

$$
\begin{aligned}
F\left[f^{(n)}(t)\right] & =i s \, F\left[f^{(n-1)}(t)\right] \\
& =(i s)^{n} F[f(t)]=(i s)^{n} F(s)
\end{aligned}
$$

Find Fourier sine \& cosine transforms of $f^{\prime}(t), f^{\prime \prime}(t)$.

Derivation :- By definition,

$$
\begin{aligned}
F_{c}\left[f^{\prime}(t)\right] & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f^{\prime}(t) \cos s t \, d t \\
& =\sqrt{\frac{2}{\pi}}[\cos s t \, f(t)]_{0}^{\infty}+s \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \sin s t \, d t
\end{aligned}
$$

Assuming $f(t) \rightarrow 0$ as $t \rightarrow \infty$,

$\Rightarrow \quad F_{c}\left[f^{\prime}(t)\right]=-\sqrt{\frac{2}{\pi}} f(0)+s F_{s}[f(t)] \quad(1)$

$$
\begin{aligned}
F_{s}\left[f^{\prime}(t)\right] & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f^{\prime}(t) \sin s t \, d t \\
& =\sqrt{\frac{2}{\pi}}[\sin s t \, f(t)]_{0}^{\infty}-s \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \cos s t \, d t
\end{aligned}
$$

Assuming $f(t) \rightarrow 0$ as $t \rightarrow \infty$,

$\Rightarrow \quad F_{s}\left[f^{\prime}(t)\right]=-s \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \cos s t \, d t$

$\Rightarrow \quad F_{s}\left[f^{\prime}(t)\right]=-s F_{c}[f(t)] \quad(2)$

Now

$$
\begin{aligned}
F_{c}\left[f^{\prime \prime}(t)\right] & =-\sqrt{\frac{2}{\pi}} f^{\prime}(0)+s F_{s}\left[f^{\prime}(t)\right] \quad[\text { using }(1)] \\
& =-f^{\prime}(0) \sqrt{\frac{2}{\pi}}+s\left[-s F_{c}[f(t)]\right] \quad[\text { by using }(2)] \\
\Rightarrow \quad F_{c}\left[f^{\prime \prime}(t)\right] & =-f^{\prime}(0) \sqrt{\frac{2}{\pi}}-s^{2} F_{c}[f(t)]
\end{aligned}
$$

and $\quad F_{s}\left[f^{\prime \prime}(t)\right]=-s F_{c}\left[f^{\prime}(t)\right] \quad[\text { using }(2)]$

$$
\begin{aligned}
& =-s\left\{-\sqrt{\frac{2}{\pi}} f(0)+s F_{s}[f(t)]\right\} \quad[\text { using }(1)] \\
\Rightarrow \quad F_{s}\left[f^{\prime \prime}(t)\right] & =\sqrt{\frac{2}{\pi}} s f(0)-s^{2} F_{s}[f(t)]
\end{aligned}
$$

Theorem :- If $F[f(t)]=F(s)$, then

$$
\frac{d^{n}}{d s^{n}} F[f(t)]=(-i)^{n} F\left[t^{n} f(t)\right], \quad n=1,2,3, \ldots
$$

Proof :- By definition,

$$
F[f(t)]=F(s)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t) e^{-i s t} d t
$$

Differentiate w.r.t.
'$s$' under the integral sign; we get

$$
\begin{aligned}
\frac{d}{d s} F(s) & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t) \frac{\partial}{\partial s}\left(e^{-i s t}\right) d t \\
& =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t)(-i t) e^{-i s t} d t \\
\Rightarrow \quad \frac{d}{d s} F(s) & =(-i) \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} t f(t) e^{-i s t} d t \\
\Rightarrow \quad \frac{d}{d s} F(s) & =(-i) F[t f(t)] \\
\text { Now } \quad \frac{d^{2}}{d s^{2}} F(s) & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty}(-i t)^{2} f(t) e^{-i s t} d t \\
\Rightarrow \quad \frac{d^{2}}{d s^{2}} F(s) & =(-i)^{2} F\left[t^{2} f(t)\right]
\end{aligned}
$$

Generalising the result, we have

$$
\frac{d^{n}}{d s^{n}} F[f(t)]=(-i)^{n} F\left[t^{n} f(t)\right], \quad n=1,2, \ldots
$$

Theorem :- If $F[f(t)]=F(s)$ and $\int_{-\infty}^{\infty} f(t) d t=\sqrt{2 \pi}\, F(0)=0$, then $F\left[\int_{-\infty}^{t} f(x) d x\right]=\frac{1}{i s} F(s)$

Proof : Consider $\phi(t)=\int_{-\infty}^{t} f(x) d x$

Then $\quad \phi^{\prime}(t)=f(t)$

Hence if $F[\phi(t)]=\Phi(s)$, then $\quad F\left[\phi^{\prime}(t)\right]=F[f(t)]=i s \, \Phi(s)$

$\Rightarrow \quad \Phi(s)=\frac{1}{i s} F[f(t)]=\frac{F(s)}{i s}$

$\Rightarrow \quad F[\phi(t)]=\frac{F(s)}{i s}$

$\Rightarrow \quad F\left[\int_{-\infty}^{t} f(x) d x\right]=\frac{1}{i s} F(s)$

Convolution :- Let $f_{1}(t)$ and $f_{2}(t)$ be two given functions; the convolution of $f_{1}(t)$ \& $f_{2}(t)$ is defined by the function

$$
\begin{aligned}
f(t) & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{1}(x) f_{2}(t-x) d x \\
& =f_{1}(t) * f_{2}(t)
\end{aligned}
$$

Special case :-

$$
\begin{aligned}
& f_{1}(t)=0 \text { for } t<0 \\
& f_{2}(t)=0 \text { for } t<0
\end{aligned}
$$

Then

$$
f(t)=f_{1}(t) * f_{2}(t)=\frac{1}{\sqrt{2 \pi}} \int_{0}^{t} f_{1}(x) f_{2}(t-x) d x
$$

Proof :- $f(t)=\frac{1}{\sqrt{2 \pi}}\left[\int_{-\infty}^{0} f_{1}(x) f_{2}(t-x) d x+\int_{0}^{t} f_{1}(x) f_{2}(t-x) d x+\int_{t}^{\infty} f_{1}(x) f_{2}(t-x) d x\right]=I_{1}+I_{2}+I_{3}$

Now $I_{1}=0 \quad\left[\because f_{1}(x)=0\right.$ for $\left.x<0\right]$

and $I_{3}=\frac{1}{\sqrt{2 \pi}} \int_{t}^{\infty} f_{1}(x) f_{2}(t-x) d x$

Now when $t<0 \Rightarrow f_{2}(t)=0$

$\therefore \quad$ when $t-x<0 \Rightarrow f_{2}(t-x)=0$

$\Rightarrow \quad f_{2}(t-x)=0$ for $t<x$, i.e. $x>t$

So $I_{3}=0 \quad\left[\because f_{2}(t-x)=0\right.$ for $\left.x>t\right]$

Using (2) \& (3) in (1), we get $f(t)=\frac{1}{\sqrt{2 \pi}} \int_{0}^{t} f_{1}(x) f_{2}(t-x) d x$

## Commutative Property:

$$
f_{1}(t) * f_{2}(t)=f_{2}(t) * f_{1}(t)
$$

Proof :- By definition,

$$
f_{1} * f_{2}=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{1}(x) f_{2}(t-x) d x
$$

Put $t-x=y \Rightarrow d x=-d y$; when $x=-\infty, y=\infty$ and when $x=\infty, y=-\infty$

$\Rightarrow \quad f_{1} * f_{2}=\frac{1}{\sqrt{2 \pi}} \int_{\infty}^{-\infty} f_{1}(t-y) f_{2}(y)(-d y)$

$\Rightarrow \quad f_{1} * f_{2}=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{2}(y) f_{1}(t-y) d y=f_{2} * f_{1}$

## Associativity property

$$
\left(f_{1} *
f_{2}\right) * f_{3}=f_{1} *\left(f_{2} * f_{3}\right)
$$

Take $g(t)=f_{1} * f_{2}$ and $h(t)=f_{2} * f_{3}$. Then (1) becomes $g(t) * f_{3}=f_{1} * h(t)$, which follows on writing out the double integrals and interchanging the order of integration.

## Convolution Theorem (or Falting Theorem) on Fourier Transforms

If $F\left[f_{1}(t)\right]=F_{1}(s)$ and $F\left[f_{2}(t)\right]=F_{2}(s)$, then

$$
F\left[f_{1}(t) * f_{2}(t)\right]=F_{1}(s) F_{2}(s)
$$

where

$$
f_{1}(t) * f_{2}(t)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{1}(x) f_{2}(t-x) d x
$$

Proof :- We have, by the definition of the F.T.,

$$
\begin{aligned}
F\left[f_{1}(t) * f_{2}(t)\right] & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty}\left[f_{1}(t) * f_{2}(t)\right] e^{-i s t} d t \\
& =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty}\left[\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{1}(x) f_{2}(t-x) d x\right] e^{-i s t} d t
\end{aligned}
$$

[by using the definition of convolution] Changing the order of integration, we get

$$
\begin{aligned}
F\left[f_{1}(t) * f_{2}(t)\right] & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{1}(x)\left[\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{2}(t-x) e^{-i s t} d t\right] d x \\
& =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{1}(x)\left[\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{2}(t-x) e^{-i s(t-x)} e^{-i s x} d t\right] d x \\
& =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{1}(x)\left[\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{2}(y) e^{-i s y} d y\right] e^{-i s x} d x \\
\Rightarrow \quad F\left[f_{1}(t) * f_{2}(t)\right] & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{1}(x) e^{-i s x} F_{2}(s) d x \\
& =F_{2}(s) \frac{1}{\sqrt{2 \pi}}
\int_{-\infty}^{\infty} f_{1}(x) e^{-i s x} d x \\
\Rightarrow \quad F\left[f_{1}(t) * f_{2}(t)\right] & =F_{1}(s) F_{2}(s)
\end{aligned}
$$

* The convolution can be used to obtain the Fourier transform of the product of two functions. By the definition of the F.T., we have

$$
F\left[f_{1}(t) f_{2}(t)\right]=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{1}(x) f_{2}(x) e^{-i s x} d x
$$

By using the inverse F.T. of $f_{2}(x)$, we get

$$
\begin{aligned}
F\left[f_{1}(t) f_{2}(t)\right] & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{1}(x)\left[\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} F_{2}\left(s^{\prime}\right) e^{i s^{\prime} x} d s^{\prime}\right] e^{-i s x} d x \\
& =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} d s^{\prime} F_{2}\left(s^{\prime}\right)\left[\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f_{1}(x) e^{-i\left(s-s^{\prime}\right) x} d x\right] \\
\Rightarrow \quad F\left[f_{1}(t) f_{2}(t)\right] & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} F_{2}\left(s^{\prime}\right) F_{1}\left(s-s^{\prime}\right) d s^{\prime} \\
& =F_{2}(s) * F_{1}(s) \quad[\text { by using the definition of convolution }] \\
\Rightarrow \quad F\left[f_{1}(t) f_{2}(t)\right] & =F_{2} * F_{1}=F_{1} * F_{2}
\end{aligned}
$$

If $F(z)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t) e^{-i z t} d t$, then the inverse function is $f(t)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} F(z) e^{i z t} d z$

Example :- Find the F.T. of $f(t)=e^{-|t|}$ and verify the inverse transform.

Sol. Now $|t|=t$ for $t>0$ and $|t|=-t$ for $t<0$. So

$$
\begin{aligned}
& f(t)=\left\{\begin{array}{ll}
e^{t}, & t<0 \\
e^{-t}, & t>0
\end{array}\right.
\\ & \therefore \quad \mathrm{F}(\mathrm{z})=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \mathrm{f}(\mathrm{t}) \mathrm{e}^{-\mathrm{izt}} \mathrm{dt} \\ & =\frac{1}{\sqrt{2 \pi}}\left[\int_{-\infty}^{0} \mathrm{e}^{\mathrm{t}(1-\mathrm{iz})} \mathrm{dt}+\int_{0}^{\infty} \mathrm{e}^{-\mathrm{t}(1+\mathrm{iz})} \mathrm{dt}\right] \end{aligned} $$ $$ =\frac{1}{\sqrt{2 \pi}}\left[\frac{1}{1-\mathrm{iz}}+\frac{1}{1+\mathrm{iz}}\right]=\frac{1}{\sqrt{2 \pi}} \frac{2}{\left(1+\mathrm{z}^{2}\right)} $$ We can invert the transform using contour integration (Residue theorem). First consider $\mathrm{t}>0$, then $$ \begin{aligned} \mathrm{f}(\mathrm{t}) & =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi}} \frac{2}{1+\mathrm{z}^{2}} \mathrm{e}^{\mathrm{izt}} \mathrm{dz}=\frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\mathrm{e}^{\mathrm{i} z \mathrm{t}}}{1+\mathrm{z}^{2}} \mathrm{dz} \\ & =\frac{1}{\pi} \lim _{\mathrm{R} \rightarrow \infty} \oint_{\mathrm{C}} \frac{\mathrm{e}^{\mathrm{i} z \mathrm{t}}}{1+\mathrm{z}^{2}} \mathrm{dz} \end{aligned} $$ where $\mathrm{c}$ is the contour shown in figure. There is a simple pole at $z=i$ and $\operatorname{Res}(z=i)=\operatorname{Lim}_{z \rightarrow i} \frac{(z-i) e^{i z t}}{(z-i)(z+i)}=\frac{1}{2 i} e^{-t}$ Therefore by Residue theorem, $$ \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\mathrm{e}^{\mathrm{i} z \mathrm{t}}}{1+\mathrm{z}^{2}} \mathrm{dz}=\frac{1}{\pi} 2 \pi \mathrm{i}\left(\frac{\mathrm{e}^{-\mathrm{t}}}{2 \mathrm{i}}\right)=\mathrm{e}^{-\mathrm{t}} $$ Consider $\mathrm{t}<0$, then we choose the contour with a semi-circular arc lying below the $\mathrm{x}$-axis where $\mathrm{c}$ is the contour shown in figure. Therefore there is a simple pole at $\mathrm{z}=-\mathrm{i}$. 
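As a quick numerical aside before completing the $t<0$ case: the forward transform obtained above, $F(z)=\sqrt{2 / \pi} /\left(1+z^{2}\right)$, can be confirmed by direct quadrature of the defining integral. This is only a sketch; the cutoff, grid spacing, sample points, and tolerance below are arbitrary choices.

```python
import numpy as np

# Riemann-sum quadrature of F(z) = (1/sqrt(2*pi)) * int f(t) e^{-izt} dt
# for f(t) = exp(-|t|); the integrand decays like e^{-|t|}, so the
# cutoff at |t| = 40 is ample.
dt = 1e-3
t = np.arange(-40.0, 40.0, dt)
f = np.exp(-np.abs(t))

for z in [0.0, 0.5, 2.0]:
    F_num = np.sum(f * np.exp(-1j * z * t)) * dt / np.sqrt(2 * np.pi)
    F_exact = np.sqrt(2 / np.pi) / (1 + z**2)
    assert abs(F_num - F_exact) < 1e-4
```

The same quadrature applied to $F(z)$ itself, with $e^{+izt}$ in the integrand, reproduces $f(t)$; that inversion is what the residue calculation here establishes in closed form.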
$\operatorname{Res}(z=-i)=\operatorname{Lim}_{z \rightarrow-i} \frac{(z+i) e^{i z t}}{(z+i)(z-i)}=\frac{e^{t}}{-2 i}$ But in this case the contour is traversed in the clockwise direction, hence the desired result is $$ \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\mathrm{e}^{\mathrm{i} z \mathrm{t}}}{1+\mathrm{z}^{2}} \mathrm{dz}=\frac{-1}{\pi} 2 \pi \mathrm{i}\left(\frac{-\mathrm{e}^{\mathrm{t}}}{2 \mathrm{i}}\right)=\mathrm{e}^{\mathrm{t}} $$ At $t=0$, we can evaluate directly $$ \begin{aligned} \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{1}{1+\mathrm{z}^{2}} \mathrm{dz} & =\frac{1}{\pi}\left(\tan ^{-1} \mathrm{z}\right)_{-\infty}^{\infty}=\frac{1}{\pi}\left[\frac{\pi}{2}-\left(\frac{-\pi}{2}\right)\right] \\ & =1 \end{aligned} $$ Example : Find the F.T. of $f(t)=\left\{\begin{array}{cc}\sin w t, & 0<t<\infty \\ 0, & \text { otherwise }\end{array}\right.$ or $\quad \mathrm{f}(\mathrm{t})=\sin (\mathrm{wt}) \mathrm{H}(\mathrm{t})$ where $$ H(t)= \begin{cases}1, & \mathrm{t}>0 \\ 0, & \mathrm{t}<0\end{cases} $$ is the unit step function or Heaviside's unit step function. Sol.
Now $F(z)=\frac{1}{\sqrt{2 \pi}} \int_{0}^{\infty} \sin w t e^{-i z t} d t$ $$ \begin{aligned} & =\frac{1}{\sqrt{2 \pi}} \int_{0}^{\infty}\left[\frac{e^{i w t}-e^{-i w t}}{2 i}\right] e^{-i z t} d t \\ & =\frac{1}{\sqrt{2 \pi}} \cdot \frac{1}{2 i} \int_{0}^{\infty}\left[e^{i t(w-z)}-e^{-i t(w+z)}\right] d t \\ & =\frac{1}{\sqrt{2 \pi}} \cdot \frac{1}{2 \mathrm{i}}\left[\frac{e^{i t(w-z)}}{i(w-z)}+\frac{e^{-i t(w+z)}}{i(w+z)}\right]_{0}^{\infty} \\ & =\frac{1}{\sqrt{2 \pi}} \cdot \frac{1}{2 \mathrm{i}}\left[\frac{1}{i(z-w)}-\frac{1}{i(w+z)}\right] \\ & =\frac{-1}{2 i^{2} \sqrt{2 \pi}}\left[\frac{-1}{z-w}+\frac{1}{z+w}\right]=\frac{1}{2 \sqrt{2 \pi}}\left[\frac{-z-w+z-w}{z^{2}-w^{2}}\right] \\ & =\frac{-1}{\sqrt{2 \pi}}\left(\frac{w}{z^{2}-w^{2}}\right) \end{aligned} $$ Here $F(z)=\frac{-w}{\sqrt{2 \pi}\left(z^{2}-w^{2}\right)}$ is analytic in the $z$-plane except at $z= \pm w$. If $\mathbf{t}>\mathbf{0}:-\mathrm{f}(\mathrm{t})=\frac{-1}{2 \pi} \operatorname{Lim}_{\mathrm{R} \rightarrow \infty} \oint_{\mathrm{C}} \frac{w \mathrm{e}^{\mathrm{izt}}}{\mathrm{z}^{2}-\mathrm{w}^{2}} \mathrm{dz}$ where $\mathrm{c}$ is the contour. There are two simple poles, $\mathrm{z}=\mathrm{w}$ and $\mathrm{z}=-\mathrm{w}$, inside $\mathrm{c}$.
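A numerical aside before the residues are evaluated: the closed form $F(z)=-w /\left(\sqrt{2 \pi}\left(z^{2}-w^{2}\right)\right)$ can be checked by quadrature. Since $\int_{0}^{\infty} \sin w t\, e^{-i z t} d t$ converges only as an improper integral, the sketch below damps the integrand by $e^{-\varepsilon t}$ (an Abel-type regularization), which amounts to evaluating the closed form at $z-i \varepsilon$; the values of $w$, $\varepsilon$, the grid, and the tolerance are arbitrary choices.

```python
import numpy as np

# Damped quadrature check of F(z) = -w / (sqrt(2*pi) * (z**2 - w**2))
# for f(t) = H(t) sin(w t).  The factor exp(-eps*t) makes the integral
# absolutely convergent; the damped integral equals the closed form
# with z replaced by z - i*eps.
w, eps = 2.0, 0.05
dt = 2e-4
t = np.arange(0.0, 400.0, dt)
f = np.sin(w * t) * np.exp(-eps * t)

def F_num(z):
    return np.sum(f * np.exp(-1j * z * t)) * dt / np.sqrt(2 * np.pi)

def F_damped(z):
    zz = z - 1j * eps  # damping shifts z off the real-axis poles
    return -w / (np.sqrt(2 * np.pi) * (zz**2 - w**2))

for z in [0.3, 1.0, 3.5]:
    assert abs(F_num(z) - F_damped(z)) < 1e-3
```

As $\varepsilon \rightarrow 0^{+}$ the damped transform tends to the formula above away from $z=\pm w$, which is the sense in which the poles on the real axis are treated as enclosed by the contour in what follows.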
where $f(t)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} F(z) e^{i z t} d z$ $$ =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \frac{-\mathrm{w}}{\sqrt{2 \pi}\left(\mathrm{z}^{2}-\mathrm{w}^{2}\right)} \mathrm{e}^{\mathrm{izt}} \mathrm{dz} $$ $\Rightarrow \quad \mathrm{f}(\mathrm{t})=\frac{-1}{2 \pi} \int_{-\infty}^{\infty} \frac{\mathrm{w} \mathrm{e}^{\mathrm{izt}}}{\mathrm{z}^{2}-\mathrm{w}^{2}} \mathrm{dz}$ $\therefore \quad \frac{-1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi}} \frac{\mathrm{w} \mathrm{e}^{\mathrm{izt}}}{\mathrm{z}^{2}-\mathrm{w}^{2}} \mathrm{dz}=\frac{-1}{2 \pi} \operatorname{Lim}_{\mathrm{R} \rightarrow \infty} \oint_{\mathrm{C}} \frac{\mathrm{w} \mathrm{e}^{\mathrm{izt}}}{\mathrm{z}^{2}-\mathrm{w}^{2}} \mathrm{dz}$ $\operatorname{Res}(z=w)=\operatorname{Lim}_{z \rightarrow w}(z-w) \frac{w e^{i z t}}{(z-w)(z+w)}=\frac{w e^{i w t}}{2 w}$ $$ =\frac{e^{i w t}}{2} $$ $\operatorname{Res}(z=-w)=\operatorname{Lim}_{z \rightarrow-w}(z+w) \frac{w e^{i z t}}{(z-w)(z+w)}=\frac{w e^{-i w t}}{-2 w}$ $$ =\frac{-1}{2} \mathrm{e}^{-\mathrm{iwt}} $$ Therefore by the Residue theorem, $$ \begin{aligned} & \frac{-1}{2 \pi} \operatorname{Lim}_{R \rightarrow \infty} \oint_{C} \frac{w e^{i z t}}{z^{2}-w^{2}} d z=\frac{-1}{2 \pi}(2 \pi i)(\text { sum of residues }) \\ & =-i\left[\frac{e^{i w t}-e^{-i w t}}{2}\right] \\ & \Rightarrow \quad f(t)=-i(i \sin w t)=\sin w t \end{aligned} $$ If $\mathbf{t}<\mathbf{0}$, we choose the contour with a semi-circular arc below the $\mathrm{x}$-axis, and since there are no poles inside the contour, the result is zero. ## LESSON-2 ## Solution of Ordinary Differential Equations Consider the $n$th order differential equation $$ a_{n} \frac{d^{n} y}{d t^{n}}+a_{n-1} \frac{d^{n-1} y}{d t^{n-1}}+\ldots .+a_{1} \frac{d y}{d t}+a y=f(t) $$ Taking the F.T.
of both sides, $$ \left[\mathrm{a}_{\mathrm{n}}(\mathrm{is})^{\mathrm{n}}+\mathrm{a}_{\mathrm{n}-1}(\mathrm{is})^{\mathrm{n}-1}+\ldots+\mathrm{a}_{1}(\mathrm{is})+\mathrm{a}\right] \mathrm{F}[\mathrm{y}(\mathrm{t})]=\mathrm{F}[\mathrm{f}(\mathrm{t})] $$ Let $\quad \mathrm{F}[\mathrm{f}(\mathrm{t})]=\mathrm{G}(\mathrm{s})$ $$ \begin{aligned} & F[y(t)]=Y(s) \\ & P(i s)=a_{n}(i s)^{n}+a_{n-1}(i s)^{n-1}+\ldots .+a_{1}(i s)+a \end{aligned} $$ where $P(i s)$ is a polynomial in $(i s)$. Then we have $Y(s)=\frac{G(s)}{P(i s)}$ Taking the inverse F.T., the solution is given by $$ y(t)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} Y(s) e^{i s t} d s $$ Example:- Solve using the F.T. technique $$ \frac{d^{2} y}{d t^{2}}+3 \frac{d y}{d t}+2 y=e^{-|t|} \quad \ldots(1) $$ Solution :- Taking the F.T. on both sides, $$ \begin{aligned} & {\left[(\mathrm{i} z)^{2}+3(\mathrm{i} z)+2\right] \mathrm{F}[\mathrm{y}(\mathrm{t})]=\mathrm{F}\left[\mathrm{e}^{-|\mathrm{t}|}\right] } \\ \Rightarrow \quad & \mathrm{P}(\mathrm{i} z) \mathrm{F}[\mathrm{y}(\mathrm{t})]=\frac{1}{\sqrt{2 \pi}} \frac{2}{1+\mathrm{z}^{2}} \\ \Rightarrow \quad & \mathrm{Y}(\mathrm{z})=\frac{1}{\sqrt{2 \pi}} \frac{2}{\left(1+\mathrm{z}^{2}\right) \mathrm{P}(\mathrm{iz})} \text { where } \mathrm{Y}(\mathrm{z})=\mathrm{F}[\mathrm{y}(\mathrm{t})] \\ \Rightarrow \quad & \mathrm{y}(\mathrm{t})=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \frac{2}{\sqrt{2 \pi}} \frac{\mathrm{e}^{\mathrm{izt}} \mathrm{dz}}{\left(1+\mathrm{z}^{2}\right)\left[(\mathrm{iz})^{2}+3 \mathrm{iz}+2\right]} \\ & =\frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\mathrm{e}^{\mathrm{izt}} \mathrm{dz}}{\left(\mathrm{z}^{2}+1\right)\left(-\mathrm{z}^{2}+3 \mathrm{iz}+2\right)} \end{aligned} $$ $$ \begin{aligned} & =\frac{-1}{\pi} \int_{-\infty}^{\infty} \frac{\mathrm{e}^{\mathrm{izt}} \mathrm{dz}}{\left(\mathrm{z}^{2}+1\right)\left(\mathrm{z}^{2}-3 \mathrm{iz}-2\right)} \\ & =\frac{-1}{\pi} \int_{-\infty}^{\infty}
\frac{\mathrm{e}^{\mathrm{i} z \mathrm{t}} \mathrm{dz}}{(\mathrm{z}-\mathrm{i})(\mathrm{z}+\mathrm{i})(\mathrm{z}-\mathrm{i})(\mathrm{z}-2 \mathrm{i})} \\ & =\frac{-1}{\pi} \int_{-\infty}^{\infty} \frac{\mathrm{e}^{\mathrm{i} z \mathrm{t}} \mathrm{dz}}{(\mathrm{z}+\mathrm{i})(\mathrm{z}-\mathrm{i})^{2}(\mathrm{z}-2 \mathrm{i})} \end{aligned} $$ For $\mathbf{t}>\mathbf{0}$ The singularities within the contour are a simple pole at $\mathrm{z}=2 \mathrm{i}$ and a double pole at $\mathrm{z}=\mathrm{i}$. $$ \begin{aligned} \therefore \quad \text { Res. }(z=2 i) & =\operatorname{Lim}_{z \rightarrow 2 i} \frac{(z-2 i) e^{i z t}}{(z+i)(z-i)^{2}(z-2 i)} \\ & =\operatorname{Lt}_{z \rightarrow 2 i} \frac{e^{i z t}}{(z+i)(z-i)^{2}} \\ & =\frac{e^{-2 t}}{3 i \cdot i^{2}}=\frac{i}{3} e^{-2 t} \end{aligned} $$ Res. $(z=i)=\operatorname{Lim}_{z \rightarrow i} \frac{d}{d z} \frac{(z-i)^{2} e^{i z t}}{(z+i)(z-i)^{2}(z-2 i)}$ $$ =\operatorname{Lim}_{z \rightarrow i} \frac{d}{d z} \frac{e^{i z t}}{(z+i)(z-2 i)} $$ $\Rightarrow \quad$ Res. $(z=i)=\operatorname{Lim}_{z \rightarrow i} \frac{(z+i)(z-2 i)(i t) e^{i z t}-e^{i z t}(2 z+i-2 i)}{[(z+i)(z-2 i)]^{2}}$ $$ \begin{aligned} & =\underset{z \rightarrow i}{\operatorname{Lt}} \frac{(z+i)(z-2 i)(i t) e^{i z t}-e^{i z t}(2 z-i)}{(z+i)^{2}(z-2 i)^{2}} \\ & =\frac{2 i \cdot(-i)(i t) e^{-t}-e^{-t} i}{4 i^{2} \cdot(-i)^{2}} \\ & =\frac{2 i t e^{-t}-i e^{-t}}{4} \\ \therefore \quad y(t) & =\frac{-1}{\pi} \cdot 2 \pi i\left[\frac{i}{3} e^{-2 t}+\frac{2 i t e^{-t}-i e^{-t}}{4}\right] \end{aligned} $$ $$ \begin{aligned} & =-2 i\left[\frac{4 i e^{-2 t}+6 i t e^{-t}-3 i e^{-t}}{12}\right] \\ & =\frac{-i}{6}\left[4 i e^{-2 t}+6 i t e^{-t}-3 i e^{-t}\right] \\ & =\frac{1}{6}\left(4 e^{-2 t}+6 t e^{-t}-3 e^{-t}\right) \\ & =\frac{2}{3} \mathrm{e}^{-2 t}+t \mathrm{e}^{-t}-\frac{1}{2} \mathrm{e}^{-t} \quad \ldots(2) \end{aligned} $$ Verification :- Putting (2) into the L.H.S. of the D.E. (1), we get $$ \begin{aligned} \text { L.H.S.
} & =\frac{d^{2} y}{d t^{2}}+3 \frac{d y}{d t}+2 y=\frac{d}{d t}\left[\frac{-4}{3} e^{-2 t}+\frac{3}{2} e^{-t}-t e^{-t}\right] \\ & +3\left[\frac{-4}{3} e^{-2 t}+\frac{3}{2} e^{-t}-t e^{-t}\right]+\frac{4}{3} e^{-2 t}+2 t e^{-t}-e^{-t} \\ \text { L.H.S. } & =\frac{8}{3} e^{-2 t}-\frac{3}{2} e^{-t}-e^{-t}+t e^{-t}-4 e^{-2 t}+\frac{9}{2} e^{-t}-3 t e^{-t} \\ & +\frac{4}{3} e^{-2 t}+2 t e^{-t}-e^{-t}=e^{-t}=\text { R.H.S. } \end{aligned} $$ For $\mathbf{t}<\mathbf{0}$ $$ \text { Res. } \begin{aligned} (z=-i) & =\operatorname{Lim}_{z \rightarrow-i} \frac{(z+i) e^{i z t}}{(z+i)(z-i)^{2}(z-2 i)} \\ & =\frac{e^{t}}{(-2 i)^{2}(-3 i)}=\frac{e^{t}}{4(-1)(-3 i)}=\frac{e^{t}}{12 i} \end{aligned} $$ So $\quad y(t)=\frac{-1}{\pi}(-2 \pi i) \frac{e^{t}}{12 i}=\frac{e^{t}}{6} \quad \ldots(3)$ Verification :- Putting (3) into the L.H.S. of (1), $$ \begin{aligned} \text { L.H.S. } & =\frac{\mathrm{d}^{2} y}{\mathrm{dt}^{2}}+3 \frac{\mathrm{dy}}{\mathrm{dt}}+2 \mathrm{y}=\frac{\mathrm{d}}{\mathrm{dt}}\left[\frac{\mathrm{e}^{\mathrm{t}}}{6}\right]+3 \frac{\mathrm{e}^{\mathrm{t}}}{6}+\frac{2 \mathrm{e}^{\mathrm{t}}}{6} \\ & =\frac{\mathrm{e}^{\mathrm{t}}}{6}+\frac{3 \mathrm{e}^{\mathrm{t}}}{6}+\frac{2 \mathrm{e}^{\mathrm{t}}}{6}=\frac{6 \mathrm{e}^{\mathrm{t}}}{6}=\mathrm{e}^{\mathrm{t}} \text {. Hence verified. } \end{aligned} $$ Example:- Solve using F.T. techniques $$ \frac{\mathrm{d}^{2} \mathrm{y}}{\mathrm{dt}^{2}}+3 \frac{\mathrm{dy}}{\mathrm{dt}}+2 \mathrm{y}=\mathrm{H}(\mathrm{t}) \sin \mathrm{wt} \quad \ldots(1) $$ Solution :- Taking the F.T.
on both sides, $$ \begin{aligned} & {\left[(i z)^{2}+3 i z+2\right] F[y(t)]=\frac{1}{\sqrt{2 \pi}} \frac{w}{w^{2}-z^{2}}} \\ & \text { Then } \mathrm{y}(\mathrm{t})=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi}}\left(\frac{\mathrm{w}}{\mathrm{w}^{2}-\mathrm{z}^{2}}\right) \frac{\mathrm{e}^{\mathrm{izt}} \mathrm{dz}}{\left(-\mathrm{z}^{2}+3 \mathrm{iz}+2\right)} \\ & \Rightarrow \quad \mathrm{y}(\mathrm{t})=\frac{\mathrm{w}}{2 \pi} \int_{-\infty}^{\infty} \frac{\mathrm{e}^{\mathrm{izt}} \mathrm{dz}}{\left(\mathrm{z}^{2}-\mathrm{w}^{2}\right)\left(\mathrm{z}^{2}-3 \mathrm{iz}-2\right)} \\ & =\frac{\mathrm{w}}{2 \pi} \int_{-\infty}^{\infty} \frac{\mathrm{e}^{\mathrm{izt}} \mathrm{dz}}{\left(\mathrm{z}^{2}-\mathrm{w}^{2}\right)(\mathrm{z}-\mathrm{i})(\mathrm{z}-2 \mathrm{i})} \end{aligned} $$ ## For $\mathbf{t}>\mathbf{0}$ In this case the singularities are at $\mathrm{z}= \pm \mathrm{w}$ (on the real axis, taken inside the closed contour) and at $\mathrm{z}=\mathrm{i}, \mathrm{z}=2 \mathrm{i}$ in the upper half plane. Res. $\quad(z=2 i)=\operatorname{Lim}_{z \rightarrow 2 i}\left[\frac{(z-2 i) e^{i z t}}{(z-w)(z+w)(z-i)(z-2 i)}\right]$ $$ =\frac{e^{-2 t}}{(2 i-w)(2 i+w) i}=\frac{e^{-2 t}}{-\left(4+w^{2}\right) i}=\frac{i e^{-2 t}}{w^{2}+4} $$ Res. $\quad(z=i)=\operatorname{Lim}_{z \rightarrow i}\left[\frac{(z-i) e^{i z t}}{(z-i)(z-2 i)\left(z^{2}-w^{2}\right)}\right]$ $$ =\frac{\mathrm{e}^{-\mathrm{t}}}{-\mathrm{i}\left(-1-\mathrm{w}^{2}\right)}=\frac{\mathrm{e}^{-\mathrm{t}}}{\mathrm{i}\left(\mathrm{w}^{2}+1\right)}=\frac{-\mathrm{i} \mathrm{e}^{-\mathrm{t}}}{\mathrm{w}^{2}+1} $$ Res. $\quad(z=w)=\operatorname{Lim}_{z \rightarrow w} \frac{(z-w) e^{i z t}}{(z-w)(z+w)(z-i)(z-2 i)}$ $$ =\frac{e^{i w t}}{2 w(w-i)(w-2 i)}=\frac{e^{i w t}}{2 w\left(w^{2}-3 i w-2\right)} $$ Res.
$\quad(z=-w)=\operatorname{Lim}_{z \rightarrow-w} \frac{(z+w) e^{i z t}}{(z-w)(z+w)(z-i)(z-2 i)}$ $$ =\frac{e^{-i w t}}{(-2 w)(-w-i)(-w-2 i)}=\frac{e^{-i w t}}{-2 w(w+i)(w+2 i)} $$ $\Rightarrow \quad$ Res. $(\mathrm{z}=-\mathrm{w})=\frac{\mathrm{e}^{-\mathrm{iwt}}}{-2 \mathrm{w}\left(\mathrm{w}^{2}+3 \mathrm{iw}-2\right)}=\frac{-\mathrm{e}^{-\mathrm{iwt}}}{2 \mathrm{w}\left(\mathrm{w}^{2}+3 \mathrm{iw}-2\right)}$ $$ \begin{aligned} \therefore \mathrm{y}(\mathrm{t}) & =\frac{\mathrm{w}}{2 \pi} \cdot 2 \pi \mathrm{i}\left[\frac{\mathrm{ie}^{-2 \mathrm{t}}}{\mathrm{w}^{2}+4}-\frac{\mathrm{ie}^{-\mathrm{t}}}{\mathrm{w}^{2}+1}+\frac{\mathrm{e}^{\mathrm{iwt}}}{2 \mathrm{w}(\mathrm{w}-\mathrm{i})(\mathrm{w}-2 \mathrm{i})}-\frac{\mathrm{e}^{-\mathrm{iwt}}}{2 \mathrm{w}(\mathrm{w}+\mathrm{i})(\mathrm{w}+2 \mathrm{i})}\right] \\ & =\frac{-\mathrm{we}^{-2 \mathrm{t}}}{\mathrm{w}^{2}+4}+\frac{\mathrm{we}^{-\mathrm{t}}}{\mathrm{w}^{2}+1}+\frac{\mathrm{i}}{2} \frac{\mathrm{e}^{\mathrm{iwt}}}{(\mathrm{w}-\mathrm{i})(\mathrm{w}-2 \mathrm{i})}-\frac{\mathrm{i}}{2} \frac{\mathrm{e}^{-\mathrm{iwt}}}{(\mathrm{w}+\mathrm{i})(\mathrm{w}+2 \mathrm{i})} \end{aligned} $$ ## For verification :- $$ \begin{aligned} & y^{\prime}(t)=\frac{2 w e^{-2 t}}{w^{2}+4}-\frac{w e^{-t}}{w^{2}+1}-\frac{w e^{i w t}}{2(w-i)(w-2 i)}-\frac{w e^{-i w t}}{2(w+i)(w+2 i)} \\ & \Rightarrow \quad y^{\prime \prime}(t)=\frac{-4 \mathrm{we}^{-2 t}}{\mathrm{w}^{2}+4}+\frac{\mathrm{we}^{-\mathrm{t}}}{\mathrm{w}^{2}+1}-\frac{\mathrm{i} \mathrm{w}^{2} \mathrm{e}^{\mathrm{iwt}}}{2(\mathrm{w}-\mathrm{i})(\mathrm{w}-2 \mathrm{i})}+\frac{\mathrm{i} \mathrm{w}^{2} \mathrm{e}^{-\mathrm{iwt}}}{2(\mathrm{w}+\mathrm{i})(\mathrm{w}+2 \mathrm{i})} \end{aligned} $$ Then $$ \begin{aligned} & y^{\prime \prime}+3 y^{\prime}+2 y=\frac{-4 w e^{-2 t}}{w^{2}+4}+\frac{w e^{-t}}{w^{2}+1}+\frac{i w^{2} e^{-i w t}}{2(w+i)(w+2 i)} \\ & \frac{-i w^{2} e^{i w t}}{2(w-i)(w-2 i)}+\frac{6 w e^{-2 t}}{w^{2}+4}-\frac{3 w
e^{-t}}{w^{2}+1}-\frac{3 w e^{i w t}}{2(w-i)(w-2 i)} \\ & \frac{-3 w e^{-i w t}}{2(w+i)(w+2 i)}-\frac{2 w e^{-2 t}}{w^{2}+4}+\frac{2 w e^{-t}}{w^{2}+1}+\frac{2 i e^{i w t}}{2(w-i)(w-2 i)} \frac{-2 i e^{-i w t}}{2(w+i)(w+2 i)} \\ & \Rightarrow \quad y^{\prime \prime}+3 y^{\prime}+2 y=\frac{e^{-i w t}}{2(w+i)(w+2 i)}\left[i w^{2}-3 w-2 i\right] \\ & +\frac{\mathrm{e}^{\mathrm{iwt}}}{2(\mathrm{w}-\mathrm{i})(\mathrm{w}-2 \mathrm{i})}\left[-\mathrm{i} \mathrm{w}^{2}-3 \mathrm{w}+2 \mathrm{i}\right] \\ & =\frac{i e^{-i w t}}{2(w+i)(w+2 i)}\left(w^{2}+3 i w-2\right)-\frac{i e^{i w t}\left(w^{2}-3 i w-2\right)}{2(w-i)(w-2 i)} \\ & =\frac{-\mathrm{i}}{2}\left(\mathrm{e}^{\mathrm{i} w t}-\mathrm{e}^{-\mathrm{iwt}}\right)=-\mathrm{i}^{2}\left(\frac{\mathrm{e}^{\mathrm{i} w t}-\mathrm{e}^{-\mathrm{iwt}}}{2 \mathrm{i}}\right) \end{aligned} $$ $\Rightarrow \quad \mathrm{y}^{\prime \prime}+3 \mathrm{y}^{\prime}+2 \mathrm{y}=\sin \mathrm{wt}=\mathrm{H}(\mathrm{t}) \sin \mathrm{wt}$, so the solution is verified for $\mathrm{t}>0$. For $\mathbf{t}<\mathbf{0}$, there is no pole inside the lower half plane, so the residue sum is zero and $\mathrm{y}(\mathrm{t})=0$. Since $\mathrm{H}(\mathrm{t})=0$ for $\mathrm{t}<0$, the R.H.S. of (1) is also zero, and the equation is verified for $\mathrm{t}<0$ as well. Example: Find the FST of $\frac{e^{-a t}}{t}$ Solution :- The sine transform of the function $f(t)=\frac{e^{-a t}}{t}$ is $$ \begin{aligned} I=F_{S}[f(t)] & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(t) \sin s t d t \\ & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{e^{-a t}}{t} \sin s t d t \end{aligned} $$ Differentiating w.r.t.
s, we get $$ \begin{aligned} \frac{\mathrm{dI}}{\mathrm{ds}}=\frac{\mathrm{d}}{\mathrm{ds}} \mathrm{F}_{\mathrm{s}}(\mathrm{f}(\mathrm{t})) & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\mathrm{e}^{-\mathrm{at}}}{\mathrm{t}} \cdot \mathrm{t} \cos \mathrm{st} \mathrm{dt}=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \mathrm{e}^{-\mathrm{at}} \cos \mathrm{st} \mathrm{dt} \\ & =\sqrt{\frac{2}{\pi}}\left[\frac{\mathrm{e}^{-\mathrm{at}}}{\mathrm{a}^{2}+\mathrm{s}^{2}}(-\mathrm{a} \cos \mathrm{st}+\mathrm{s} \sin \mathrm{st})\right]_{0}^{\infty} \\ \Rightarrow \quad \frac{\mathrm{dI}}{\mathrm{ds}}=\frac{\mathrm{d}}{\mathrm{ds}} \mathrm{F}_{\mathrm{s}}[\mathrm{f}(\mathrm{t})] & =\sqrt{\frac{2}{\pi}} \cdot \frac{\mathrm{a}}{\mathrm{a}^{2}+\mathrm{s}^{2}} \end{aligned} $$ Integrating, we get $$ \begin{aligned} & \int \frac{d}{d s} F_{s}[f(t)] d s=\sqrt{\frac{2}{\pi}} \int \frac{a}{a^{2}+s^{2}} d s \\ \Rightarrow & I=F_{s}[f(t)]=\sqrt{\frac{2}{\pi}} \tan ^{-1}\left(\frac{s}{a}\right)+A \end{aligned} $$ where $A$ is the constant of integration. For $\mathrm{s}=0$, we get from (2), $$ \mathrm{I}=\sqrt{\frac{2}{\pi}}(0)+\mathrm{A} \Rightarrow \mathrm{I}=\mathrm{A} $$ When $\mathrm{s}=0$, from $(1), \mathrm{I}=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty}(0) \mathrm{dt}=0$ From (3) \& (4), we get $$ \mathrm{A}=0 $$ Hence the required FST of the given function is $$ \mathrm{F}_{\mathrm{s}}\left[\frac{\mathrm{e}^{-\mathrm{at}}}{\mathrm{t}}\right]=\sqrt{\frac{2}{\pi}} \tan ^{-1}\left(\frac{\mathrm{s}}{\mathrm{a}}\right) . $$ ## Solution of partial differential equations (boundary value problems) Example: - Determine the distribution of temperature in the semi-infinite medium $x \geq 0$ when the end $x=0$ is maintained at zero temperature and the initial distribution of temperature is $f(x)$.
## Solution :- The heat equation is given by $$ \frac{\partial \mathrm{u}(\mathrm{x}, \mathrm{t})}{\partial \mathrm{t}}=\mathrm{c}^{2} \frac{\partial^{2} \mathrm{u}(\mathrm{x}, \mathrm{t})}{\partial \mathrm{x}^{2}}, \mathrm{x}>0, \mathrm{t}>0 \quad \ldots(1) $$ where $u(x, t)$ is the distribution of temperature at any point $x$ and time $t$. We want to determine the solution of (1) subject to the initial condition $u(x, 0)=f(x) \quad \ldots(2)$ and the boundary condition $\mathrm{u}(0, \mathrm{t})=0 \quad \ldots(3)$ Since $(u)_{x=0}$ is given, we apply the F.S.T. Denote $F_{s}[u(x, t)]=\bar{u}_{s}(s, t)=\bar{u}_{s} \quad$ and $F_{c}[u(x, t)]=\bar{u}_{c}$ Taking the FST of both sides of (1), we get $$ \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial \mathrm{u}}{\partial \mathrm{t}} \sin \mathrm{sx} \mathrm{dx}=\mathrm{c}^{2} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial^{2} \mathrm{u}}{\partial \mathrm{x}^{2}} \sin \mathrm{sx} \mathrm{dx} $$ $\Rightarrow \quad \sqrt{\frac{2}{\pi}} \frac{\mathrm{d}}{\mathrm{dt}} \int_{0}^{\infty} \mathrm{u} \sin \mathrm{sx} \mathrm{dx}=\frac{\mathrm{d}}{\mathrm{dt}} \mathrm{F}_{\mathrm{s}}[\mathrm{u}(\mathrm{x}, \mathrm{t})]$ $$ =\mathrm{c}^{2}\left\{\sqrt{\frac{2}{\pi}}\left[\frac{\partial \mathrm{u}}{\partial \mathrm{x}} \sin \mathrm{sx}\right]_{0}^{\infty}-\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \mathrm{s} \frac{\partial \mathrm{u}}{\partial \mathrm{x}} \cos \mathrm{sx} \mathrm{dx}\right\} $$ $\Rightarrow \quad \frac{\mathrm{d}}{\mathrm{dt}} \overline{\mathrm{u}}_{\mathrm{s}}=0-\mathrm{c}^{2} \mathrm{~s} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial \mathrm{u}}{\partial \mathrm{x}} \cos \mathrm{sx} \mathrm{dx}$, if $\frac{\partial \mathrm{u}}{\partial \mathrm{x}} \rightarrow 0$ as $\mathrm{x} \rightarrow \infty$ $$ \begin{aligned} & =-\mathrm{c}^{2} \mathrm{~s} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty}
\frac{\partial u}{\partial x} \cos s x d x \\ & =-c^{2} s \sqrt{\frac{2}{\pi}}[u(x, t) \cos s x]_{0}^{\infty}-c^{2} s^{2} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} u \sin s x d x \end{aligned} $$ $$ \frac{\mathrm{d}}{\mathrm{dt}} \overline{\mathrm{u}}_{\mathrm{s}}=\mathrm{c}^{2} \sqrt{\frac{2}{\pi}} \mathrm{s} \mathrm{u}(0, \mathrm{t})-\mathrm{c}^{2} \mathrm{~s}^{2} \overline{\mathrm{u}}_{\mathrm{s}} ; \text { assuming } \mathrm{u} \rightarrow 0 \text { as } \mathrm{x} \rightarrow \infty $$ $\Rightarrow \quad \frac{\mathrm{d} \overline{\mathrm{u}}_{\mathrm{s}}}{\mathrm{dt}}=\mathrm{c}^{2}\left[\sqrt{\frac{2}{\pi}} \mathrm{s} \mathrm{u}(0, \mathrm{t})-\mathrm{s}^{2} \overline{\mathrm{u}}_{\mathrm{s}}\right]$ By using (3), $u(0, t)=0$, we get $\Rightarrow \quad \frac{\mathrm{d}}{\mathrm{dt}} \overline{\mathrm{u}}_{\mathrm{s}}+\mathrm{c}^{2} \mathrm{~s}^{2} \overline{\mathrm{u}}_{\mathrm{s}}=0 \quad \ldots(4)$ Also, taking the FST of (2), we get $$ \begin{aligned} & \mathrm{F}_{\mathrm{s}}[\mathrm{u}(\mathrm{x}, 0)]=\mathrm{F}_{\mathrm{s}}[\mathrm{f}(\mathrm{x})] \\ \Rightarrow \quad & \overline{\mathrm{u}}_{\mathrm{s}}(\mathrm{s}, 0)=\overline{\mathrm{f}}_{\mathrm{s}}(\mathrm{s}) \quad \ldots(5) \end{aligned} $$ From (4), we have $\left(\mathrm{D}+\mathrm{c}^{2} \mathrm{~s}^{2}\right) \overline{\mathrm{u}}_{\mathrm{s}}=0$ The auxiliary equation is $\mathrm{m}+\mathrm{c}^{2} \mathrm{~s}^{2}=0$ $\Rightarrow \quad \mathrm{m}=-\mathrm{c}^{2} \mathrm{~s}^{2}$ The solution of (4) is $$ \overline{\mathrm{u}}_{\mathrm{s}}(\mathrm{s}, \mathrm{t})=\overline{\mathrm{u}}_{\mathrm{s}}=\mathrm{A} \mathrm{e}^{-\mathrm{c}^{2} \mathrm{~s}^{2} \mathrm{t}} \quad(*) $$ To find A, we use (5): from $(*)$, we have $\bar{u}_{s}(s, 0)=A$ $\Rightarrow \quad \mathrm{A}=\overline{\mathrm{f}}_{\mathrm{s}}(\mathrm{s}) \quad[\text {using }(5)]$ Hence the solution is $$ \overline{\mathrm{u}}_{\mathrm{s}}(\mathrm{s}, \mathrm{t})=\overline{\mathrm{f}}_{\mathrm{s}}(\mathrm{s}) \mathrm{e}^{-\mathrm{c}^{2} \mathrm{~s}^{2} \mathrm{t}} $$ Taking the inverse FST, we get $$ \begin{gathered}
\mathrm{F}_{\mathrm{s}}^{-1}\left[\overline{\mathrm{u}}_{\mathrm{s}}(\mathrm{s}, \mathrm{t})\right]=\mathrm{F}_{\mathrm{s}}^{-1}\left[\overline{\mathrm{f}}_{\mathrm{s}}(\mathrm{s}) \mathrm{e}^{-\mathrm{c}^{2} \mathrm{~s}^{2} \mathrm{t}}\right] \\ \Rightarrow \quad \mathrm{u}(\mathrm{x}, \mathrm{t})=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \overline{\mathrm{u}}_{\mathrm{s}} \sin \mathrm{sx} \mathrm{ds} \\ \Rightarrow \quad \mathrm{u}(\mathrm{x}, \mathrm{t})=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \overline{\mathrm{f}}_{\mathrm{s}}(\mathrm{s}) \mathrm{e}^{-\mathrm{c}^{2} \mathrm{~s}^{2} \mathrm{t}} \sin \mathrm{sx} \mathrm{ds} \end{gathered} $$ This is the required solution of the PDE. Example:- The temperature $\mathrm{u}$ in the semi-infinite rod $0 \leq \mathrm{x}<\infty$ (i.e. $\mathrm{x} \geq 0$) is determined by the equation $$ \frac{\partial \mathrm{u}}{\partial \mathrm{t}}=\mathrm{K} \frac{\partial^{2} \mathrm{u}}{\partial \mathrm{x}^{2}} \quad \ldots(1) $$ subject to the conditions (i) $\quad \mathrm{u}=0$ when $\mathrm{t}=0, \mathrm{x} \geq 0$ $$ \text { i.e. } u(x, 0)=0 $$ (ii) $\frac{\partial \mathrm{u}}{\partial \mathrm{x}}=-\mu$ (a constant) when $\mathrm{x}=0, \mathrm{t}>0$ $$ \text { i.e.
} \frac{\partial u}{\partial x}(0, t)=-\mu $$ or $\quad \mathrm{u}_{\mathrm{x}}(0, \mathrm{t})=-\mu$ Solution :- Since $\left(\frac{\partial \mathrm{u}}{\partial \mathrm{x}}\right)_{\mathrm{x}=0}$ is given, we take the Fourier cosine transform of both sides of (1): $$ \begin{aligned} & \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial \mathrm{u}}{\partial \mathrm{t}} \cos \mathrm{sx} \mathrm{dx}=\mathrm{K} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial^{2} \mathrm{u}}{\partial \mathrm{x}^{2}} \cos \mathrm{sx} \mathrm{dx} \\ \Rightarrow \quad & \frac{\mathrm{d}}{\mathrm{dt}} \overline{\mathrm{u}}_{\mathrm{c}}(\mathrm{s}, \mathrm{t})=\mathrm{K} \sqrt{\frac{2}{\pi}}\left[\frac{\partial \mathrm{u}}{\partial \mathrm{x}} \cos \mathrm{sx}\right]_{0}^{\infty}+\mathrm{Ks} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial \mathrm{u}}{\partial \mathrm{x}} \sin \mathrm{sx} \mathrm{dx} \end{aligned} $$ $\Rightarrow \quad \frac{\mathrm{d}}{\mathrm{dt}} \overline{\mathrm{u}}_{\mathrm{c}}=-\mathrm{K} \sqrt{\frac{2}{\pi}}\left(\frac{\partial \mathrm{u}}{\partial \mathrm{x}}\right)_{\mathrm{x}=0}+\mathrm{Ks} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial \mathrm{u}}{\partial \mathrm{x}} \sin \mathrm{sx} \mathrm{dx}$ $\left[\right.$ By assuming $\frac{\partial \mathrm{u}}{\partial \mathrm{x}} \rightarrow 0$ as $\left.\mathrm{x} \rightarrow \infty\right]$ $\Rightarrow \quad \frac{\mathrm{d}}{\mathrm{dt}} \overline{\mathrm{u}}_{\mathrm{c}}=\mathrm{K} \sqrt{\frac{2}{\pi}} \mu+\mathrm{Ks} \sqrt{\frac{2}{\pi}}[\mathrm{u} \sin \mathrm{sx}]_{0}^{\infty}-\mathrm{Ks} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \mathrm{u} \mathrm{s} \cos \mathrm{sx} \mathrm{dx}$ $=\mathrm{K} \sqrt{\frac{2}{\pi}} \mu-\mathrm{Ks}^{2} \overline{\mathrm{u}}_{\mathrm{c}}$, if $\mathrm{u} \rightarrow 0$ as $\mathrm{x} \rightarrow \infty$ $\Rightarrow \quad \frac{\mathrm{d}}{\mathrm{dt}} \overline{\mathrm{u}}_{\mathrm{c}}+\mathrm{Ks}^{2} \overline{\mathrm{u}}_{\mathrm{c}}=\mathrm{K} \mu \sqrt{\frac{2}{\pi}} \quad \ldots(2)$
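The transformed equation just obtained, together with $\overline{\mathrm{u}}_{\mathrm{c}}(\mathrm{s}, 0)=0$ (which follows from condition (i)), can be solved symbolically as a cross-check on the integrating-factor calculation. This is only a sketch using sympy; the symbol and function names are my own choices.

```python
import sympy as sp

# Solve d(uc)/dt + K*s**2*uc = K*mu*sqrt(2/pi) with uc(0) = 0,
# at a fixed transform variable s (uc stands for the cosine transform).
t, K, s, mu = sp.symbols('t K s mu', positive=True)
uc = sp.Function('uc')

ode = sp.Eq(uc(t).diff(t) + K * s**2 * uc(t), K * mu * sp.sqrt(2 / sp.pi))
sol = sp.dsolve(ode, uc(t), ics={uc(0): 0})

# Compare with the closed form sqrt(2/pi)*(mu/s**2)*(1 - exp(-K*s**2*t))
expected = sp.sqrt(2 / sp.pi) * mu / s**2 * (1 - sp.exp(-K * s**2 * t))
assert sp.simplify(sol.rhs - expected) == 0
```

The assertion confirms the particular-plus-homogeneous structure of the solution derived by hand below.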
This is a linear D.E. of the first order in $\overline{\mathrm{u}}_{\mathrm{c}}$. I.F. $=\mathrm{e}^{\int \mathrm{Ks}^{2} \mathrm{dt}}=\mathrm{e}^{\mathrm{Ks}^{2} \mathrm{t}}$ The solution of (2) is $\bar{u}_{c} \cdot e^{K s^{2} t}=\int K \mu \sqrt{\frac{2}{\pi}} e^{K s^{2} t} d t+A$ $\Rightarrow \quad \bar{u}_{c} e^{K s^{2} t}=A+K \mu \sqrt{\frac{2}{\pi}} \frac{e^{K s^{2} t}}{K s^{2}}=A+\frac{\mu}{s^{2}} \sqrt{\frac{2}{\pi}} e^{K s^{2} t}$ $\Rightarrow \quad \bar{u}_{c}=\mathrm{A} \mathrm{e}^{-\mathrm{Ks}^{2} \mathrm{t}}+\sqrt{\frac{2}{\pi}} \frac{\mu}{\mathrm{s}^{2}} \quad \ldots(3)$ Put $\mathrm{t}=0 \Rightarrow \overline{\mathrm{u}}_{\mathrm{c}}(\mathrm{s}, 0)=\mathrm{A}+\sqrt{\frac{2}{\pi}} \frac{\mu}{\mathrm{s}^{2}} \quad \ldots(4)$ From condition (i), $$ \overline{\mathrm{u}}_{\mathrm{c}}(\mathrm{s}, 0)=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \mathrm{u}(\mathrm{x}, 0) \cos \mathrm{sx} \mathrm{dx}=0 \quad \ldots(5) $$ (4), (5) $\Rightarrow \mathrm{A}=-\sqrt{\frac{2}{\pi}} \frac{\mu}{\mathrm{s}^{2}}$ $$ \therefore \quad \text { (3) } \Rightarrow \quad \bar{u}_{c}=\sqrt{\frac{2}{\pi}} \frac{\mu}{\mathrm{s}^{2}}\left(1-\mathrm{e}^{-\mathrm{Ks}^{2} \mathrm{t}}\right) $$ Taking the inverse FCT, $$ \mathrm{u}(\mathrm{x}, \mathrm{t})=\frac{2 \mu}{\pi} \int_{0}^{\infty} \frac{\cos \mathrm{sx}}{\mathrm{s}^{2}}\left(1-\mathrm{e}^{-\mathrm{Ks}^{2} \mathrm{t}}\right) \mathrm{ds} $$ ## Finite Fourier Transform :- Finite Fourier Sine Transform :- Let $\mathrm{f}(\mathrm{x})$ denote a function that is sectionally continuous over some finite interval $(0, \lambda)$ of the variable $x$.
The finite Fourier sine transform of $f(x)$ on the interval is defined as $$ f_{s}(s)=\int_{0}^{\lambda} f(x) \sin \frac{s \pi x}{\lambda} d x \text { where } s \text { is an integer } $$ ## Inversion formula for the sine transform $$ f(x)=\frac{2}{\lambda} \sum_{s=1}^{\infty} f_{s}(s) \sin \frac{s \pi x}{\lambda} \text { for the interval }(0, \lambda) $$ If $(0, \pi)$ is the interval for $\mathrm{f}_{\mathrm{s}}(\mathrm{s})$, $$ f(x)=\frac{2}{\pi} \sum_{s=1}^{\infty} f_{s}(s) \sin s x $$ Finite FCT :- Let $\mathrm{f}(\mathrm{x})$ denote a function that is sectionally continuous over some finite interval $(0, \lambda)$ of the variable $x$. The finite Fourier cosine transform of $f(x)$ on the interval is defined as $$ f_{c}(s)=\int_{0}^{\lambda} f(x) \cos \frac{s \pi x}{\lambda} d x \text { where } s \text { is an integer. } $$ If $(0, \pi)$ is the interval, $$ f_{c}(s)=\int_{0}^{\pi} f(x) \cos s x d x $$ ## Inversion formula for the FCT $$ f(x)=\frac{1}{\lambda} f_{c}(0)+\frac{2}{\lambda} \sum_{s=1}^{\infty} f_{c}(s) \cos \frac{s \pi x}{\lambda} $$ where $\mathrm{f}_{\mathrm{c}}(0)=\int_{0}^{\lambda} \mathrm{f}(\mathrm{x}) \mathrm{dx}$ If $\mathrm{f}(\mathrm{x})=1$ on $(0, \pi)$, then $$ f_{s}(s)=\int_{0}^{\pi} \sin s x d x=\left(\frac{-\cos s x}{s}\right)_{0}^{\pi} $$ $$ =\frac{1}{\mathrm{~s}}[-\cos \mathrm{s} \pi+\cos 0]=\frac{1}{\mathrm{~s}}\left[1-(-1)^{\mathrm{s}}\right] $$ and $\mathrm{f}_{\mathrm{c}}(\mathrm{s})=\int_{0}^{\pi} \cos \mathrm{sx} \mathrm{dx}=\left(\frac{\sin \mathrm{sx}}{\mathrm{s}}\right)_{0}^{\pi}=0$ if $\mathrm{s}=1,2,3, \ldots$ If $s=0$, then $$ \mathrm{f}_{\mathrm{c}}(\mathrm{s})=\int_{0}^{\pi} 1 .
\mathrm{dx}=\pi $$ Find the finite FST and finite FCT of $\frac{\partial u}{\partial x}, \frac{\partial^{2} u}{\partial x^{2}}, u(x, t)$ for $0<x<\lambda, t>0$ Solution :- By definition, the finite FST of $\frac{\partial u}{\partial x}$ is $$ \begin{aligned} \mathrm{f}_{\mathrm{s}}(\mathrm{s}) & =\int_{0}^{\lambda} \frac{\partial \mathrm{u}}{\partial \mathrm{x}} \sin \frac{\mathrm{s} \pi \mathrm{x}}{\lambda} \mathrm{dx} \\ & =\left.\sin \frac{\mathrm{s} \pi \mathrm{x}}{\lambda} \mathrm{u}(\mathrm{x}, \mathrm{t})\right|_{0} ^{\lambda}-\frac{\mathrm{s} \pi}{\lambda} \int_{0}^{\lambda} \mathrm{u}(\mathrm{x}, \mathrm{t}) \cos \frac{\mathrm{s} \pi \mathrm{x}}{\lambda} \mathrm{dx} \\ \Rightarrow \quad \mathrm{F}_{\mathrm{s}}\left[\frac{\partial \mathrm{u}}{\partial \mathrm{x}}\right] & =\frac{-\mathrm{s} \pi}{\lambda} \int_{0}^{\lambda} \mathrm{u}(\mathrm{x}, \mathrm{t}) \cos \frac{\mathrm{s} \pi \mathrm{x}}{\lambda} \mathrm{dx}=\frac{-\mathrm{s} \pi}{\lambda} \mathrm{F}_{\mathrm{c}}[\mathrm{u}(\mathrm{x}, \mathrm{t})] \quad \ldots(1) \\ \mathrm{F}_{\mathrm{c}}\left[\frac{\partial \mathrm{u}}{\partial \mathrm{x}}\right] & =\int_{0}^{\lambda} \frac{\partial \mathrm{u}}{\partial \mathrm{x}} \cos \frac{\mathrm{s} \pi \mathrm{x}}{\lambda} \mathrm{dx}=\left.\cos \frac{\mathrm{s} \pi \mathrm{x}}{\lambda} \mathrm{u}(\mathrm{x}, \mathrm{t})\right|_{0} ^{\lambda}+\frac{\mathrm{s} \pi}{\lambda} \int_{0}^{\lambda} \mathrm{u}(\mathrm{x}, \mathrm{t}) \sin \frac{\mathrm{s} \pi \mathrm{x}}{\lambda} \mathrm{dx} \\ & =\frac{\mathrm{s} \pi}{\lambda} \mathrm{F}_{\mathrm{s}}[\mathrm{u}(\mathrm{x}, \mathrm{t})]-[\mathrm{u}(0, \mathrm{t})-\mathrm{u}(\lambda, \mathrm{t}) \cos \mathrm{s} \pi] \quad \ldots(2) \end{aligned} $$ To calculate the finite FST \& finite FCT of $\frac{\partial^{2} u}{\partial x^{2}}$, replace $u$ by $\frac{\partial u}{\partial x}$ in
(1) and (2), we get $$ \mathrm{F}_{\mathrm{s}}\left[\frac{\partial^{2} \mathrm{u}}{\partial \mathrm{x}^{2}}\right]=\frac{-\mathrm{s} \pi}{\lambda} \mathrm{F}_{\mathrm{c}}\left[\frac{\partial \mathrm{u}}{\partial \mathrm{x}}\right] $$ $$ \begin{aligned} & =\frac{-\mathrm{s} \pi}{\lambda}\left[\frac{\mathrm{s} \pi}{\lambda} \mathrm{F}_{\mathrm{s}}(\mathrm{u})\right]+\frac{\mathrm{s} \pi}{\lambda}\{\mathrm{u}(0, \mathrm{t})-\mathrm{u}(\lambda, \mathrm{t}) \cos \mathrm{s} \pi\} \\ & =\frac{-\mathrm{s}^{2} \pi^{2}}{\lambda^{2}} \mathrm{~F}_{\mathrm{s}}[\mathrm{u}]+\frac{\mathrm{s} \pi}{\lambda}[\mathrm{u}(0, \mathrm{t})-\mathrm{u}(\lambda, \mathrm{t}) \cos \mathrm{s} \pi] \end{aligned} $$ Similarly $\mathrm{F}_{\mathrm{c}}\left[\frac{\partial^{2} \mathrm{u}}{\partial \mathrm{x}^{2}}\right]=\frac{\mathrm{s} \pi}{\lambda} \mathrm{F}_{\mathrm{s}}\left[\frac{\partial \mathrm{u}}{\partial \mathrm{x}}\right]-\left[\mathrm{u}_{\mathrm{x}}(0, \mathrm{t})-\mathrm{u}_{\mathrm{x}}(\lambda, \mathrm{t}) \cos \mathrm{s} \pi\right]$ $$ \begin{aligned} & =\frac{\mathrm{s} \pi}{\lambda}\left(\frac{-\mathrm{s} \pi}{\lambda}\right) \mathrm{F}_{\mathrm{c}}[\mathrm{u}(\mathrm{x}, \mathrm{t})]-\left\{\mathrm{u}_{\mathrm{x}}(0, \mathrm{t})-\mathrm{u}_{\mathrm{x}}(\lambda, \mathrm{t}) \cos \mathrm{s} \pi\right\} \\ & =\frac{-\mathrm{s}^{2} \pi^{2}}{\lambda^{2}} \mathrm{~F}_{\mathrm{c}}[\mathrm{u}]-\left\{\mathrm{u}_{\mathrm{x}}(0, \mathrm{t})-\mathrm{u}_{\mathrm{x}}(\lambda, \mathrm{t}) \cos \mathrm{s} \pi\right\} \end{aligned} $$ Example:- Use the finite Fourier transform to solve $$ \begin{aligned} & \frac{\partial \mathrm{u}}{\partial \mathrm{t}}=\frac{\partial^{2} \mathrm{u}}{\partial \mathrm{x}^{2}} \quad \ldots(1) \\ & \mathrm{u}(0, \mathrm{t})=0, \mathrm{u}(4, \mathrm{t})=0 \end{aligned} $$ $\mathrm{u}(\mathrm{x}, 0)=2 \mathrm{x}$ when $0<\mathrm{x}<4, \mathrm{t}>0$. Solution :- $(\mathrm{u})_{\mathrm{x}=0}$ is given, so we apply the FST. Here $\lambda=4$.
Taking the finite FST of (1), we get
$$
\int_{0}^{4} \frac{\partial u}{\partial t} \sin \left(\frac{s \pi x}{4}\right) d x=\int_{0}^{4} \frac{\partial^{2} u}{\partial x^{2}} \sin \left(\frac{s \pi x}{4}\right) d x
$$
Writing $\bar{u}_{s}=F_{s}[u(x, t)]$, we get
$$
\frac{d}{d t} \bar{u}_{s}=-\frac{s^{2} \pi^{2}}{16} \bar{u}_{s}+\frac{s \pi}{4}(0-0)
$$
[Using equation (3) of the previous article and the given boundary conditions]
$$
\begin{aligned}
& \Rightarrow \quad \frac{d}{d t} \bar{u}_{s}+\frac{s^{2} \pi^{2}}{16} \bar{u}_{s}=0 \\
& \Rightarrow \quad \bar{u}_{s}(s, t)=A e^{\frac{-s^{2} \pi^{2} t}{16}} \quad \ldots(*)
\end{aligned}
$$
To find $A$, we take the finite FST of $u(x, 0)=2 x$:
$$
\begin{aligned}
\bar{u}_{s}(s, 0) & =\int_{0}^{4} 2 x \sin \left(\frac{s \pi x}{4}\right) d x \\
& =2\left[x\left(-\cos \frac{s \pi x}{4}\right) \cdot \frac{4}{s \pi}\right]_{0}^{4}+2 \cdot \frac{4}{s \pi} \int_{0}^{4} \cos \frac{s \pi x}{4}\, d x \\
& =2(-1)^{s+1} \frac{16}{s \pi}+\frac{8}{s \pi}\left[\sin \left(\frac{s \pi x}{4}\right) \cdot \frac{4}{s \pi}\right]_{0}^{4} \\
& =2(-1)^{s+1} \cdot \frac{16}{s \pi}+\frac{8}{s \pi}(0) \\
& =(-1)^{s+1} \frac{32}{s \pi}=-\frac{32}{s \pi} \cos s \pi
\end{aligned}
$$
From $(*)$, $\bar{u}_{s}(s, 0)=A$
$$
\therefore \quad A=-\frac{32}{s \pi} \cos s \pi
$$
Hence
$\bar{u}_{s}(s, t)=-\frac{32}{s \pi} \cos s \pi\, e^{\frac{-s^{2} \pi^{2}}{16} t}$

Taking the inverse finite Fourier sine transform, we get
$$
\begin{aligned}
u(x, t) & =\frac{2}{4} \sum_{s=1}^{\infty}\left(-\frac{32}{s \pi} \cos s \pi\right) e^{\frac{-s^{2} \pi^{2}}{16} t} \sin \left(\frac{s \pi x}{4}\right) \\
\therefore \quad u(x, t) & =-\frac{16}{\pi} \sum_{s=1}^{\infty} \frac{\cos s \pi}{s} \sin \left(\frac{s \pi x}{4}\right) e^{\frac{-s^{2} \pi^{2} t}{16}}
\end{aligned}
$$

Example :- Solve the equation
$$
\frac{\partial u}{\partial t}=2 \frac{\partial^{2} u}{\partial x^{2}} \quad \ldots(1)
$$
subject to the conditions
(i) $u(0, t)=0$
(ii) $u(x, 0)=e^{-x}$
(iii) $u(x, t)$ is bounded, where $x>0$, $t>0$.

Solution :- Since $(u)_{x=0}$ is given, taking the FST of (1), we get
$$
\begin{aligned}
& \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial u}{\partial t} \sin s x\, d x=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} 2 \frac{\partial^{2} u}{\partial x^{2}} \sin s x\, d x \\
& \Rightarrow \quad \frac{d}{d t} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} u \sin s x\, d x=2 \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial^{2} u}{\partial x^{2}} \sin s x\, d x \\
& \Rightarrow \quad \frac{\mathrm{d}}{\mathrm{dt}} \overline{\mathrm{u}}_{\mathrm{s}}=2 \sqrt{\frac{2}{\pi}}\left[\left(\sin \mathrm{sx} \frac{\partial \mathrm{u}}{\partial
\mathrm{x}}\right)_{0}^{\infty}-s \int_{0}^{\infty} \cos s x \frac{\partial u}{\partial x}\, d x\right] \\
& =2 \sqrt{\frac{2}{\pi}}(0)-2 \sqrt{\frac{2}{\pi}}\, s \int_{0}^{\infty} \cos s x \frac{\partial u}{\partial x}\, d x \quad\left[\text { assuming } \frac{\partial u}{\partial x} \rightarrow 0 \text { as } x \rightarrow \infty\right] \\
& \Rightarrow \quad \frac{d}{d t} \bar{u}_{s}=2 \sqrt{\frac{2}{\pi}}\left[-s(u(x, t) \cos s x)_{0}^{\infty}-s^{2} \int_{0}^{\infty} u(x, t) \sin s x\, d x\right] \\
& =2 \sqrt{\frac{2}{\pi}}\left[-s^{2} \int_{0}^{\infty} u(x, t) \sin s x\, d x\right] \quad[\text { assuming } u(x, t) \rightarrow 0 \text { as } x \rightarrow \infty] \\
& \Rightarrow \quad \frac{d}{d t} \bar{u}_{s}=-2 s^{2} \bar{u}_{s} \\
& \Rightarrow \quad \frac{d}{d t} \bar{u}_{s}+2 s^{2} \bar{u}_{s}=0
\end{aligned}
$$
The A.E. is $D+2 s^{2}=0$, so the solution is
$$
\bar{u}_{s}(s, t)=A e^{-2 s^{2} t} \quad \ldots(3)
$$
To find $A$, we take the FST of condition (ii), $u(x, 0)=e^{-x}$:
$$
\begin{aligned}
\bar{u}_{s}(s, 0) & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} e^{-x} \sin s x\, d x \\
& =\sqrt{\frac{2}{\pi}} \frac{s}{1+s^{2}} \quad \ldots(4)
\end{aligned}
$$
$$
\left[\because \text { using } \int e^{a x} \sin b x\, d x=\frac{e^{a x}}{a^{2}+b^{2}}(a \sin b x-b \cos b x) \text {, taking } a=-1,\ b=s\right.
$$
and then applying the limits $0$ and $\infty$, we get $\left.\sqrt{\frac{2}{\pi}} \frac{s}{1+s^{2}}\right]$

Putting $t=0$ in (3), we get
$$
\bar{u}_{s}(s, 0)=A \quad \ldots(5)
$$
From (4) and (5), we get
$$
A=\sqrt{\frac{2}{\pi}}\left(\frac{s}{1+s^{2}}\right)
$$
Putting the value of $A$ in (3), we get
$$
\bar{u}_{s}(s, t)=\sqrt{\frac{2}{\pi}}\left(\frac{s}{1+s^{2}}\right) e^{-2 s^{2} t}
$$
Taking the inverse FST, we get
$$
\begin{aligned}
u(x, t) & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \sqrt{\frac{2}{\pi}}\left(\frac{s}{1+s^{2}}\right) e^{-2 s^{2} t} \sin s x\, d s \\
& =\frac{2}{\pi} \int_{0}^{\infty} \frac{s}{1+s^{2}} e^{-2 s^{2} t} \sin s x\, d s, \text { which is the required solution. }
\end{aligned}
$$
Example :- Solve the equation
$$
\frac{\partial u}{\partial t}=\frac{\partial^{2} u}{\partial x^{2}} \quad \ldots(1)
$$
subject to the conditions
(i) $u_{x}(0, t)=0$
(ii) $u(x, 0)= \begin{cases}x, & 0<x \leq 1 \\ 0, & x>1\end{cases}$
(iii) $u(x, t)$ is bounded.

Solution :- Since $\left(u_{x}\right)_{x=0}$ is given, taking the FCT of (1), we get
$$
\begin{aligned}
& \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial u}{\partial t} \cos s x\, d x=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial^{2} u}{\partial x^{2}} \cos s x\, d x \\
& \Rightarrow \quad \frac{d}{d t} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} u \cos s x\, d x=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial^{2} u}{\partial x^{2}} \cos s x\, d x \\
& \Rightarrow \quad \frac{d}{d t} \bar{u}_{c}=\sqrt{\frac{2}{\pi}}\left[\left(\frac{\partial u}{\partial x} \cos s x\right)_{0}^{\infty}+s
\int_{0}^{\infty} \sin s x \frac{\partial u}{\partial x}\, d x\right]
\end{aligned}
$$
$\left[\because u \rightarrow 0\right.$ as $x \rightarrow \infty \Rightarrow u_{x} \rightarrow 0$ as $x \rightarrow \infty$, and also $\left.u_{x}(0, t)=0\right]$
$$
\frac{d}{d t} \bar{u}_{c}=\sqrt{\frac{2}{\pi}}\left[(0-0)+s[\sin s x\, u(x, t)]_{0}^{\infty}-s^{2} \int_{0}^{\infty} u(x, t) \cos s x\, d x\right]
$$
$$
\Rightarrow \quad \frac{d}{d t} \bar{u}_{c}=-s^{2} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} u(x, t) \cos s x\, d x
$$
$\Rightarrow \quad \frac{d}{d t} \bar{u}_{c}=-s^{2} \bar{u}_{c}$
$\Rightarrow \quad \frac{d}{d t} \bar{u}_{c}+s^{2} \bar{u}_{c}=0$
The A.E.
is $D+s^{2}=0$

$\therefore$ the solution is $\bar{u}_{c}(s, t)=A e^{-s^{2} t} \quad \ldots(3)$

Now to find $A$, taking the FCT of condition (ii):
$$
\begin{aligned}
\bar{u}_{c}(s, 0) & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} u(x, 0) \cos s x\, d x \\
& =\sqrt{\frac{2}{\pi}} \int_{0}^{1} x \cos s x\, d x \quad[\because u(x, 0)=0 \text { for } x>1] \\
& =\sqrt{\frac{2}{\pi}}\left[\left[x \frac{\sin s x}{s}\right]_{0}^{1}-\int_{0}^{1} \frac{\sin s x}{s}\, d x\right] \\
& =\sqrt{\frac{2}{\pi}}\left\{\frac{\sin s}{s}+\left[\frac{\cos s x}{s^{2}}\right]_{0}^{1}\right\} \\
& =\sqrt{\frac{2}{\pi}}\left[\frac{\sin s}{s}+\frac{\cos s-1}{s^{2}}\right]
\end{aligned}
$$
Putting $t=0$ in (3), we get
$$
\bar{u}_{c}(s, 0)=A \Rightarrow A=\sqrt{\frac{2}{\pi}}\left[\frac{\sin s}{s}+\frac{\cos s-1}{s^{2}}\right] \quad \ldots(4)
$$
From (3) and (4), we get
$$
\bar{u}_{c}(s, t)=\sqrt{\frac{2}{\pi}}\left[\frac{\sin s}{s}+\frac{\cos s-1}{s^{2}}\right] e^{-s^{2} t}
$$
Taking the inverse FCT, we get
$$
\begin{aligned}
u(x, t) & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \bar{u}_{c}(s, t) \cos s x\, d s \\
\Rightarrow \quad u(x, t) & =\frac{2}{\pi} \int_{0}^{\infty}\left(\frac{\sin s}{s}+\frac{\cos s-1}{s^{2}}\right) e^{-s^{2} t} \cos s x\, d s
\end{aligned}
$$
which is required solution.
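The example just completed can be cross-checked numerically. The sketch below (Python with NumPy and SciPy assumed available; the function names are ours) evaluates the inverse-FCT integral and compares it with an independent computation via the half-line heat kernel with an even extension of the initial data, which solves the same insulated-end problem.

```python
import numpy as np
from scipy.integrate import quad

# Inverse-FCT solution of u_t = u_xx on x > 0 with u_x(0, t) = 0 and
# u(x, 0) = x on (0, 1], 0 for x > 1 (the example above):
# u(x, t) = (2/pi) Int_0^inf (sin s/s + (cos s - 1)/s^2) e^{-s^2 t} cos(sx) ds
def u_fct(x, t):
    def integrand(s):
        if s < 1e-4:                        # Taylor series to avoid cancellation
            amp = 0.5 - s**2 / 8.0
        else:
            amp = np.sin(s) / s + (np.cos(s) - 1.0) / s**2
        return amp * np.exp(-s**2 * t) * np.cos(s * x)
    val, _ = quad(integrand, 0.0, np.inf)
    return 2.0 / np.pi * val

# Independent check: the half-line heat kernel with an even extension of the
# initial data also solves the insulated-end problem.
def u_kernel(x, t):
    def integrand(xi):
        g = np.exp(-(x - xi)**2 / (4*t)) + np.exp(-(x + xi)**2 / (4*t))
        return g / np.sqrt(4.0 * np.pi * t) * xi    # f(xi) = xi on (0, 1)
    val, _ = quad(integrand, 0.0, 1.0)
    return val

diff = abs(u_fct(0.5, 0.2) - u_kernel(0.5, 0.2))
print(diff)
```

The two values should agree up to quadrature error, which supports the transform solution.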
Example :- Solve the equation
$$
\frac{\partial u}{\partial t}=\frac{\partial^{2} u}{\partial x^{2}}, \quad x>0,\ t>0 \quad \ldots(1)
$$
subject to the conditions
(i) $u(0, t)=0$
(ii) $u(x, 0)= \begin{cases}1, & 0<x<1 \\ 0, & x \geq 1\end{cases}$
(iii) $u(x, t)$ is bounded.

Solution :- Since $(u)_{x=0}$ is given, taking the FST of (1), we get
$$
\begin{aligned}
& \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial u}{\partial t} \sin s x\, d x=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \frac{\partial^{2} u}{\partial x^{2}} \sin s x\, d x \\
\Rightarrow \quad & \frac{d}{d t} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} u \sin s x\, d x=\sqrt{\frac{2}{\pi}}\left[\sin s x \frac{\partial u}{\partial x}\right]_{0}^{\infty}-\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} s \cos s x \frac{\partial u}{\partial x}\, d x \\
\Rightarrow \quad & \frac{d}{d t} \bar{u}_{s}=(0)-\sqrt{\frac{2}{\pi}}\, s \int_{0}^{\infty} \frac{\partial u}{\partial x} \cos s x\, d x \quad\left[\text { assuming } \frac{\partial u}{\partial x} \rightarrow 0 \text { as } x \rightarrow \infty\right]
\end{aligned}
$$
$$
=-s \sqrt{\frac{2}{\pi}}[\cos s x \cdot
u(x, t)]_{0}^{\infty}+s \sqrt{\frac{2}{\pi}} \int_{0}^{\infty}(-s \sin s x)\, u(x, t)\, d x
$$
$$
\Rightarrow \quad \frac{d}{d t} \bar{u}_{s}=s \sqrt{\frac{2}{\pi}}\, u(0, t)-s^{2} \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} u \sin s x\, d x=0-s^{2} \bar{u}_{s} \quad[\because u(0, t)=0 \text { given }]
$$
$\Rightarrow \quad \frac{d}{d t} \bar{u}_{s}+s^{2} \bar{u}_{s}=0$
The A.E. is $D+s^{2}=0$
$\therefore$ the solution is $\bar{u}_{s}(s, t)=A e^{-s^{2} t} \quad \ldots(3)$
To find $A$, we take the FST of condition (ii):
$$
\begin{aligned}
\bar{u}_{s}(s, 0) & =\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} u(x, 0) \sin s x\, d x \\
& =\sqrt{\frac{2}{\pi}} \int_{0}^{1}(1) \sin s x\, d x \quad[\because u(x, 0)=0 \text { for } x \geq 1] \\
& =\sqrt{\frac{2}{\pi}}\left[\frac{-\cos s x}{s}\right]_{0}^{1} \\
\Rightarrow \quad \bar{u}_{s}(s, 0) & =\sqrt{\frac{2}{\pi}}\left[-\frac{\cos s}{s}+\frac{1}{s}\right]=\sqrt{\frac{2}{\pi}}\left(\frac{1-\cos s}{s}\right)
\end{aligned}
$$
From (3), we have $\bar{u}_{s}(s, 0)=A$
$$
\begin{aligned}
& A=\sqrt{\frac{2}{\pi}}\left(\frac{1-\cos s}{s}\right) \\
\therefore \quad & (3) \Rightarrow \bar{u}_{s}(s, t)=\sqrt{\frac{2}{\pi}}\left(\frac{1-\cos s}{s}\right) e^{-s^{2} t}
\end{aligned}
$$
Taking the inverse FST, we get
$$
\begin{aligned}
& u(x, t)=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \sqrt{\frac{2}{\pi}}\left(\frac{1-\cos s}{s}\right) e^{-s^{2} t} \sin s x\, d s \\
& \Rightarrow \quad u(x, t)=\frac{2}{\pi}
\int_{0}^{\infty}\left(\frac{1-\cos s}{s}\right) \sin s x\, e^{-s^{2} t}\, d s
\end{aligned}
$$
which is required solution.

## LESSON 3

## Transformation of coordinates

Let the rectangular coordinates $(x, y, z)$ of any point be expressed as functions of $\left(u_{1}, u_{2}, u_{3}\right)$ so that
$$
\left.\begin{array}{l}
x=x\left(u_{1}, u_{2}, u_{3}\right) \\
y=y\left(u_{1}, u_{2}, u_{3}\right) \\
z=z\left(u_{1}, u_{2}, u_{3}\right)
\end{array}\right\} \quad \ldots(1)
$$
Suppose (1) can be solved for $u_{1}, u_{2}, u_{3}$ in terms of $x, y, z$, i.e.
$$
\left.\begin{array}{l}
u_{1}=u_{1}(x, y, z) \\
u_{2}=u_{2}(x, y, z) \\
u_{3}=u_{3}(x, y, z)
\end{array}\right\} \quad \ldots(2)
$$
Here the correspondence between $(x, y, z)$ and $\left(u_{1}, u_{2}, u_{3}\right)$ is unique, i.e. if to each point $P(x, y, z)$ of some region $R$ there corresponds one \& only one triad $\left(u_{1}, u_{2}, u_{3}\right)$, then $\left(u_{1}, u_{2}, u_{3}\right)$ are said to be curvilinear coordinates of the point $P$. The sets of equations (1) \& (2) define a transformation of co-ordinates.

## Co-ordinate surfaces and curves:-

The surfaces
$$
u_{1}=c_{1}, \quad u_{2}=c_{2}, \quad u_{3}=c_{3}
$$
where $c_{1}, c_{2}, c_{3}$ are constants, i.e.
surfaces whose equations are
$$
\begin{aligned}
& u_{1}=u_{1}(x, y, z)=c_{1} \\
& u_{2}=u_{2}(x, y, z)=c_{2} \\
& u_{3}=u_{3}(x, y, z)=c_{3}
\end{aligned}
$$
are called co-ordinate surfaces, and each pair of these surfaces intersects in curves called co-ordinate curves or lines. So
$u_{1}=c_{1}$ and $u_{2}=c_{2}$ give the $u_{3}$-curve.
Similarly $u_{2}=c_{2}$ and $u_{3}=c_{3}$ give the $u_{1}$-curve,
and $u_{1}=c_{1}$ and $u_{3}=c_{3}$ give the $u_{2}$-curve.
So if $u_{2}=c_{2}$ and $u_{3}=c_{3}$, and $u_{1}$ is the only variable, then the point $P$ describes a curve known as the $u_{1}$-curve, which is a function of $u_{1}$.

## Orthogonal Curvilinear Co-ordinates

If the three co-ordinate surfaces intersect at right angles (orthogonally), the curvilinear co-ordinate system is called orthogonal, i.e. the co-ordinates $\left(u_{1}, u_{2}, u_{3}\right)$ are said to be orthogonal curvilinear co-ordinates. The $u_{1}, u_{2}, u_{3}$ co-ordinate curves of a curvilinear system are then analogous to the $x, y, z$ coordinate axes of the rectangular system.
## Unit vectors in curvilinear system

Let
$$
\vec{r}=x \hat{i}+y \hat{j}+z \hat{k}
$$
be the position vector of $P$; then (1) can be written as
$$
\vec{r}=\vec{r}\left(u_{1}, u_{2}, u_{3}\right)
$$
A tangent vector to the $u_{1}$-curve at $P$ (for which $u_{2}$ \& $u_{3}$ are constant) is $\frac{\partial \vec{r}}{\partial u_{1}}$. Then a unit tangent vector in this direction is
$$
\hat{e}_{1}=\frac{\partial \vec{r} / \partial u_{1}}{\left|\frac{\partial \vec{r}}{\partial u_{1}}\right|}
$$
or $\quad \frac{\partial \vec{r}}{\partial u_{1}}=\hat{e}_{1}\left|\frac{\partial \vec{r}}{\partial u_{1}}\right|$
$\Rightarrow \quad \frac{\partial \vec{r}}{\partial u_{1}}=h_{1} \hat{e}_{1}$ where $h_{1}=\left|\frac{\partial \vec{r}}{\partial u_{1}}\right|$
Similarly, if $\hat{e}_{2}$ and $\hat{e}_{3}$ are unit tangent vectors to the $u_{2}$- \& $u_{3}$-curves at $P$ respectively, then
$$
\frac{\partial \vec{r}}{\partial u_{2}}=h_{2} \hat{e}_{2}, \quad \frac{\partial \vec{r}}{\partial u_{3}}=h_{3} \hat{e}_{3}
$$
where $h_{2}=\left|\frac{\partial \vec{r}}{\partial u_{2}}\right|$, $h_{3}=\left|\frac{\partial \vec{r}}{\partial u_{3}}\right|$. The quantities $h_{1}, h_{2}, h_{3}$ are called scale factors.
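The scale factors just defined can be computed symbolically. A minimal sketch, assuming sympy is available and using cylindrical coordinates $(\rho, \varphi, z)$ as the illustration:

```python
import sympy as sp

# Cylindrical coordinates as an illustration:
# x = u1 cos u2, y = u1 sin u2, z = u3
u1, u2, u3 = sp.symbols('u1 u2 u3', positive=True)
r = sp.Matrix([u1 * sp.cos(u2), u1 * sp.sin(u2), u3])

# Scale factors h_p = |dr/du_p|; expected [1, u1, 1] for cylindrical
h = [sp.simplify(sp.sqrt(r.diff(u).dot(r.diff(u)))) for u in (u1, u2, u3)]

# Orthogonality: dr/du_p . dr/du_q = 0 for p != q; expected [0, 0, 0]
dots = [sp.simplify(r.diff(a).dot(r.diff(b)))
        for a, b in ((u1, u2), (u2, u3), (u3, u1))]

print(h, dots)
```

The same recipe works for any coordinate map: only the three components of `r` change.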
Also, the conditions for the orthogonality of the co-ordinate surfaces are
$$
\frac{\partial \vec{r}}{\partial u_{1}} \cdot \frac{\partial \vec{r}}{\partial u_{2}}=0, \quad \frac{\partial \vec{r}}{\partial u_{2}} \cdot \frac{\partial \vec{r}}{\partial u_{3}}=0, \quad \frac{\partial \vec{r}}{\partial u_{3}} \cdot \frac{\partial \vec{r}}{\partial u_{1}}=0
$$
Also, the vectors $\nabla u_{1}, \nabla u_{2}, \nabla u_{3}$ at $P$ are directed along the normals to the co-ordinate surfaces
$$
u_{1}=c_{1}, \quad u_{2}=c_{2}, \quad u_{3}=c_{3} \text { respectively. }
$$
So unit vectors in these directions are given by
$$
\hat{E}_{1}=\frac{\nabla u_{1}}{\left|\nabla u_{1}\right|}, \quad \hat{E}_{2}=\frac{\nabla u_{2}}{\left|\nabla u_{2}\right|}, \quad \hat{E}_{3}=\frac{\nabla u_{3}}{\left|\nabla u_{3}\right|}
$$
Thus, at each point $P$ of a curvilinear system there are, in general, two sets of unit vectors: $\hat{e}_{1}, \hat{e}_{2}, \hat{e}_{3}$, tangent to the co-ordinate curves, and $\hat{E}_{1}, \hat{E}_{2}, \hat{E}_{3}$, normal to the co-ordinate surfaces. The sets become identical if \& only if the curvilinear co-ordinate system is orthogonal.
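For an orthogonal system one expects $\left|\nabla u_{p}\right|=1 / h_{p}$, so the normal unit vectors $\hat{E}_{p}$ coincide with the tangent unit vectors $\hat{e}_{p}$. A sketch checking this for a concrete system (sympy assumed available; the cylindrical inverse map below is our illustration):

```python
import sympy as sp

# Inverse cylindrical map: u1 = rho = sqrt(x^2+y^2), u2 = phi = atan2(y, x), u3 = z
x, y, z = sp.symbols('x y z', positive=True)
u = [sp.sqrt(x**2 + y**2), sp.atan2(y, x), z]

# Cartesian gradients of each u_p and their magnitudes
grads = [sp.Matrix([sp.diff(f, v) for v in (x, y, z)]) for f in u]
mags = [sp.simplify(sp.sqrt(g.dot(g))) for g in grads]

# Cylindrical scale factors are h = [1, rho, 1], so we expect |grad u_p| = 1/h_p
rho = sp.sqrt(x**2 + y**2)
checks = [sp.simplify(m - e) for m, e in zip(mags, [1, 1 / rho, 1])]
print(checks)
```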
## Arc length and Volume element

From $\vec{r}=\vec{r}\left(u_{1}, u_{2}, u_{3}\right)$,
$$
\begin{aligned}
d \vec{r} & =\frac{\partial \vec{r}}{\partial u_{1}} d u_{1}+\frac{\partial \vec{r}}{\partial u_{2}} d u_{2}+\frac{\partial \vec{r}}{\partial u_{3}} d u_{3} \\
& =h_{1} d u_{1} \hat{e}_{1}+h_{2} d u_{2} \hat{e}_{2}+h_{3} d u_{3} \hat{e}_{3} \quad \ldots(*)
\end{aligned}
$$
Then the differential of arc length $ds$ is determined from
$$
(d s)^{2}=d \vec{r} \cdot d \vec{r}
$$
For an orthogonal system,
$$
\hat{e}_{1} \cdot \hat{e}_{2}=\hat{e}_{2} \cdot \hat{e}_{3}=\hat{e}_{3} \cdot \hat{e}_{1}=0
$$
or
$$
\frac{\partial \vec{r}}{\partial u_{1}} \cdot \frac{\partial \vec{r}}{\partial u_{2}}=\frac{\partial \vec{r}}{\partial u_{2}} \cdot \frac{\partial \vec{r}}{\partial u_{3}}=\frac{\partial \vec{r}}{\partial u_{3}} \cdot \frac{\partial \vec{r}}{\partial u_{1}}=0 \quad \ldots(2)
$$
Using $(*)$,
$$
\begin{aligned}
(d s)^{2}= & \left(\frac{\partial \vec{r}}{\partial u_{1}} \cdot \frac{\partial \vec{r}}{\partial u_{1}}\right)\left(d u_{1}\right)^{2}+\left(\frac{\partial \vec{r}}{\partial u_{2}} \cdot \frac{\partial \vec{r}}{\partial u_{2}}\right)\left(d u_{2}\right)^{2}+\left(\frac{\partial \vec{r}}{\partial u_{3}} \cdot \frac{\partial \vec{r}}{\partial u_{3}}\right)\left(d u_{3}\right)^{2} \\
& +2 \frac{\partial \vec{r}}{\partial u_{1}} \cdot \frac{\partial \vec{r}}{\partial u_{2}} d u_{1} d u_{2}+2 \frac{\partial
\mathrm{r}}{\partial \mathrm{u}_{2}} \cdot \frac{\partial \mathrm{r}}{\partial \mathrm{u}_{3}}\, d u_{2} d u_{3}+2 \frac{\partial \vec{r}}{\partial u_{3}} \cdot \frac{\partial \vec{r}}{\partial u_{1}}\, d u_{1} d u_{3} \\
\Rightarrow \quad(d s)^{2}= & \left(\frac{\partial \vec{r}}{\partial u_{1}} \cdot \frac{\partial \vec{r}}{\partial u_{1}}\right)\left(d u_{1}\right)^{2}+\left(\frac{\partial \vec{r}}{\partial u_{2}} \cdot \frac{\partial \vec{r}}{\partial u_{2}}\right)\left(d u_{2}\right)^{2}+\left(\frac{\partial \vec{r}}{\partial u_{3}} \cdot \frac{\partial \vec{r}}{\partial u_{3}}\right)\left(d u_{3}\right)^{2} \\
= & \left(\frac{\partial \vec{r}}{\partial u_{1}}\right)^{2}\left(d u_{1}\right)^{2}+\left(\frac{\partial \vec{r}}{\partial u_{2}}\right)^{2}\left(d u_{2}\right)^{2}+\left(\frac{\partial \vec{r}}{\partial u_{3}}\right)^{2}\left(d u_{3}\right)^{2} \quad[\text { using }(2)]
\end{aligned}
$$
$\Rightarrow \quad(d s)^{2}=h_{1}^{2}\left(d u_{1}\right)^{2}+h_{2}^{2}\left(d u_{2}\right)^{2}+h_{3}^{2}\left(d u_{3}\right)^{2}$
where
$$
\begin{aligned}
h_{1} & =\left|\frac{\partial \vec{r}}{\partial u_{1}}\right|=\frac{1}{\left|\nabla u_{1}\right|} \\
h_{2} & =\left|\frac{\partial \vec{r}}{\partial u_{2}}\right|=\frac{1}{\left|\nabla u_{2}\right|} \\
h_{3} & =\left|\frac{\partial \vec{r}}{\partial u_{3}}\right|=\frac{1}{\left|\nabla u_{3}\right|}
\end{aligned}
$$
Now if $\vec{r}=\vec{r}(x, y, z)$, then
$$
\frac{\partial u_{1}}{\partial \vec{r}}=\frac{\partial u_{1}}{\partial x} \hat{i}+\frac{\partial u_{1}}{\partial y} \hat{j}+\frac{\partial u_{1}}{\partial z} \hat{k}=\nabla u_{1} \Rightarrow\left|\frac{\partial \vec{r}}{\partial u_{1}}\right|=\frac{1}{\left|\nabla u_{1}\right|}
$$

## Length along $u_{1}$-curve

For this $u_{2}$ \& $u_{3}$ are constant.
$u_{2}=c_{2}, u_{3}=c_{3}$
$\Rightarrow \quad d u_{2}=0, d u_{3}=0$
If $d s_{1} \rightarrow$ the differential of length along the $u_{1}$-curve, then
$\left(d s_{1}\right)^{2}=h_{1}^{2}\left(d u_{1}\right)^{2}$
$\Rightarrow \quad d s_{1}=h_{1} d u_{1}$
Similarly, the length along the $u_{2}$-curve is
$$
d s_{2}=h_{2} d u_{2}
$$
and the length along the $u_{3}$-curve is
$$
d s_{3}=h_{3} d u_{3}
$$

## The volume element

For an orthogonal curvilinear co-ordinate system, it is given by
$$
d V=\left|\left(h_{1} d u_{1} \hat{e}_{1}\right) \cdot\left(h_{2} d u_{2} \hat{e}_{2}\right) \times\left(h_{3} d u_{3} \hat{e}_{3}\right)\right| \quad[\because \text { volume }=|\hat{a} \cdot(\hat{b} \times \hat{c})|]
$$
$\Rightarrow \quad d V=h_{1} h_{2} h_{3}\, d u_{1} d u_{2} d u_{3}$

Differential operators in terms of orthogonal curvilinear coordinates $\left(\mathbf{u}_{1}, \mathbf{u}_{2}, \mathbf{u}_{3}\right)$

## GRADIENT

Let $\phi \rightarrow$ a scalar point function and
$$
\vec{f}=f_{1} \hat{e}_{1}+f_{2} \hat{e}_{2}+f_{3} \hat{e}_{3}
$$
i.e.
$\quad \phi=\phi\left(u_{1}, u_{2}, u_{3}\right)$
$$
=\phi\left[u_{1}(x, y, z), u_{2}(x, y, z), u_{3}(x, y, z)\right]
$$
$$
\Rightarrow \quad \frac{\partial \phi}{\partial x}=\frac{\partial \phi}{\partial u_{1}} \frac{\partial u_{1}}{\partial x}+\frac{\partial \phi}{\partial u_{2}} \frac{\partial u_{2}}{\partial x}+\frac{\partial \phi}{\partial u_{3}} \frac{\partial u_{3}}{\partial x} \quad \ldots(1)
$$
$$
\frac{\partial \phi}{\partial y}=\frac{\partial \phi}{\partial u_{1}} \frac{\partial u_{1}}{\partial y}+\frac{\partial \phi}{\partial u_{2}} \frac{\partial u_{2}}{\partial y}+\frac{\partial \phi}{\partial u_{3}} \frac{\partial u_{3}}{\partial y} \quad \ldots(2)
$$
$$
\frac{\partial \phi}{\partial z}=\frac{\partial \phi}{\partial u_{1}} \frac{\partial u_{1}}{\partial z}+\frac{\partial \phi}{\partial u_{2}} \frac{\partial u_{2}}{\partial z}+\frac{\partial \phi}{\partial u_{3}} \frac{\partial u_{3}}{\partial z} \quad \ldots(3)
$$
Operating $\hat{i}(1)+\hat{j}(2)+\hat{k}(3)$, we get
$$
\nabla \phi=\operatorname{grad} \phi=\frac{\partial \phi}{\partial u_{1}} \nabla u_{1}+\frac{\partial \phi}{\partial u_{2}} \nabla u_{2}+\frac{\partial \phi}{\partial u_{3}} \nabla u_{3}
$$
But $\hat{e}_{1}=h_{1} \nabla u_{1}, \hat{e}_{2}=h_{2} \nabla u_{2}, \hat{e}_{3}=h_{3} \nabla u_{3}$
$\Rightarrow \quad \operatorname{grad} \phi=\nabla
\phi=\frac{\hat{e}_{1}}{h_{1}} \frac{\partial \phi}{\partial u_{1}}+\frac{\hat{e}_{2}}{h_{2}} \frac{\partial \phi}{\partial u_{2}}+\frac{\hat{e}_{3}}{h_{3}} \frac{\partial \phi}{\partial u_{3}}$

Example :- If $\left(u_{1}, u_{2}, u_{3}\right)$ are orthogonal coordinates, then prove that
(i) $\left|\nabla u_{p}\right|=h_{p}^{-1}$, $p=1,2,3$
(ii) $\hat{e}_{p}=\hat{E}_{p}$

Proof :- (i) Let $\phi=u_{1}$; then
$$
\nabla u_{1}=\frac{1}{h_{1}} \frac{\partial u_{1}}{\partial u_{1}} \hat{e}_{1}=\frac{\hat{e}_{1}}{h_{1}}
$$
$$
\Rightarrow \quad\left|\nabla u_{1}\right|=\frac{\left|\hat{e}_{1}\right|}{h_{1}}=\frac{1}{h_{1}}=h_{1}^{-1}
$$
Similarly, for $\phi=u_{2}$,
$$
\nabla u_{2}=\frac{1}{h_{2}} \frac{\partial u_{2}}{\partial u_{2}} \hat{e}_{2}=\frac{\hat{e}_{2}}{h_{2}} \Rightarrow\left|\nabla u_{2}\right|=\frac{\left|\hat{e}_{2}\right|}{h_{2}}=\frac{1}{h_{2}}=h_{2}^{-1}
$$
Similarly, if $\phi=u_{3}$, then $\left|\nabla u_{3}\right|=h_{3}^{-1}$.
(ii) By definition, $\hat{E}_{p}=\frac{\nabla u_{p}}{\left|\nabla u_{p}\right|}$
$\Rightarrow \quad \hat{E}_{p}=h_{p} \nabla u_{p}=\hat{e}_{p}$

## Results :-

I. $\operatorname{div}(\phi \vec{f})=\nabla \cdot(\phi \vec{f})=\phi \operatorname{div} \vec{f}+\vec{f} \cdot \operatorname{grad} \phi=\phi \nabla \cdot \vec{f}+\vec{f} \cdot \nabla \phi$

II.
$\operatorname{div}(\vec{f} \times \vec{g})=\nabla \cdot(\vec{f} \times \vec{g})=\operatorname{curl} \vec{f} \cdot \vec{g}-\operatorname{curl} \vec{g} \cdot \vec{f}$

III. $\operatorname{curl} \operatorname{grad} \phi=\nabla \times \nabla \phi=0$ and $\operatorname{div} \operatorname{curl} \vec{f}=\nabla \cdot \nabla \times \vec{f}=0$

IV. $\operatorname{curl}(\phi \vec{f})=\nabla \times(\phi \vec{f})=\operatorname{grad} \phi \times \vec{f}+\phi \operatorname{curl} \vec{f}$

## DIVERGENCE

Consider a vector function $\vec{f}=f_{1} \hat{e}_{1}+f_{2} \hat{e}_{2}+f_{3} \hat{e}_{3}$, where $\left(u_{1}, u_{2}, u_{3}\right)$ are orthogonal curvilinear coordinates. Since $\hat{e}_{1}=\hat{e}_{2} \times \hat{e}_{3}$, etc.,
$$
\vec{f}=f_{1}\left(\hat{e}_{2} \times \hat{e}_{3}\right)+f_{2}\left(\hat{e}_{3} \times \hat{e}_{1}\right)+f_{3}\left(\hat{e}_{1} \times \hat{e}_{2}\right)
$$
Using
$$
\hat{e}_{1}=h_{1} \nabla u_{1}, \quad \hat{e}_{2}=h_{2} \nabla u_{2}, \quad \hat{e}_{3}=h_{3} \nabla u_{3},
$$
$$
\vec{f}=f_{1} h_{2} h_{3}\left(\nabla u_{2} \times \nabla u_{3}\right)+f_{2} h_{3} h_{1}\left(\nabla u_{3} \times \nabla u_{1}\right)+f_{3} h_{1} h_{2}\left(\nabla u_{1} \times \nabla u_{2}\right)
$$
$$
\begin{aligned}
\therefore \quad \operatorname{div} \vec{f} & =\nabla \cdot \vec{f}=\nabla \cdot\left[f_{1} h_{2} h_{3}\left(\nabla u_{2} \times \nabla u_{3}\right)\right] \\
& +\nabla \cdot\left[f_{2} h_{3} h_{1}\left(\nabla u_{3} \times \nabla u_{1}\right)\right] \\
& +\nabla
\cdot\left[\mathrm{f}_{3} \mathrm{~h}_{1} \mathrm{~h}_{2}\left(\nabla \mathrm{u}_{1} \times \nabla \mathrm{u}_{2}\right)\right] \end{aligned} $$ Taking first art, $\nabla .\left[\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\left(\nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}\right)\right]=\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3} \nabla .\left(\nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}\right)+\left(\nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}\right) . \nabla\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right)$ $\Rightarrow \quad \nabla \cdot\left[\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\left(\nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}\right)\right]=\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3} \nabla \cdot\left(\nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}\right)+\left(\nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}\right)$. $\nabla\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right)$ $\Rightarrow \quad \nabla \cdot\left[\left(\nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}\right)\right]=\operatorname{curl} \nabla \mathrm{u}_{2} \cdot \nabla \mathrm{u}_{3}-\operatorname{curl} \nabla \mathrm{u}_{3} \cdot \nabla \mathrm{u}_{2}$ $=\left(\operatorname{curl} \operatorname{grad} \mathrm{u}_{2}\right) \cdot\left(\nabla \mathrm{u}_{3}\right)-\left(\operatorname{curl} \operatorname{grad} \mathrm{u}_{3}\right) \cdot \nabla \mathrm{u}_{2}$ $=0-0=0$ $\therefore \quad$ from (1), we get $\nabla \cdot\left[\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\left(\nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}\right)\right]=\left(\nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}\right) \cdot \nabla\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right)$ $$ \begin{aligned} & =\left(\nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}\right) \cdot\left[\frac{\partial}{\partial \mathrm{u}_{1}}\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right) \nabla \mathrm{u}_{1}+\frac{\partial}{\partial 
\mathrm{u}_{2}}\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right) \nabla \mathrm{u}_{2}\right. \\ & \left.+\frac{\partial}{\partial \mathrm{u}_{3}}\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right) \nabla \mathrm{u}_{3}\right] \\ & =\frac{\hat{\mathrm{e}}_{2} \times \hat{\mathrm{e}}_{3}}{\mathrm{~h}_{2} \mathrm{~h}_{3}}\left[\frac{\partial}{\partial \mathrm{u}_{1}}\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right) \frac{\hat{\mathrm{e}}_{1}}{\mathrm{~h}_{1}}+\frac{\partial}{\partial \mathrm{u}_{2}}\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right) \frac{\hat{\mathrm{e}}_{2}}{\mathrm{~h}_{2}}+\frac{\partial}{\partial \mathrm{u}_{3}}\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right) \frac{\hat{\mathrm{e}}_{3}}{\mathrm{~h}_{3}}\right] \end{aligned} $$ $$ \begin{aligned} {\left[\Theta \nabla \mathrm{u}_{1}\right.} & \left.=\frac{\hat{\mathrm{e}}_{1}}{\mathrm{~h}_{1}}, \nabla \mathrm{u}_{2}=\frac{\hat{\mathrm{e}}_{2}}{\mathrm{~h}_{2}}, \nabla \mathrm{u}_{3}=\frac{\hat{\mathrm{e}}_{3}}{\mathrm{~h}_{3}}\right] \\ & =\frac{\hat{\mathrm{e}}_{1}}{\mathrm{~h}_{2} \mathrm{~h}_{3}} \cdot\left[\frac{\partial}{\partial \mathrm{u}_{1}}\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right) \frac{\hat{\mathrm{e}}_{1}}{\mathrm{~h}_{1}}+\frac{\partial}{\partial \mathrm{u}_{2}}\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right) \frac{\hat{\mathrm{e}}_{2}}{\mathrm{~h}_{2}}+\frac{\partial}{\partial \mathrm{u}_{3}}\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right) \frac{\hat{\mathrm{e}}_{3}}{\mathrm{~h}_{3}}\right] \\ & =\frac{1}{\mathrm{~h}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}} \frac{\partial}{\partial \mathrm{u}_{1}}\left(\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\right) \end{aligned} $$ $\Rightarrow \quad \nabla \cdot\left[\mathrm{f}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}\left(\nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}\right)\right]=\frac{1}{\mathrm{~h}_{1} \mathrm{~h}_{2} \mathrm{~h}_{3}} \cdot 
\frac{\partial}{\partial u_{1}}\left(f_{1} h_{2} h_{3}\right)$

Similarly
$$
\nabla \cdot\left[f_{2} h_{3} h_{1}\left(\nabla u_{3} \times \nabla u_{1}\right)\right]=\frac{1}{h_{1} h_{2} h_{3}} \frac{\partial}{\partial u_{2}}\left(f_{2} h_{3} h_{1}\right)
$$
and
$$
\nabla \cdot\left[f_{3} h_{1} h_{2}\left(\nabla u_{1} \times \nabla u_{2}\right)\right]=\frac{1}{h_{1} h_{2} h_{3}} \frac{\partial}{\partial u_{3}}\left(f_{3} h_{1} h_{2}\right)
$$
So from $(*)$, we get
$$
\nabla \cdot \vec{f}=\frac{1}{h_{1} h_{2} h_{3}}\left[\frac{\partial}{\partial u_{1}}\left(f_{1} h_{2} h_{3}\right)+\frac{\partial}{\partial u_{2}}\left(f_{2} h_{3} h_{1}\right)+\frac{\partial}{\partial u_{3}}\left(f_{3} h_{1} h_{2}\right)\right]
$$

## CURL

Consider $\vec{f}=f_{1} \hat{e}_{1}+f_{2} \hat{e}_{2}+f_{3} \hat{e}_{3}$
$\Rightarrow \quad \vec{f}=f_{1} h_{1} \nabla u_{1}+f_{2} h_{2} \nabla u_{2}+f_{3} h_{3} \nabla u_{3} \quad\left[\because h_{1} \nabla u_{1}=\hat{e}_{1}\right]$
Then
$$
\operatorname{Curl} \vec{f}=\nabla \times \vec{f}=\nabla \times\left(h_{1} f_{1} \nabla u_{1}\right)+\nabla \times\left(h_{2} f_{2} \nabla u_{2}\right)+\nabla \times\left(h_{3} f_{3} \nabla u_{3}\right) \qquad (1)
$$
Taking the first part and using the property $\operatorname{Curl}(\phi \vec{F})=\nabla \phi \times \vec{F}+\phi \nabla \times \vec{F}$, we get
$$
\begin{aligned}
\nabla \times\left(h_{1} f_{1} \nabla u_{1}\right) &=\nabla\left(h_{1} f_{1}\right) \times \nabla u_{1}+h_{1} f_{1}\left(\nabla \times \nabla u_{1}\right) \\
&=\nabla\left(h_{1} f_{1}\right) \times \nabla u_{1} \quad\left[\because \nabla \times \nabla u_{1}=\operatorname{curl} \operatorname{grad} u_{1}=0\right] \\
&=\left[\frac{\partial}{\partial u_{1}}\left(f_{1} h_{1}\right) \nabla u_{1}+\frac{\partial}{\partial u_{2}}\left(f_{1} h_{1}\right) \nabla u_{2}+\frac{\partial}{\partial u_{3}}\left(f_{1} h_{1}\right) \nabla u_{3}\right] \times \nabla u_{1} \quad[\because \text{definition of the gradient}] \\
&=\left[\frac{\partial}{\partial u_{1}}\left(f_{1} h_{1}\right) \frac{\hat{e}_{1}}{h_{1}}+\frac{\partial}{\partial u_{2}}\left(f_{1} h_{1}\right) \frac{\hat{e}_{2}}{h_{2}}+\frac{\partial}{\partial u_{3}}\left(f_{1} h_{1}\right) \frac{\hat{e}_{3}}{h_{3}}\right] \times \frac{\hat{e}_{1}}{h_{1}} \\
&=\frac{\hat{e}_{2}}{h_{1} h_{3}} \frac{\partial}{\partial u_{3}}\left(f_{1} h_{1}\right)-\frac{\hat{e}_{3}}{h_{2} h_{1}} \frac{\partial}{\partial u_{2}}\left(f_{1} h_{1}\right)
\end{aligned}
$$
Similarly $\nabla \times\left(h_{2} f_{2} \nabla
u_{2}\right)=\frac{\hat{e}_{3}}{h_{1} h_{2}} \frac{\partial}{\partial u_{1}}\left(f_{2} h_{2}\right)-\frac{\hat{e}_{1}}{h_{2} h_{3}} \frac{\partial}{\partial u_{3}}\left(f_{2} h_{2}\right)$
and $\quad \nabla \times\left(h_{3} f_{3} \nabla u_{3}\right)=\frac{\hat{e}_{1}}{h_{2} h_{3}} \frac{\partial}{\partial u_{2}}\left(f_{3} h_{3}\right)-\frac{\hat{e}_{2}}{h_{3} h_{1}} \frac{\partial}{\partial u_{1}}\left(f_{3} h_{3}\right)$
$\therefore$ from (1),
$$
\begin{aligned}
\operatorname{Curl} \vec{f} &=\frac{\hat{e}_{1}}{h_{2} h_{3}}\left[\frac{\partial}{\partial u_{2}}\left(f_{3} h_{3}\right)-\frac{\partial}{\partial u_{3}}\left(f_{2} h_{2}\right)\right]+\frac{\hat{e}_{2}}{h_{3} h_{1}}\left[\frac{\partial}{\partial u_{3}}\left(f_{1} h_{1}\right)-\frac{\partial}{\partial u_{1}}\left(f_{3} h_{3}\right)\right]+\frac{\hat{e}_{3}}{h_{1} h_{2}}\left[\frac{\partial}{\partial u_{1}}\left(f_{2} h_{2}\right)-\frac{\partial}{\partial u_{2}}\left(f_{1} h_{1}\right)\right] \\
&=\frac{1}{h_{1} h_{2} h_{3}}\left|\begin{array}{ccc}
h_{1} \hat{e}_{1} & h_{2} \hat{e}_{2} & h_{3} \hat{e}_{3} \\
\frac{\partial}{\partial u_{1}} & \frac{\partial}{\partial u_{2}} & \frac{\partial}{\partial u_{3}} \\
h_{1} f_{1} & h_{2} f_{2} & h_{3} f_{3}
\end{array}\right|
\end{aligned}
$$

## LAPLACIAN OF SCALAR POINT FUNCTION $\left(\nabla^{2} \phi\right)$

Now
$$
\begin{aligned}
\nabla^{2} \phi &=\nabla \cdot(\nabla \phi) \\
&=\nabla \cdot\left[\frac{\partial \phi}{\partial u_{1}} \nabla u_{1}+\frac{\partial \phi}{\partial u_{2}} \nabla u_{2}+\frac{\partial \phi}{\partial u_{3}} \nabla u_{3}\right] \\
&=\nabla \cdot\left[\frac{1}{h_{1}} \frac{\partial \phi}{\partial u_{1}} \hat{e}_{1}+\frac{1}{h_{2}} \frac{\partial \phi}{\partial u_{2}} \hat{e}_{2}+\frac{1}{h_{3}} \frac{\partial \phi}{\partial u_{3}} \hat{e}_{3}\right]
\end{aligned}
$$
We know that
$$
\nabla \cdot \vec{f}=\frac{1}{h_{1} h_{2} h_{3}}\left[\frac{\partial}{\partial u_{1}}\left(f_{1} h_{2} h_{3}\right)+\frac{\partial}{\partial u_{2}}\left(f_{2} h_{3} h_{1}\right)+\frac{\partial}{\partial u_{3}}\left(f_{3} h_{1} h_{2}\right)\right]
$$
Here put $f_{1}=\frac{1}{h_{1}} \frac{\partial \phi}{\partial u_{1}}, f_{2}=\frac{1}{h_{2}} \frac{\partial \phi}{\partial u_{2}}, f_{3}=\frac{1}{h_{3}} \frac{\partial \phi}{\partial u_{3}}$. Therefore
$$
\nabla^{2} \phi=\frac{1}{h_{1} h_{2} h_{3}}\left[\frac{\partial}{\partial u_{1}}\left(\frac{h_{2} h_{3}}{h_{1}} \frac{\partial \phi}{\partial u_{1}}\right)+\frac{\partial}{\partial u_{2}}\left(\frac{h_{3} h_{1}}{h_{2}} \frac{\partial \phi}{\partial u_{2}}\right)+\frac{\partial}{\partial u_{3}}\left(\frac{h_{1} h_{2}}{h_{3}} \frac{\partial \phi}{\partial u_{3}}\right)\right]
$$

## CYLINDRICAL POLAR COORDINATES $(\mathbf{r}, \theta, \mathbf{z})$

Let $P$ be a point with Cartesian coordinates $(x,
\mathrm{y}, \mathrm{z})$, so that $OM=x$, $MN=y(=OQ)$, $PN=z$. From the figure,
$$
x=OM=ON \cos \theta \Rightarrow x=r \cos \theta \qquad (1)
$$
$$
y=MN=ON \sin \theta \Rightarrow y=r \sin \theta \qquad (2)
$$
and $z=z$.

Determine the transformation from cylindrical to rectangular coordinates :- Taking $(1)^{2}+(2)^{2}$, we get
$$
r^{2}=x^{2}+y^{2} \Rightarrow r=\sqrt{x^{2}+y^{2}}
$$
$(2) /(1) \Rightarrow \frac{y}{x}=\tan \theta \Rightarrow \theta=\tan ^{-1}\left(\frac{y}{x}\right)$, and $z=z$.

Now
$$
(d s)^{2}=(d x)^{2}+(d y)^{2}+(d z)^{2} \qquad (4)
$$
where
$$
\begin{aligned}
d x &=\cos \theta\, d r-r \sin \theta\, d \theta \\
d y &=\sin \theta\, d r+r \cos \theta\, d \theta \\
d z &=d z
\end{aligned}
$$
$\therefore$ from (4),
$$
\begin{aligned}
(d s)^{2} &=(d r \cos \theta-r \sin \theta\, d \theta)^{2}+(\sin \theta\, d r+r \cos \theta\, d \theta)^{2}+(d z)^{2} \\
&=(d r)^{2}\left(\sin ^{2} \theta+\cos ^{2} \theta\right)+(r\, d \theta)^{2}\left(\sin ^{2} \theta+\cos ^{2} \theta\right)+(d z)^{2} \\
\Rightarrow \quad (d s)^{2} &=(d r)^{2}+(r\, d \theta)^{2}+(d z)^{2}
\end{aligned}
$$
Comparing it with $(d s)^{2}=h_{1}^{2}\left(d u_{1}\right)^{2}+h_{2}^{2}\left(d u_{2}\right)^{2}+h_{3}^{2}\left(d u_{3}\right)^{2}$, we get
$$
h_{1}=1, \quad h_{2}=r, \quad h_{3}=1, \quad u_{1}=r, \quad u_{2}=\theta, \quad u_{3}=z
$$
Take $\hat{e}_{1}=\hat{e}_{r}, \hat{e}_{2}=\hat{e}_{\theta}, \hat{e}_{3}=\hat{e}_{z}$. Using these, we have
$$
\operatorname{grad} \Phi=\frac{\partial \Phi}{\partial r} \hat{e}_{r}+\frac{1}{r} \frac{\partial \Phi}{\partial \theta} \hat{e}_{\theta}+\frac{\partial \Phi}{\partial z} \hat{e}_{z}
$$
where $\Phi$ is a scalar point function,
$$
\operatorname{div} \vec{f}=\frac{1}{r}\left[\frac{\partial}{\partial r}\left(f_{1} r\right)+\frac{\partial}{\partial \theta}\left(f_{2}\right)+\frac{\partial}{\partial z}\left(f_{3} r\right)\right]
$$
$$
\operatorname{Curl} \vec{f}=\frac{1}{r}\left|\begin{array}{ccc}
\hat{e}_{r} & r \hat{e}_{\theta} & \hat{e}_{z} \\
\frac{\partial}{\partial r} & \frac{\partial}{\partial \theta} & \frac{\partial}{\partial z} \\
f_{1} & f_{2} r & f_{3}
\end{array}\right|
$$
$$
\begin{aligned}
\nabla^{2} \Phi & =\frac{1}{r}\left[\frac{\partial}{\partial r}\left(r \frac{\partial \Phi}{\partial r}\right)+\frac{\partial}{\partial \theta}\left(\frac{1}{r}
\frac{\partial \Phi}{\partial \theta}\right)+\frac{\partial}{\partial z}\left(r \frac{\partial \Phi}{\partial z}\right)\right] \\
& =\frac{\partial^{2} \Phi}{\partial r^{2}}+\frac{1}{r} \frac{\partial \Phi}{\partial r}+\frac{1}{r^{2}} \frac{\partial^{2} \Phi}{\partial \theta^{2}}+\frac{\partial^{2} \Phi}{\partial z^{2}}
\end{aligned}
$$

## SPHERICAL POLAR COORDINATES $(r, \theta, \phi)$

Here $OM=x$, $MN=y$, $PN=z$, $OP=r$, with $z=PN=r \cos \theta$ and $ON=r \sin \theta$.
In $\triangle OMN$, $x=OM=ON \cos \phi \Rightarrow x=r \sin \theta \cos \phi$, and $y=MN=ON \sin \phi \Rightarrow y=r \sin \theta \sin \phi$.

Now
$$
(d s)^{2}=(d x)^{2}+(d y)^{2}+(d z)^{2} \qquad (1)
$$
$$
\begin{aligned}
d x &=\sin \theta \cos \phi\, d r+r \cos \theta \cos \phi\, d \theta-r \sin \theta \sin \phi\, d \phi \\
d y &=\sin \theta \sin \phi\, d r+r \cos \theta \sin \phi\, d \theta+r \sin \theta \cos \phi\, d \phi \\
d z &=\cos \theta\, d r-r \sin \theta\, d \theta
\end{aligned} \qquad (2)
$$
Put the value of (2) in (1) and collect the coefficients of $(d r)^{2},(d \theta)^{2},(d \phi)^{2}$; we get
$$
(d s)^{2}=(d r)^{2}(1)+(d \theta)^{2}\left(r^{2}\right)+(d \phi)^{2}\left(r^{2} \sin ^{2} \theta\right)
$$
Comparing it with $(d s)^{2}=h_{1}^{2}\left(d u_{1}\right)^{2}+h_{2}^{2}\left(d u_{2}\right)^{2}+h_{3}^{2}\left(d u_{3}\right)^{2}$, we get
$$
h_{1}=1, \quad h_{2}=r, \quad h_{3}=r \sin \theta, \quad u_{1}=r, \quad u_{2}=\theta, \quad u_{3}=\phi, \quad \hat{e}_{1}=\hat{e}_{r}, \quad \hat{e}_{2}=\hat{e}_{\theta}, \quad \hat{e}_{3}=\hat{e}_{\varphi}
$$
So
$$
\operatorname{grad} \Phi=\hat{e}_{r} \frac{\partial \Phi}{\partial r}+\frac{\hat{e}_{\theta}}{r} \frac{\partial \Phi}{\partial \theta}+\frac{\hat{e}_{\varphi}}{r \sin \theta} \frac{\partial \Phi}{\partial \varphi}
$$
$$
\operatorname{div} \vec{f}=\frac{1}{r^{2} \sin \theta}\left[\frac{\partial}{\partial r}\left(f_{1} r^{2} \sin \theta\right)+\frac{\partial}{\partial \theta}\left(f_{2} r \sin \theta\right)+\frac{\partial}{\partial \varphi}\left(f_{3} r\right)\right]
$$
$$
\operatorname{Curl} \vec{f}=\frac{1}{r^{2} \sin \theta}\left|\begin{array}{ccc}
\hat{e}_{r} & r \hat{e}_{\theta} & r \sin \theta\, \hat{e}_{\varphi} \\
\frac{\partial}{\partial r} & \frac{\partial}{\partial \theta} & \frac{\partial}{\partial \varphi} \\
f_{1} & f_{2} r & f_{3} r \sin \theta
\end{array}\right|
$$
$$
\begin{gathered}
\nabla^{2} \Phi=\frac{1}{r^{2} \sin \theta}\left[\frac{\partial}{\partial r}\left(r^{2} \sin \theta \frac{\partial \Phi}{\partial r}\right)+\frac{\partial}{\partial \theta}\left(\sin \theta \frac{\partial \Phi}{\partial \theta}\right)+\frac{\partial}{\partial \varphi}\left(\frac{1}{\sin \theta} \frac{\partial \Phi}{\partial \varphi}\right)\right] \\
=\frac{\partial^{2} \Phi}{\partial r^{2}}+\frac{2}{r} \frac{\partial \Phi}{\partial r}+\frac{1}{r^{2}} \frac{\partial^{2} \Phi}{\partial \theta^{2}}+\frac{\cos \theta}{r^{2} \sin \theta}
\frac{\partial \Phi}{\partial \theta}+\frac{1}{r^{2} \sin ^{2} \theta} \frac{\partial^{2} \Phi}{\partial \varphi^{2}}
\end{gathered}
$$

Example:- Prove that the cylindrical coordinate system is orthogonal.

Solution :- The position vector of any point in cylindrical coordinates is
$$
\vec{r}=x \hat{i}+y \hat{j}+z \hat{k}=r \cos \theta \hat{i}+r \sin \theta \hat{j}+z \hat{k}
$$
The tangent vectors to the $r, \theta$ and $z$ curves are given by $\frac{\partial \vec{r}}{\partial r}, \frac{\partial \vec{r}}{\partial \theta}, \frac{\partial \vec{r}}{\partial z}$ respectively, where
$$
\begin{aligned}
\frac{\partial \vec{r}}{\partial r} &=\cos \theta \hat{i}+\sin \theta \hat{j} \\
\frac{\partial \vec{r}}{\partial \theta} &=-r \sin \theta \hat{i}+r \cos \theta \hat{j} \\
\frac{\partial \vec{r}}{\partial z} &=\hat{k}
\end{aligned}
$$
Therefore the unit vectors in these tangent directions are
$$
\hat{e}_{1}=\hat{e}_{r}=\frac{\partial \vec{r} / \partial r}{|\partial \vec{r} / \partial r|}=\frac{\cos \theta \hat{i}+\sin \theta \hat{j}}{\sqrt{\cos ^{2} \theta+\sin ^{2} \theta}}=\cos \theta \hat{i}+\sin \theta \hat{j}
$$
$$
\hat{e}_{2}=\hat{e}_{\theta}=\frac{\partial \vec{r} / \partial \theta}{|\partial \vec{r} / \partial \theta|}=\frac{-r \sin \theta \hat{i}+r \cos \theta \hat{j}}{\sqrt{r^{2} \sin ^{2} \theta+r^{2} \cos ^{2} \theta}}=-\sin \theta \hat{i}+\cos \theta \hat{j}
$$
$$
\hat{e}_{3}=\hat{e}_{z}=\frac{\partial \vec{r} / \partial z}{|\partial \vec{r} / \partial z|}=\hat{k}
$$
Now
$$
\begin{aligned}
\hat{e}_{r} \cdot \hat{e}_{\theta} &=(\cos \theta \hat{i}+\sin \theta \hat{j}) \cdot(-\sin \theta \hat{i}+\cos \theta \hat{j})=-\cos \theta \sin \theta+\sin \theta \cos \theta=0 \\
\hat{e}_{r} \cdot \hat{e}_{z} &=(\cos \theta \hat{i}+\sin \theta \hat{j}) \cdot \hat{k}=0 \\
\hat{e}_{\theta} \cdot \hat{e}_{z} &=(-\sin \theta \hat{i}+\cos \theta \hat{j}) \cdot \hat{k}=0 \\
\hat{e}_{r} \cdot \hat{e}_{r} &=(\cos \theta \hat{i}+\sin \theta \hat{j}) \cdot(\cos \theta \hat{i}+\sin \theta \hat{j})=\cos ^{2} \theta+\sin ^{2} \theta=1 \\
\hat{e}_{\theta} \cdot \hat{e}_{\theta} &=(-\sin \theta \hat{i}+\cos \theta \hat{j}) \cdot(-\sin \theta \hat{i}+\cos \theta \hat{j})=\sin ^{2} \theta+\cos ^{2} \theta=1 \\
\hat{e}_{z} \cdot \hat{e}_{z} &=\hat{k} \cdot \hat{k}=1
\end{aligned}
$$
This shows that $\hat{e}_{r}, \hat{e}_{\theta}, \hat{e}_{z}$ are mutually perpendicular and therefore the coordinate system is orthogonal.

Example:- (a) Find the unit vectors $\hat{e}_{r}, \hat{e}_{\theta}, \hat{e}_{\varphi}$ of a spherical co-ordinate system in terms of $\hat{i}, \hat{j}, \hat{k}$.
Solution :- The position vector of any point in spherical coordinates is
$$
\vec{R}=x \hat{i}+y \hat{j}+z \hat{k}=(r \sin \theta \cos \phi) \hat{i}+(r \sin \theta \sin \phi) \hat{j}+(r \cos \theta) \hat{k}
$$
We want to find $\frac{\partial \vec{R}}{\partial r}, \frac{\partial \vec{R}}{\partial \theta}, \frac{\partial \vec{R}}{\partial \varphi}$.
Now $\frac{\partial \vec{R}}{\partial r}=\sin \theta \cos \phi \hat{i}+\sin \theta \sin \phi \hat{j}+\cos \theta \hat{k}$. So
$$
\hat{e}_{r}=\frac{\partial \vec{R} / \partial r}{|\partial \vec{R} / \partial r|}=\frac{\sin \theta \cos \varphi \hat{i}+\sin \theta \sin \varphi \hat{j}+\cos \theta \hat{k}}{\sqrt{\sin ^{2} \theta\left(\cos ^{2} \varphi+\sin ^{2} \varphi\right)+\cos ^{2} \theta}}
$$
$\Rightarrow \quad \hat{e}_{r}=\sin \theta \cos \phi \hat{i}+\sin \theta \sin \phi \hat{j}+\cos \theta \hat{k}$
Also $\frac{\partial \vec{R}}{\partial \theta}=r \cos \theta \cos \phi \hat{i}+r \cos \theta \sin \phi \hat{j}-r \sin \theta \hat{k}$. So
$$
\hat{e}_{\theta}=\frac{\partial \vec{R} / \partial \theta}{|\partial \vec{R} / \partial \theta|}=\frac{r \cos \theta \cos \varphi \hat{i}+r \cos \theta \sin \varphi \hat{j}-r \sin \theta \hat{k}}{\sqrt{r^{2} \cos ^{2} \theta\left(\cos ^{2} \varphi+\sin ^{2} \varphi\right)+r^{2} \sin ^{2} \theta}}=\frac{r \cos \theta \cos \varphi \hat{i}+r \cos \theta \sin \varphi \hat{j}-r \sin \theta \hat{k}}{r}
$$
$\Rightarrow \quad \hat{e}_{\theta}=\cos \theta \cos \phi \hat{i}+\cos \theta \sin \phi \hat{j}-\sin \theta \hat{k}$
Also $\frac{\partial \vec{R}}{\partial \varphi}=-r \sin \theta \sin \phi \hat{i}+r \sin \theta \cos \phi \hat{j}+0 \cdot \hat{k}$. So
$$
\hat{e}_{\varphi}=\frac{\partial \vec{R} / \partial \varphi}{|\partial \vec{R} / \partial \varphi|}=\frac{-r \sin \theta \sin \varphi \hat{i}+r \sin \theta \cos \varphi \hat{j}}{\sqrt{r^{2} \sin ^{2} \theta\left(\sin ^{2} \varphi+\cos ^{2} \varphi\right)}}=\frac{-r \sin \theta \sin \varphi \hat{i}+r \sin \theta \cos \varphi \hat{j}}{r \sin \theta}
$$
$\Rightarrow \quad \hat{e}_{\varphi}=-\sin \phi \hat{i}+\cos \phi \hat{j}$

Example:- Prove that a spherical coordinate system is orthogonal.

Solution :-
$$
\begin{aligned}
\hat{e}_{r} \cdot \hat{e}_{\theta} &=(\sin \theta \cos \phi \hat{i}+\sin \theta \sin \phi \hat{j}+\cos \theta \hat{k}) \cdot(\cos \theta \cos \phi \hat{i}+\cos \theta \sin \phi \hat{j}-\sin \theta \hat{k}) \\
&=\cos ^{2} \phi \sin \theta \cos \theta+\sin ^{2} \phi \sin \theta \cos \theta-\sin \theta \cos \theta \\
&=\sin \theta \cos \theta\left(\sin ^{2} \phi+\cos ^{2} \phi\right)-\sin \theta \cos \theta=0
\end{aligned}
$$
$$
\begin{aligned}
\hat{e}_{\theta} \cdot \hat{e}_{\varphi} &=(\cos \theta \cos \phi \hat{i}+\cos \theta \sin \phi \hat{j}-\sin \theta \hat{k}) \cdot(-\sin \phi \hat{i}+\cos \phi \hat{j}) \\
&=-\sin \phi \cos \phi \cos \theta+\cos \theta \cos \phi \sin \phi-0=0
\end{aligned}
$$
Also
$$
\begin{aligned}
\hat{e}_{\varphi} \cdot \hat{e}_{r} &=(-\sin \phi \hat{i}+\cos \phi \hat{j}) \cdot(\sin \theta \cos \phi \hat{i}+\sin \theta \sin \phi \hat{j}+\cos \theta \hat{k}) \\
&=-\sin \theta \sin \phi \cos \phi+\sin \theta \sin \phi \cos \phi+0=0
\end{aligned}
$$
and
$$
\begin{aligned}
\hat{e}_{r} \cdot \hat{e}_{r} &=\cos ^{2} \phi \sin ^{2} \theta+\sin ^{2} \phi \sin ^{2} \theta+\cos ^{2} \theta=\sin ^{2} \theta\left(\cos ^{2} \phi+\sin ^{2} \phi\right)+\cos ^{2} \theta=1 \\
\hat{e}_{\theta} \cdot \hat{e}_{\theta} &=\cos ^{2} \phi \cos ^{2} \theta+\cos ^{2} \theta \sin ^{2} \phi+\sin ^{2} \theta=\cos ^{2} \theta\left(\sin ^{2} \phi+\cos ^{2} \phi\right)+\sin ^{2} \theta=1 \\
\hat{e}_{\varphi} \cdot \hat{e}_{\varphi} &=(-\sin \phi \hat{i}+\cos \phi \hat{j}) \cdot(-\sin \phi \hat{i}+\cos \phi \hat{j})=\sin ^{2} \phi+\cos ^{2} \phi=1
\end{aligned}
$$
This shows that $\hat{e}_{r}, \hat{e}_{\theta}, \hat{e}_{\varphi}$ are mutually perpendicular and therefore the coordinate system is orthogonal.
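The orthonormality relations verified above can also be spot-checked numerically. The following short Python sketch (an illustration, not part of the original notes; it uses only the standard library) builds the spherical unit vectors $\hat{e}_{r}, \hat{e}_{\theta}, \hat{e}_{\varphi}$ from their Cartesian components as derived above and confirms that every dot product is $0$ or $1$ at an arbitrary sample point:

```python
import math

def spherical_unit_vectors(theta, phi):
    """Cartesian components of e_r, e_theta, e_phi, as derived above."""
    e_r = (math.sin(theta) * math.cos(phi),
           math.sin(theta) * math.sin(phi),
           math.cos(theta))
    e_t = (math.cos(theta) * math.cos(phi),
           math.cos(theta) * math.sin(phi),
           -math.sin(theta))
    e_p = (-math.sin(phi), math.cos(phi), 0.0)
    return e_r, e_t, e_p

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Check orthonormality at an arbitrary sample point (theta, phi).
e_r, e_t, e_p = spherical_unit_vectors(0.7, 1.3)
for u in (e_r, e_t, e_p):
    assert abs(dot(u, u) - 1.0) < 1e-12   # each vector has unit length
assert abs(dot(e_r, e_t)) < 1e-12         # mutually perpendicular
assert abs(dot(e_t, e_p)) < 1e-12
assert abs(dot(e_p, e_r)) < 1e-12
print("orthonormal")
```

Since the check holds for arbitrary $\theta, \phi$, it mirrors the symbolic proof: the trigonometric identities $\sin^{2}+\cos^{2}=1$ do the work in both cases.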
Example:- Represent the vector $\vec{A}=2 y \hat{i}-z \hat{j}+3 x \hat{k} \qquad (1)$ in spherical coordinates.

Solution :- Here
$$
x=r \sin \theta \cos \phi, \quad y=r \sin \theta \sin \phi, \quad z=r \cos \theta \qquad (2)
$$
and
$$
\begin{aligned}
\hat{e}_{r} &=\sin \theta \cos \phi \hat{i}+\sin \theta \sin \phi \hat{j}+\cos \theta \hat{k} \\
\hat{e}_{\theta} &=\cos \theta \cos \phi \hat{i}+\cos \theta \sin \phi \hat{j}-\sin \theta \hat{k} \\
\hat{e}_{\varphi} &=-\sin \phi \hat{i}+\cos \phi \hat{j}
\end{aligned} \qquad (3)
$$
Solving (3), we get
$$
\begin{aligned}
\hat{i} &=\hat{e}_{r} \sin \theta \cos \phi+\hat{e}_{\theta} \cos \theta \cos \phi-\hat{e}_{\varphi} \sin \phi \\
\hat{j} &=\hat{e}_{r} \sin \theta \sin \phi+\hat{e}_{\theta} \cos \theta \sin \phi+\hat{e}_{\varphi} \cos \phi \\
\hat{k} &=\hat{e}_{r} \cos \theta-\hat{e}_{\theta} \sin \theta
\end{aligned} \qquad (4)
$$
Put (2) \& (4) in (1); we get
$$
\begin{aligned}
\vec{A} &=2 r \sin \theta \sin \phi\left(\hat{e}_{r} \sin \theta \cos \phi+\hat{e}_{\theta} \cos \theta \cos \phi-\hat{e}_{\varphi} \sin \phi\right) \\
&\quad-r \cos \theta\left(\hat{e}_{r} \sin \theta \sin \phi+\hat{e}_{\theta} \cos \theta \sin \phi+\hat{e}_{\varphi} \cos \phi\right) \\
&\quad+3 r \sin \theta \cos \phi\left(\hat{e}_{r} \cos \theta-\hat{e}_{\theta} \sin \theta\right) \\
\Rightarrow \quad \vec{A} &=\hat{e}_{r}\left(2 r \sin ^{2} \theta \sin \phi \cos \phi-r \sin \theta \cos \theta \sin \phi+3 r \sin \theta \cos \theta \cos \phi\right) \\
&\quad+\hat{e}_{\theta}\left(2 r \sin \theta \cos \theta \sin \phi \cos \phi-r \cos ^{2} \theta \sin \phi-3 r \sin ^{2} \theta \cos \phi\right) \\
&\quad+\hat{e}_{\varphi}\left(-2 r \sin \theta \sin ^{2} \phi-r \cos \theta \cos \phi\right) \\
\Rightarrow \quad \vec{A} &=A_{r} \hat{e}_{r}+A_{\theta} \hat{e}_{\theta}+A_{\varphi} \hat{e}_{\varphi}
\end{aligned}
$$

Example :- Prove that for the cylindrical coordinate system $(r, \theta, z)$,
$$
\frac{d}{d t} \hat{e}_{r}=\dot{\theta} \hat{e}_{\theta}, \qquad \frac{d}{d t} \hat{e}_{\theta}=-\dot{\theta} \hat{e}_{r}
$$
Solution :- We have
$$
\hat{e}_{r}=\cos \theta \hat{i}+\sin \theta \hat{j} \qquad (1)
$$
$$
\hat{e}_{\theta}=-\sin \theta \hat{i}+\cos \theta \hat{j} \qquad (2)
$$
$$
\therefore \quad \frac{d}{d t} \hat{e}_{r}=(-\sin \theta) \dot{\theta} \hat{i}+(\cos \theta) \dot{\theta} \hat{j}=(-\sin \theta \hat{i}+\cos \theta \hat{j}) \dot{\theta}=\dot{\theta} \hat{e}_{\theta} \quad \text{[using (2)]}
$$
Also $\frac{d}{d t} \hat{e}_{\theta}=(-\cos \theta) \dot{\theta} \hat{i}+(-\sin \theta) \dot{\theta} \hat{j}$
$$
\begin{aligned}
&=-(\cos \theta \hat{i}+\sin \theta \hat{j}) \dot{\theta} \\
\Rightarrow \quad \frac{d}{d t} \hat{e}_{\theta} &=-\dot{\theta}
\hat{\mathrm{e}}_{\mathrm{r}}[\operatorname{using}(1)] \end{aligned} $$ Example:- Express the velocity $\forall$ and acceleration $\hat{a}$ of a particle in cylindrical coordinates. Solution :- Position vector of a particle P in rectangular coordinates $$ \mathrm{r}=\mathrm{x} \hat{\mathrm{i}}+\mathrm{y} \hat{\mathrm{j}}+\mathrm{z} \hat{\mathrm{k}} $$ vector $\mathrm{r}$ in cylindrical coordinate system is $$ \mathrm{r}=\mathrm{r} \cos \theta\left(\cos \theta \hat{\mathrm{e}}_{\mathrm{r}}-\sin \theta \hat{\mathrm{e}}_{\theta}\right) $$ $$ \begin{aligned} & \quad+r \sin \theta\left(\sin \theta \hat{\mathrm{e}}_{\mathrm{r}}+\cos \theta \hat{\mathrm{e}}_{\theta}\right)+\mathrm{z} \hat{\mathrm{e}}_{\mathrm{z}} \\ & \Rightarrow \quad \mathrm{\rho}=\mathrm{r} \cos ^{2} \theta \hat{\mathrm{e}}_{\mathrm{r}}-\mathrm{r} \cos \theta \sin \theta \hat{\mathrm{e}}_{\theta}+\mathrm{r} \sin ^{2} \theta \hat{\mathrm{e}}_{\mathrm{r}}+\mathrm{r} \sin \theta \cos \theta \hat{\mathrm{e}}_{\theta}+\mathrm{z} \hat{\mathrm{e}}_{\mathrm{z}} \\ & \Rightarrow \quad \mathrm{r}=\mathrm{r} \hat{\mathrm{e}}_{\mathrm{r}}+\mathrm{z} \hat{\mathrm{e}}_{\mathrm{z}} \end{aligned} $$ Then velocity $\rho=\frac{d \mu}{d t}$ $$ \begin{aligned} \Rightarrow \quad \rho & =\frac{d r}{d t} \hat{e}_{r}+r \frac{d}{d t} \hat{e}_{r}+\frac{d z}{d t} \hat{e}_{z}+z \frac{d}{d t} \hat{e}_{z} \\ \rho & =\& \hat{e}_{r}+r \& \hat{e}_{\theta}+\& \hat{e}_{z} \quad\left[\Theta \frac{d}{d t} \hat{e}_{z}=0\right] \end{aligned} $$ Differentiating (1) again, we obtain acceleration $\rho=\frac{d \rho}{d t}=\frac{d^{2} \rho}{d t^{2}}$ $$ \begin{aligned} \Rightarrow \quad \rho & =\frac{d}{d t}\left(\& \hat{e}_{r}+r \& \hat{e}_{\theta}+z \hat{e}_{z}\right) \\ & =\hat{e}_{r}+\frac{d}{d t} \hat{e}_{r}+r \hat{e}_{r}+r \& \frac{d}{d t} \hat{e}_{\theta}+\frac{d}{d t} \hat{e}_{z} \end{aligned} $$ using $\frac{\mathrm{d}}{\mathrm{dt}} \hat{\mathrm{e}}_{\mathrm{r}}=\& \hat{\mathrm{e}}_{\theta}, \frac{\mathrm{d}}{\mathrm{dt}} 
\hat{\mathrm{e}}_{\theta}=-\& \hat{\mathrm{e}}_{\mathrm{r}}, \frac{\mathrm{d}}{\mathrm{dt}} \hat{\mathrm{e}}_{\mathrm{z}}=0$, we get $$ \begin{aligned} & \Rightarrow \quad \rho=(r z) \hat{e}_{\mathrm{r}}+(\mathrm{r}+2 \&) \hat{\mathrm{e}}_{\theta}+\hat{\mathrm{e}}_{\mathrm{z}} \end{aligned} $$ Example: - Prove that in spherical coordinates (r, $\theta, \phi)$ $$ \begin{aligned} & \frac{\mathrm{d}}{\mathrm{dt}} \hat{\mathrm{e}}_{\mathrm{r}}=\hat{\mathrm{e}}_{\&}=\& \hat{\mathrm{e}}_{\theta}+\sin \theta \& \hat{\mathrm{e}}_{\varphi} \\ & \frac{\mathrm{d}}{\mathrm{dt}} \hat{\mathrm{e}}_{\theta}=\hat{\mathrm{e}}_{\phi c}=-\& \hat{\mathrm{e}}_{\mathrm{r}}+\cos \theta \& \hat{\mathrm{e}}_{\varphi} \\ & \frac{\mathrm{d}}{\mathrm{dt}} \hat{\mathrm{e}}_{\varphi}=\hat{\mathrm{e}}_{\phi c}=-\sin \theta \& \hat{\mathrm{e}}_{\mathrm{r}}-\cos \theta \& \hat{\mathrm{e}}_{\theta} \end{aligned} $$ Proof: - Now $\hat{\mathrm{e}}_{\mathrm{r}}=\sin \theta \cos \varphi \mathrm{i}+\sin \theta \sin \varphi \hat{\mathrm{j}}+\cos \theta \hat{\mathrm{k}}$ $\therefore \quad \frac{\mathrm{d}}{\mathrm{dt}} \hat{\mathrm{e}}_{\mathrm{r}}=\cos \theta \cos \varphi \& \hat{\mathrm{i}}+\sin \theta(-\sin \varphi) \& \hat{\mathrm{i}}$ $$ +\cos \theta \cos \varphi \hat{i} \sin \theta \cos \varphi \phi \hat{\mathrm{j}}-(\sin \theta) \& \hat{\mathrm{k}} $$ $$ \begin{aligned} = & \phi(\cos \theta \cos \varphi \hat{i}+\cos \theta \sin \varphi \hat{j}-\sin \theta \hat{k}) \\ & +\sin \theta \phi(-\sin \varphi \hat{i}+\cos \varphi \hat{j}) \end{aligned} $$ using $\hat{\mathrm{e}}_{\theta}=\cos \theta \cos \varphi \hat{\mathrm{i}}+\cos \theta \sin \varphi \hat{\mathrm{j}}-\sin \theta \hat{\mathrm{k}}$ $$ \hat{\mathrm{e}}_{\varphi}=-\sin \varphi \hat{\mathrm{i}}+\cos \varphi \hat{\mathrm{j}} $$ Put (3) \\& (4) in (2), we get $$ \frac{\mathrm{d}}{\mathrm{dt}} \hat{\mathrm{e}}_{\mathrm{r}}=\theta \hat{\mathrm{e}}_{\theta}+\sin \theta \& \hat{\mathrm{e}}_{\varphi} $$ Also from(3), $$ \begin{aligned} & 
\frac{\mathrm{d}}{\mathrm{dt}} \hat{\mathrm{e}}_{\theta}=-\sin \theta \cos \varphi \theta \hat{\mathrm{i}}-\cos \theta \sin \varphi \phi \hat{\mathrm{j}} \\ & -\sin \theta \sin \varphi \& \hat{\mathrm{i}}-\cos \theta \sin \varphi \phi \hat{\mathrm{j}}-\cos \theta \hat{\mathrm{k}} \& \\ & =\& \sin \theta \cos \varphi \hat{i}+\operatorname{in} \theta \sin \varphi \hat{j}+\cos \theta \hat{k})+\cos \theta \phi(-\sin \varphi \hat{i}+\cos \varphi \hat{j}) \\ & \Rightarrow \quad \frac{\mathrm{d}}{\mathrm{dt}} \hat{\mathrm{e}}_{\theta}=-\hat{\mathrm{e}}_{\mathrm{r}}+\cos \theta \hat{\mathrm{e}}_{\varphi} \& \end{aligned} $$ Also from (4), $\frac{d}{d t} \hat{e}_{\varphi}=-\cos \varphi \phi \hat{\mathrm{i}}-\sin \varphi \&-\sin \varphi \phi \hat{\mathrm{j}}$ Taking R.H.S of equation $(*)$ R.H.S $=-\sin \theta \& \hat{e}_{r}-\cos g q \& \hat{e}_{\theta}$ $$ \begin{aligned} & =-\sin \theta \phi(\sin \theta \cos \varphi \hat{\mathrm{i}}+\sin \theta \sin \varphi \hat{\mathrm{j}}+\cos \theta \hat{\mathrm{k}}) \\ & \begin{array}{ll} -\cos \theta \phi(\cos \theta \cos \varphi \hat{i}+\cos \theta \sin \varphi \hat{j}-\sin \theta \hat{k}) \quad[u \operatorname{sing}(1) \&(3)] \end{array} \\ & =-\sin ^{2} \cos \phi \phi \hat{\mathrm{j}}-\phi \sin ^{2} \theta \sin \varphi \hat{\mathrm{j}}-\phi \& \sin \theta \cos \theta \hat{\mathrm{k}} \\ & -\cos ^{2} \theta \cos \phi \& \hat{\mathrm{i}}-\cos ^{2} \theta \sin \phi \& \hat{\mathrm{j}}+\sin \theta \cos \theta \& \hat{\mathrm{k}} \\ & =-\phi \hat{i} \cos \varphi\left(\sin ^{2} \theta+\cos ^{2} \theta\right)-\phi \hat{j} \sin \phi\left(\sin ^{2} \theta+\cos ^{2} \theta\right) \\ & =-\phi \cos \phi \hat{i}-\sin \varphi \hat{j}=\text { L.H.S. } \quad \text { [from (5)] } \\ & \Rightarrow \quad \text { L.H.S. }=\text { R.H.S. 
} \end{aligned} $$ So $\frac{\mathrm{d}}{\mathrm{dt}} \hat{\mathrm{e}}_{\varphi}=-\sin \theta \& \hat{\mathrm{e}}_{\mathrm{r}}-\cos \theta \& \hat{\mathrm{e}}_{\theta}$ Example:- Express the velocity $\vdash$ and acceleration $\stackrel{a}{a}$ of a particle in spherical coordinates. Solution : Position vector of a particle $\mathrm{P}$ in rectangular coordinates $\rho=x \hat{i}+y \hat{j}+z \hat{k}$ vector $r$ in spherical coordinate system is $\mathrm{r}=\mathrm{r} \cos \phi \sin \theta\left(\hat{\mathrm{e}}_{\mathrm{r}} \sin \theta \cos \varphi+\hat{\mathrm{e}}_{\theta} \cos \theta \cos \varphi-\hat{\mathrm{e}}_{\varphi} \sin \phi\right)$ $+\mathrm{r} \sin \theta \sin \phi\left(\hat{\mathrm{e}}_{\mathrm{r}} \sin \theta \sin \varphi+\hat{\mathrm{e}}_{\theta} \cos \theta \sin \varphi+\hat{\mathrm{e}}_{\varphi} \cos \varphi\right)$ $+r \cos \theta\left(\hat{\mathrm{e}}_{\mathrm{r}} \cos \theta-\hat{\mathrm{e}}_{\theta} \sin \theta\right)$ $\Rightarrow \quad \mathrm{r}=\mathrm{r} \hat{\mathrm{e}}_{\mathrm{r}}\left(\sin ^{2} \theta \cos ^{2} \varphi+\sin ^{2} \theta \sin ^{2} \varphi+\cos ^{2} \theta\right)$ $+\mathrm{r} \hat{\mathrm{e}}_{\theta}\left(\sin \theta \cos \theta \cos ^{2} \varphi+\sin \theta \sin ^{2} \varphi \cos \theta-\cos \theta \sin \theta\right)$ $+\mathrm{r} \hat{\mathrm{e}}_{\varphi}(-\cos \varphi \sin \theta \sin \varphi+\sin \theta \sin \varphi \cos \varphi)$ $\Rightarrow \quad \mathrm{r}=\mathrm{r} \hat{\mathrm{e}}_{\mathrm{r}}\left[\sin ^{2} \theta\left(\sin ^{2} \phi+\cos ^{2} \phi\right)+\cos ^{2} \theta\right]$ $+\mathrm{r} \hat{\mathrm{e}}_{\theta}\left[\sin \theta \cos \theta\left(\sin ^{2} \phi+\cos ^{2} \phi\right)-\cos \theta \sin \theta\right]$ $=\mathrm{r} \hat{\mathrm{e}}_{\mathrm{r}}\left(\sin ^{2} \theta+\cos ^{2} \theta\right)+\mathrm{r} \hat{\mathrm{e}}_{\theta}(\sin \theta \cos \theta-\sin \theta \cos \theta)$ $\Rightarrow \quad \stackrel{\mu}{r}=r \hat{e}_{r}(1)=r \hat{e}_{r}$ Then velocity $\rho$ is $\rho=\frac{d \mu}{d 
t}=\frac{d}{d t}\left(r \hat{e}_{r}\right)$ $\Rightarrow \quad \rho=\frac{d r}{d t} \hat{e}_{r}+r \frac{d}{d t}\left(\hat{e}_{r}\right)$ $=\& \hat{e}_{r}+r\left(\& \hat{e}_{\theta}+\sin \theta \& \hat{e}_{\varphi}\right) \quad$ [from previous eg.] $=\& \hat{\mathrm{e}}_{\mathrm{r}}+\mathrm{r} \& \hat{\mathrm{e}}_{\theta}+\mathrm{r} \sin \theta \& \hat{\mathrm{e}}_{\varphi}$ Differentiating (1) again, we obtain acceleration $$ \begin{aligned} \rho & \frac{\mathrm{d} \rho}{\mathrm{dt}}=\frac{\mathrm{d}^{2} \rho}{\mathrm{dt}^{2}} \\ \Rightarrow \quad \rho & =\frac{\mathrm{d}}{\mathrm{dt}}\left(\& \hat{\mathrm{e}}_{\mathrm{r}}+\mathrm{r} \& \hat{\mathrm{e}}_{\theta}+\mathrm{r} \sin \theta \& \hat{\mathrm{e}}_{\varphi}\right) \\ = & \& \hat{\mathrm{e}}_{\mathrm{r}}+\& \frac{\mathrm{d}}{\mathrm{dt}}\left(\hat{\mathrm{e}}_{\mathrm{r}}\right)+\& \& \hat{\mathrm{e}}_{\theta}+\mathrm{r} \hat{\mathrm{e}}_{\theta}+\mathrm{r} \& \frac{\mathrm{d}}{\mathrm{dt}}\left(\hat{\mathrm{e}}_{\theta}\right) \\ & +\& \sin \theta \& \hat{\mathrm{e}}_{\varphi}+\mathrm{r} \cos \theta \& \& \hat{\mathrm{e}}_{\varphi}+\mathrm{r} \sin \theta \& \hat{\mathrm{e}}_{\varphi}+\mathrm{r} \sin \theta \& \frac{\mathrm{d}}{\mathrm{dt}}\left(\hat{\mathrm{e}}_{\varphi}\right) \\ = & \left.\& \hat{\mathrm{e}}_{\mathrm{r}}+\& \& \hat{\mathrm{e}}_{\theta}+\sin \theta \& \hat{\mathrm{e}}_{\varphi}\right)+\& \& \hat{\mathrm{e}}_{\theta}+\mathrm{r} \& \hat{\mathrm{e}}_{\theta} \\ & \left.+\mathrm{r} \&-\& \hat{\mathrm{e}}_{\mathrm{r}}+\cos \theta \& \hat{\mathrm{e}}_{\varphi}\right)+\& \sin \theta \& \hat{\mathrm{e}}_{\varphi} \end{aligned} $$ $$ \begin{aligned} & +r \cos \theta \& \hat{e}_{\varphi} \&+r \sin \theta \& \hat{e}_{\varphi} \\ & +r \sin \theta \phi\left(-\sin \theta \& \hat{\mathbf{e}}_{\mathrm{r}}-\cos \theta \phi \hat{\mathrm{e}}_{\theta}\right) \\ & \left.\Rightarrow \quad \rho=\hat{\mathrm{e}}_{\mathrm{r}} \mathrm{r}-\phi^{2} \mathrm{r} \sin ^{2} \theta\right) \\ & 
+\hat{\mathrm{e}}_{\theta}\left[2 \& \&+\mathrm{r} \operatorname{rin} \theta \cos \theta \&^{2}\right] \\ & +\hat{\mathrm{e}}_{\varphi}[2 \& \& \sin \theta+2 \mathrm{r} \& \& \cos \theta+\mathrm{r} \sin \theta \&] \\ & \left.\Rightarrow \quad \rho \mathrm{a}=\hat{\mathrm{e}}_{\mathrm{r}} \mathrm{r} \theta^{Z}-\mathrm{r} \sin ^{2} \theta \phi^{2}\right) \\ & +\hat{\mathrm{e}}_{\theta}\left[\frac{1}{\mathrm{r}} \frac{\mathrm{d}}{\mathrm{dt}}\left(\mathrm{r}^{2} \Leftrightarrow-\mathrm{r} \sin \theta \cos \theta \phi^{2}\right]\right. \\ & +\hat{\mathrm{e}}_{\varphi}\left[\frac{1}{\mathrm{r} \sin \theta} \frac{\mathrm{d}}{\mathrm{dt}}\left(\mathrm{r}^{2} \operatorname{sn}^{2} \theta \phi\right)\right] \end{aligned} $$ $\left[\Theta \frac{1}{\mathrm{r}} \frac{\mathrm{d}}{\mathrm{dt}}\left(\mathrm{r}^{2} \Leftrightarrow\right)=\frac{1}{\mathrm{r}}\left(\mathrm{r}^{2}+\mathrm{r} \&\right)=\mathrm{r}+2 \&\right.$ and $\frac{1}{\mathrm{r} \sin \theta} \frac{\mathrm{d}}{\mathrm{dt}}\left(\mathrm{r}^{2} \sin ^{2} \theta \phi\right)$ $$ \begin{aligned} & =\frac{1}{r \sin \theta}\left[r^{2} \sin ^{2} \theta \&+2 r \& \sin ^{2} \theta \&+\& r^{2} 2 \sin \theta(\cos \theta) \& f\right. \\ & =r \sin \theta \& 2 \& \& \sin \theta+2 r \& \& \cos \theta] \end{aligned} $$ So (2) is required expression for acceleration of a particle in spherical coordinate. 
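The component formulas in (2) can be checked symbolically: differentiate the Cartesian position of a particle on a general trajectory $\left(r(t),\theta(t),\varphi(t)\right)$ twice and project onto the spherical unit vectors. A minimal sketch, assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t')
r, th, ph = (sp.Function(name)(t) for name in ('r', 'theta', 'phi'))

# Cartesian position of the particle on the trajectory (r(t), theta(t), phi(t))
pos = sp.Matrix([r*sp.sin(th)*sp.cos(ph),
                 r*sp.sin(th)*sp.sin(ph),
                 r*sp.cos(th)])
acc = pos.diff(t, 2)                       # Cartesian acceleration

# spherical unit vectors written out in Cartesian components
e_r  = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph),  sp.cos(th)])
e_th = sp.Matrix([sp.cos(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph), -sp.sin(th)])
e_ph = sp.Matrix([-sp.sin(ph), sp.cos(ph), 0])

thd, phd = th.diff(t), ph.diff(t)

# the component formulas derived above
a_r  = r.diff(t, 2) - r*thd**2 - r*sp.sin(th)**2*phd**2
a_th = sp.diff(r**2*thd, t)/r - r*sp.sin(th)*sp.cos(th)*phd**2
a_ph = sp.diff(r**2*sp.sin(th)**2*phd, t)/(r*sp.sin(th))

# projecting the Cartesian acceleration onto each unit vector
# must reproduce the formulas exactly
assert sp.simplify(acc.dot(e_r)  - a_r)  == 0
assert sp.simplify(acc.dot(e_th) - a_th) == 0
assert sp.simplify(acc.dot(e_ph) - a_ph) == 0
```

The same projection with `pos.diff(t, 1)` verifies the velocity formula (1).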
Example:- Prove directly that $\dfrac{d}{dt}\hat{e}_\varphi=-\sin\theta\,\dot{\varphi}\,\hat{e}_r-\cos\theta\,\dot{\varphi}\,\hat{e}_\theta$.

Solution :- Now $\hat{e}_\varphi=-\sin\varphi\,\hat{i}+\cos\varphi\,\hat{j}$
$$\Rightarrow\quad \frac{d}{dt}\hat{e}_\varphi=-\cos\varphi\,\dot{\varphi}\,\hat{i}-\sin\varphi\,\dot{\varphi}\,\hat{j} \quad (5)$$
Putting the spherical expressions for $\hat{i}$ and $\hat{j}$ in (5), we get
$$
\begin{aligned}
\frac{d}{dt}\hat{e}_\varphi&=-\cos\varphi\,\dot{\varphi}\left(\hat{e}_r\sin\theta\cos\varphi+\hat{e}_\theta\cos\theta\cos\varphi-\hat{e}_\varphi\sin\varphi\right)\\
&\quad-\sin\varphi\,\dot{\varphi}\left(\hat{e}_r\sin\theta\sin\varphi+\hat{e}_\theta\cos\theta\sin\varphi+\hat{e}_\varphi\cos\varphi\right)\\
&=-\sin\theta\,\dot{\varphi}\,\hat{e}_r\left(\cos^2\varphi+\sin^2\varphi\right)-\cos\theta\,\dot{\varphi}\,\hat{e}_\theta\left(\cos^2\varphi+\sin^2\varphi\right)+\hat{e}_\varphi\cos\varphi\sin\varphi\,\dot{\varphi}-\hat{e}_\varphi\cos\varphi\sin\varphi\,\dot{\varphi}\\
&=-\sin\theta\,\dot{\varphi}\,\hat{e}_r-\cos\theta\,\dot{\varphi}\,\hat{e}_\theta. \quad \text{Hence proved.}
\end{aligned}
$$

Example:- Represent the vector $\vec{A}=z\hat{i}-2x\hat{j}+y\hat{k}$ in cylindrical coordinates.

Solution :- In cylindrical coordinates,
$$x=r\cos\theta,\qquad y=r\sin\theta,\qquad z=z \quad (1)$$
and
$$\hat{e}_r=\cos\theta\,\hat{i}+\sin\theta\,\hat{j},\qquad \hat{e}_\theta=-\sin\theta\,\hat{i}+\cos\theta\,\hat{j},\qquad \hat{e}_z=\hat{k} \quad (2)$$
Using (1), the given vector becomes
$$\vec{A}=z\hat{i}-2r\cos\theta\,\hat{j}+r\sin\theta\,\hat{k} \quad (3)$$
Solving (2) for $\hat{i}$ and $\hat{j}$: operating $\hat{e}_r\times\sin\theta+\hat{e}_\theta\times\cos\theta$,
$$\hat{e}_r\sin\theta+\hat{e}_\theta\cos\theta=\cos\theta\sin\theta\,\hat{i}+\sin^2\theta\,\hat{j}-\sin\theta\cos\theta\,\hat{i}+\cos^2\theta\,\hat{j}=\left(\sin^2\theta+\cos^2\theta\right)\hat{j}=\hat{j} \quad (4)$$
Similarly,
$$\hat{e}_r\cos\theta-\hat{e}_\theta\sin\theta=\cos^2\theta\,\hat{i}+\sin\theta\cos\theta\,\hat{j}+\sin^2\theta\,\hat{i}-\cos\theta\sin\theta\,\hat{j}=\hat{i} \quad (5)$$
Putting (4), (5) and $\hat{k}=\hat{e}_z$ in (3), we get
$$
\begin{aligned}
\vec{A}&=z\left(\hat{e}_r\cos\theta-\hat{e}_\theta\sin\theta\right)-2r\cos\theta\left(\hat{e}_r\sin\theta+\hat{e}_\theta\cos\theta\right)+r\sin\theta\,\hat{e}_z\\
&=\hat{e}_r\left(z\cos\theta-2r\sin\theta\cos\theta\right)-\hat{e}_\theta\left(z\sin\theta+2r\cos^2\theta\right)+\hat{e}_z\left(r\sin\theta\right)
\end{aligned}
$$
$$\Rightarrow\quad \vec{A}=A_r\hat{e}_r+A_\theta\hat{e}_\theta+A_z\hat{e}_z,$$
where
$$A_r=z\cos\theta-2r\sin\theta\cos\theta,\qquad A_\theta=-z\sin\theta-2r\cos^2\theta,\qquad A_z=r\sin\theta,$$
which is the required representation of $\vec{A}$ in cylindrical coordinates.

Example:- Represent the vector $\vec{A}=z\hat{i}-2x\hat{j}+y\hat{k}$ in spherical coordinates $(r,\theta,\phi)$.

Solution :- In spherical coordinates,
$$x=r\sin\theta\cos\phi,\qquad y=r\sin\theta\sin\phi,\qquad z=r\cos\theta \quad (1)$$
and
$$
\begin{aligned}
\hat{e}_r&=\sin\theta\cos\phi\,\hat{i}+\sin\theta\sin\phi\,\hat{j}+\cos\theta\,\hat{k}\\
\hat{e}_\theta&=\cos\theta\cos\phi\,\hat{i}+\cos\theta\sin\phi\,\hat{j}-\sin\theta\,\hat{k}\\
\hat{e}_\varphi&=-\sin\phi\,\hat{i}+\cos\phi\,\hat{j}
\end{aligned} \quad (2)
$$
Solving system (2) for $\hat{i},\hat{j},\hat{k}$ by Cramer's rule,
$$
D=\begin{vmatrix}\sin\theta\cos\varphi&\sin\theta\sin\varphi&\cos\theta\\ \cos\theta\cos\varphi&\cos\theta\sin\varphi&-\sin\theta\\ -\sin\varphi&\cos\varphi&0\end{vmatrix}
=\sin^2\theta\cos^2\phi+\sin^2\theta\sin^2\phi+\cos^2\theta=1
$$
$$
D_1=\begin{vmatrix}\hat{e}_r&\sin\theta\sin\varphi&\cos\theta\\ \hat{e}_\theta&\cos\theta\sin\varphi&-\sin\theta\\ \hat{e}_\varphi&\cos\varphi&0\end{vmatrix}
=\hat{e}_r\sin\theta\cos\phi+\hat{e}_\theta\cos\theta\cos\phi-\hat{e}_\varphi\sin\phi
$$
$$
D_2=\begin{vmatrix}\sin\theta\cos\varphi&\hat{e}_r&\cos\theta\\ \cos\theta\cos\varphi&\hat{e}_\theta&-\sin\theta\\ -\sin\varphi&\hat{e}_\varphi&0\end{vmatrix}
=\hat{e}_r\sin\theta\sin\phi+\hat{e}_\theta\cos\theta\sin\phi+\hat{e}_\varphi\cos\phi
$$
$$
D_3=\begin{vmatrix}\sin\theta\cos\varphi&\sin\theta\sin\varphi&\hat{e}_r\\ \cos\theta\cos\varphi&\cos\theta\sin\varphi&\hat{e}_\theta\\ -\sin\varphi&\cos\varphi&\hat{e}_\varphi\end{vmatrix}
=\hat{e}_r\cos\theta-\hat{e}_\theta\sin\theta
$$
Then
$$\hat{i}=\frac{D_1}{D},\qquad \hat{j}=\frac{D_2}{D},\qquad \hat{k}=\frac{D_3}{D} \quad (3)$$
From (1), (2) and (3), we get
$$
\begin{aligned}
\vec{A}&=r\cos\theta\left(\hat{e}_r\sin\theta\cos\phi+\hat{e}_\theta\cos\theta\cos\phi-\hat{e}_\varphi\sin\phi\right)\\
&\quad-2r\sin\theta\cos\phi\left(\hat{e}_r\sin\theta\sin\phi+\hat{e}_\theta\cos\theta\sin\phi+\hat{e}_\varphi\cos\phi\right)\\
&\quad+r\sin\theta\sin\phi\left(\hat{e}_r\cos\theta-\hat{e}_\theta\sin\theta\right)
\end{aligned}
$$
$$
\begin{aligned}
\Rightarrow\quad \vec{A}&=r\hat{e}_r\left[\sin\theta\cos\theta\left(\cos\phi+\sin\phi\right)-2\sin^2\theta\sin\phi\cos\phi\right]\\
&\quad+r\hat{e}_\theta\left(\cos^2\theta\cos\phi-2\sin\theta\cos\theta\sin\phi\cos\phi-\sin^2\theta\sin\phi\right)\\
&\quad-r\hat{e}_\varphi\left(\cos\theta\sin\phi+2\sin\theta\cos^2\phi\right)
\end{aligned}
$$
$$\Rightarrow\quad \vec{A}=A_r\hat{e}_r+A_\theta\hat{e}_\theta+A_\phi\hat{e}_\varphi,$$
which is the required expression.
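The component formulas obtained this way can be spot-checked numerically: evaluate $\vec{A}=z\hat{i}-2x\hat{j}+y\hat{k}$ at a point, project it onto the spherical unit vectors, and confirm that reassembling $A_r\hat{e}_r+A_\theta\hat{e}_\theta+A_\varphi\hat{e}_\varphi$ returns the original Cartesian vector. A minimal sketch, assuming numpy; the point $(r,\theta,\phi)=(2.0,\,0.7,\,1.2)$ is arbitrary:

```python
import numpy as np

def spherical_components(A_cart, th, ph):
    """Project a Cartesian vector onto the spherical unit vectors at angles (th, ph)."""
    e_r  = np.array([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph),  np.cos(th)])
    e_th = np.array([np.cos(th)*np.cos(ph), np.cos(th)*np.sin(ph), -np.sin(th)])
    e_ph = np.array([-np.sin(ph), np.cos(ph), 0.0])
    basis = (e_r, e_th, e_ph)
    return np.array([A_cart @ e for e in basis]), basis

# sample point and the vector A = z i - 2x j + y k evaluated there
r, th, ph = 2.0, 0.7, 1.2
x, y, z = r*np.sin(th)*np.cos(ph), r*np.sin(th)*np.sin(ph), r*np.cos(th)
A = np.array([z, -2*x, y])

comps, basis = spherical_components(A, th, ph)

# reassembling from the spherical components must give back the Cartesian vector
A_back = sum(c*e for c, e in zip(comps, basis))
assert np.allclose(A, A_back)
```

The same check, with `comps` compared against the closed-form $A_r, A_\theta, A_\phi$ above, catches dropped terms of the kind that are easy to lose in the hand expansion.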
Example:- Find the square of the element of arc length in cylindrical and spherical coordinates.

Solution :- In cylindrical coordinates, $x=r\cos\theta$, $y=r\sin\theta$, $z=z$. Now
$$(ds)^2=(dx)^2+(dy)^2+(dz)^2 \quad (1)$$
as
$$dx=\cos\theta\,dr-r\sin\theta\,d\theta,\qquad dy=\sin\theta\,dr+r\cos\theta\,d\theta,\qquad dz=dz \quad (2)$$
Putting (2) in (1), we get
$$
\begin{aligned}
(ds)^2&=\left(\cos\theta\,dr-r\sin\theta\,d\theta\right)^2+\left(\sin\theta\,dr+r\cos\theta\,d\theta\right)^2+(dz)^2\\
&=(dr)^2\left(\sin^2\theta+\cos^2\theta\right)+(d\theta)^2\left(r^2\sin^2\theta+r^2\cos^2\theta\right)-2r\sin\theta\cos\theta\,dr\,d\theta+2r\sin\theta\cos\theta\,dr\,d\theta+(dz)^2
\end{aligned}
$$
$$\Rightarrow\quad (ds)^2=(dr)^2+r^2(d\theta)^2+(dz)^2=h_1^2(dr)^2+h_2^2(d\theta)^2+h_3^2(dz)^2,$$
so that $h_1=1$, $h_2=r$, $h_3=1$.

In spherical coordinates $(r,\theta,\phi)$,
$$x=r\sin\theta\cos\phi,\qquad y=r\sin\theta\sin\phi,\qquad z=r\cos\theta$$
Using $(ds)^2=(dx)^2+(dy)^2+(dz)^2$ again, we get
$$(ds)^2=(dr)^2+r^2(d\theta)^2+r^2\sin^2\theta\,(d\phi)^2$$
Comparing it with
$$(ds)^2=h_1^2(du_1)^2+h_2^2(du_2)^2+h_3^2(du_3)^2,$$
we get
$$h_1=1,\quad h_2=r,\quad h_3=r\sin\theta,\qquad u_1=r,\quad u_2=\theta,\quad u_3=\phi$$

Example:- Find expressions for the elements of area in orthogonal curvilinear coordinates.

Solution :- The area elements are given by
$$
\begin{aligned}
dA_1&=\left|h_2\,du_2\,\hat{e}_2\times h_3\,du_3\,\hat{e}_3\right|=h_2h_3\left|\hat{e}_2\times\hat{e}_3\right|du_2\,du_3=h_2h_3\left|\hat{e}_1\right|du_2\,du_3=h_2h_3\,du_2\,du_3 &&\left[\because\ \left|\hat{e}_1\right|=1\right]\\
dA_2&=\left|h_3\,du_3\,\hat{e}_3\times h_1\,du_1\,\hat{e}_1\right|=h_3h_1\left|\hat{e}_3\times\hat{e}_1\right|du_3\,du_1=h_3h_1\,du_3\,du_1 &&\left[\because\ \left|\hat{e}_2\right|=1\right]\\
dA_3&=\left|h_1\,du_1\,\hat{e}_1\times h_2\,du_2\,\hat{e}_2\right|=h_1h_2\left|\hat{e}_1\times\hat{e}_2\right|du_1\,du_2=h_1h_2\,du_1\,du_2 &&\left[\because\ \left|\hat{e}_3\right|=1\right]
\end{aligned}
$$

Example:- Find the volume element $dV$ in the cylindrical and spherical coordinate systems.
Solution :- We know that the volume element $dV$ in orthogonal curvilinear coordinates is
$$dV=h_1h_2h_3\,du_1\,du_2\,du_3 \quad (1)$$
In cylindrical coordinates $(r,\theta,z)$,
$$h_1=1,\quad h_2=r,\quad h_3=1,\qquad u_1=r,\quad u_2=\theta,\quad u_3=z$$
So (1) $\Rightarrow\ dV=r\,dr\,d\theta\,dz$.

In spherical coordinates $(r,\theta,\phi)$,
$$h_1=1,\quad h_2=r,\quad h_3=r\sin\theta,\qquad u_1=r,\quad u_2=\theta,\quad u_3=\phi$$
So (1) $\Rightarrow\ dV=r^2\sin\theta\,dr\,d\theta\,d\phi$.

Example:- If $u_1,u_2,u_3$ are general coordinates, show that $\dfrac{\partial\vec{r}}{\partial u_1},\dfrac{\partial\vec{r}}{\partial u_2},\dfrac{\partial\vec{r}}{\partial u_3}$ and $\nabla u_1,\nabla u_2,\nabla u_3$ are reciprocal systems of vectors.
Solution :- We know that if $\phi=\phi(x,y,z)$,
$$d\phi=\frac{\partial\phi}{\partial x}dx+\frac{\partial\phi}{\partial y}dy+\frac{\partial\phi}{\partial z}dz=\left(\frac{\partial\phi}{\partial x}\hat{i}+\frac{\partial\phi}{\partial y}\hat{j}+\frac{\partial\phi}{\partial z}\hat{k}\right)\cdot\left(dx\,\hat{i}+dy\,\hat{j}+dz\,\hat{k}\right)$$
$$\Rightarrow\quad d\phi=\nabla\phi\cdot d\vec{r}$$
Replacing $\phi$ with $u_p$, we get
$$du_p=\nabla u_p\cdot d\vec{r},\qquad p=1,2,3 \quad (1)$$
Now $\vec{r}=\vec{r}(u_1,u_2,u_3)$
$$\Rightarrow\quad d\vec{r}=\frac{\partial\vec{r}}{\partial u_1}du_1+\frac{\partial\vec{r}}{\partial u_2}du_2+\frac{\partial\vec{r}}{\partial u_3}du_3 \quad (2)$$
Taking the dot product of (2) with $\nabla u_1$ and using (1), we get
$$du_1=\left(\nabla u_1\cdot\frac{\partial\vec{r}}{\partial u_1}\right)du_1+\left(\nabla u_1\cdot\frac{\partial\vec{r}}{\partial u_2}\right)du_2+\left(\nabla u_1\cdot\frac{\partial\vec{r}}{\partial u_3}\right)du_3$$
Comparing like coefficients on both sides,
$$\nabla u_1\cdot\frac{\partial\vec{r}}{\partial u_1}=1,\qquad \nabla u_1\cdot\frac{\partial\vec{r}}{\partial u_2}=0,\qquad \nabla u_1\cdot\frac{\partial\vec{r}}{\partial u_3}=0$$
Similarly, taking the dot product of (2) with $\nabla u_2$ and $\nabla u_3$, we get
$$\nabla u_2\cdot\frac{\partial\vec{r}}{\partial u_2}=1,\quad \nabla u_2\cdot\frac{\partial\vec{r}}{\partial u_1}=\nabla u_2\cdot\frac{\partial\vec{r}}{\partial u_3}=0,\qquad
\nabla u_3\cdot\frac{\partial\vec{r}}{\partial u_3}=1,\quad \nabla u_3\cdot\frac{\partial\vec{r}}{\partial u_1}=\nabla u_3\cdot\frac{\partial\vec{r}}{\partial u_2}=0$$
So we get the required result, i.e.
$$\nabla u_p\cdot\frac{\partial\vec{r}}{\partial u_q}=\begin{cases}1&\text{if }p=q\\ 0&\text{if }p\neq q\end{cases}\qquad p,q=1,2,3$$

Example:- Prove that $\hat{e}_1=h_2h_3\,\nabla u_2\times\nabla u_3$, with similar equations for $\hat{e}_2$ and $\hat{e}_3$, where $u_1,u_2,u_3$ are orthogonal coordinates.
Solution: - We know that $$ \nabla \mathrm{u}_{1}=\frac{\hat{\mathrm{e}}_{1}}{\mathrm{~h}_{1}}, \quad \nabla \mathrm{u}_{2}=\frac{\hat{\mathrm{e}}_{2}}{\mathrm{~h}_{2}}, \quad \nabla \mathrm{u}_{3}=\frac{\hat{\mathrm{e}}_{3}}{\mathrm{~h}_{3}} $$ Then $\nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}=\frac{\hat{\mathrm{e}}_{2} \times \hat{\mathrm{e}}_{3}}{\mathrm{~h}_{2} \mathrm{~h}_{3}}=\frac{\hat{\mathrm{e}}_{1}}{\mathrm{~h}_{2} \mathrm{~h}_{3}}$ $\Rightarrow \quad \hat{\mathrm{e}}_{1}=\mathrm{h}_{2} \mathrm{~h}_{3} \nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{2} \times \nabla \mathrm{u}_{3}$ Similarly $\quad \nabla \mathrm{u}_{3} \times \nabla \mathrm{u}_{1}=\frac{\hat{\mathrm{e}}_{3} \mathrm{ke}_{1}}{\mathrm{~h}_{3} \mathrm{~h}_{1}}=\frac{\hat{\mathrm{e}}_{2}}{\mathrm{~h}_{3} \mathrm{~h}_{1}}$ $\Rightarrow \quad \hat{\mathrm{e}}_{2}=\mathrm{h}_{3} \mathrm{~h}_{1} \nabla \mathrm{u}_{3} \times \nabla \mathrm{u}_{1}$ and $\quad \nabla \mathrm{u}_{1} \times \nabla \mathrm{u}_{2}=\frac{\hat{\mathrm{e}}_{1} \times \hat{\mathrm{e}}_{2}}{\mathrm{~h}_{1} \mathrm{~h}_{2}}=\frac{\hat{\mathrm{e}}_{3}}{\mathrm{~h}_{1} \mathrm{~h}_{2}}$ $\Rightarrow \quad \hat{\mathrm{e}}_{3}=\mathrm{h}_{1} \mathrm{~h}_{2} \nabla \mathrm{u}_{1} \times \nabla \mathrm{u}_{2}$ Hence proved. Example: - If $\mathrm{u}_{1}, \mathrm{u}_{2}, \mathrm{u}_{3}$ are orthogonal curvilinear coordinates, show that the jacobian of $\mathrm{x}, \mathrm{y}, \mathrm{z}$ w.r.t. 
$\mathrm{u}_{1}, \mathrm{u}_{2}, \mathrm{u}_{3}$ is $$ \begin{aligned} J\left(\frac{x, y, z}{u_{1} u_{2} u_{3}}\right)=\frac{\partial(x, y, z)}{\partial\left(u_{1}, u_{2}, u_{3}\right)} & =\left|\begin{array}{lll} \frac{\partial x}{\partial u_{1}} & \frac{\partial y}{\partial u_{1}} & \frac{\partial z}{\partial u_{1}} \\ \frac{\partial x}{\partial u_{2}} & \frac{\partial y}{\partial u_{2}} & \frac{\partial z}{\partial u_{2}} \\ \frac{\partial x}{\partial u_{3}} & \frac{\partial y}{\partial u_{3}} & \frac{\partial z}{\partial u_{3}} \end{array}\right| \\ & =h_{1} h_{2} h_{3} \end{aligned} $$ Proof: - we know that $\hat{A}=A_{1} \hat{i}+A_{2} \hat{j}+A_{3} \hat{k}$ $$ \stackrel{\mu}{\mathrm{B}}=\left(\mathrm{B}_{1}, \mathrm{~B}_{2}, \mathrm{~B}_{3}\right), \quad \stackrel{\mu}{\mathrm{C}}=\mathrm{C}\left(\mathrm{C}_{1}, \mathrm{C}_{2}, \mathrm{C}_{3}\right) $$ Then $$ \begin{aligned} & \stackrel{\rho}{ } \rho(\mathrm{B} \times \mathrm{C})=\stackrel{\rho}{\mathrm{A}} \cdot\left|\begin{array}{ccc} \hat{\mathrm{i}} & \hat{\mathrm{j}} & \hat{\mathrm{k}} \\ \mathrm{B}_{1} & \mathrm{~B}_{2} & \mathrm{~B}_{3} \\ \mathrm{C}_{1} & \mathrm{C}_{2} & \mathrm{C}_{3} \end{array}\right| \\ & =\left(A_{1} \hat{i}+A_{2} \hat{j}+A_{3} \hat{k}\right) \cdot\left[\hat{i}\left(B_{2} C_{3}-B_{3} C_{2}\right)+\hat{j}\left(B_{3} C_{1}-B_{1} C_{3}+\hat{k}\left(B_{1} C_{2}-B_{2} C_{1}\right)\right]\right. 
\end{aligned}
$$
$$
\Rightarrow \quad \vec{A} \cdot(\vec{B} \times \vec{C})=\left|\begin{array}{lll}
A_{1} & A_{2} & A_{3} \\
B_{1} & B_{2} & B_{3} \\
C_{1} & C_{2} & C_{3}
\end{array}\right|
$$
Therefore
$$
\left|\begin{array}{lll}
\frac{\partial x}{\partial u_{1}} & \frac{\partial y}{\partial u_{1}} & \frac{\partial z}{\partial u_{1}} \\
\frac{\partial x}{\partial u_{2}} & \frac{\partial y}{\partial u_{2}} & \frac{\partial z}{\partial u_{2}} \\
\frac{\partial x}{\partial u_{3}} & \frac{\partial y}{\partial u_{3}} & \frac{\partial z}{\partial u_{3}}
\end{array}\right|=\left(\frac{\partial x}{\partial u_{1}} \hat{i}+\frac{\partial y}{\partial u_{1}} \hat{j}+\frac{\partial z}{\partial u_{1}} \hat{k}\right) \cdot\left[\left(\frac{\partial x}{\partial u_{2}} \hat{i}+\frac{\partial y}{\partial u_{2}} \hat{j}+\frac{\partial z}{\partial u_{2}} \hat{k}\right) \times\left(\frac{\partial x}{\partial u_{3}} \hat{i}+\frac{\partial y}{\partial u_{3}} \hat{j}+\frac{\partial z}{\partial u_{3}} \hat{k}\right)\right]
$$
$$
\begin{aligned}
&=\frac{\partial \vec{r}}{\partial u_{1}} \cdot\left(\frac{\partial \vec{r}}{\partial u_{2}} \times \frac{\partial \vec{r}}{\partial u_{3}}\right) \\
&=h_{1} \hat{e}_{1} \cdot\left(h_{2} \hat{e}_{2} \times h_{3} \hat{e}_{3}\right)=h_{1} h_{2} h_{3}\, \hat{e}_{1} \cdot \hat{e}_{1} \\
&=h_{1} h_{2} h_{3} \qquad\left[\because\ \hat{e}_{2} \times \hat{e}_{3}=\hat{e}_{1},\ \hat{e}_{1} \cdot \hat{e}_{1}=1\right]
\end{aligned}
$$
Hence proved.
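As a quick computational check (not part of the original notes), the identity $J = h_{1} h_{2} h_{3}$ can be verified with sympy for the familiar spherical coordinates $x=r \sin \theta \cos \phi$, $y=r \sin \theta \sin \phi$, $z=r \cos \theta$, whose scale factors are $h_{1}=1$, $h_{2}=r$, $h_{3}=r \sin \theta$:

```python
# Illustrative sketch: for spherical coordinates, the Jacobian determinant
# of (x, y, z) w.r.t. (r, theta, phi) equals h1*h2*h3 = r^2 sin(theta).
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
x = r * sp.sin(theta) * sp.cos(phi)
y = r * sp.sin(theta) * sp.sin(phi)
z = r * sp.cos(theta)

# J = determinant of the matrix of partial derivatives
J = sp.Matrix([x, y, z]).jacobian([r, theta, phi]).det()

# scale factors h_p = |d(r_vec)/d(u_p)|
def scale(u):
    v = sp.Matrix([sp.diff(x, u), sp.diff(y, u), sp.diff(z, u)])
    return sp.sqrt((v.T * v)[0])

h1, h2, h3 = scale(r), scale(theta), scale(phi)

# compare J with h1*h2*h3 at a sample point
pt = {r: 2, theta: sp.pi / 3, phi: sp.pi / 4}
print(sp.simplify(J))
print(float(J.subs(pt)), float((h1 * h2 * h3).subs(pt)))
```

The same check works for any orthogonal system (cylindrical, parabolic, etc.) by swapping in the corresponding transformation equations.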
## Contravariant and Covariant Components of $\vec{A}$

$\vec{A}$ in terms of the unit base vectors $\hat{e}_{1}, \hat{e}_{2}, \hat{e}_{3}$ or $\hat{E}_{1}, \hat{E}_{2}, \hat{E}_{3}$ can be written as
$$
\vec{A}=A_{1} \hat{e}_{1}+A_{2} \hat{e}_{2}+A_{3} \hat{e}_{3}=a_{1} \hat{E}_{1}+a_{2} \hat{E}_{2}+a_{3} \hat{E}_{3}
$$
We can also represent $\vec{A}$ in terms of the base vectors $\frac{\partial \vec{r}}{\partial u_{1}}, \frac{\partial \vec{r}}{\partial u_{2}}, \frac{\partial \vec{r}}{\partial u_{3}}$ or $\nabla u_{1}, \nabla u_{2}, \nabla u_{3}$, which are called unitary base vectors and are not, in general, unit vectors; the sets $\frac{\partial \vec{r}}{\partial u_{p}}$ and $\nabla u_{p}$ constitute reciprocal systems of vectors. In general,
$$
\begin{aligned}
& \vec{A}=C_{1} \frac{\partial \vec{r}}{\partial u_{1}}+C_{2} \frac{\partial \vec{r}}{\partial u_{2}}+C_{3} \frac{\partial \vec{r}}{\partial u_{3}} \\
& \Rightarrow \quad \vec{A}=C_{1} \vec{\alpha}_{1}+C_{2} \vec{\alpha}_{2}+C_{3} \vec{\alpha}_{3} \quad \text { where } \vec{\alpha}_{p}=\frac{\partial \vec{r}}{\partial u_{p}},\ p=1,2,3 \\
& \text {and} \quad \vec{A}=c_{1} \nabla u_{1}+c_{2} \nabla u_{2}+c_{3} \nabla u_{3} \\
& \qquad\quad =c_{1} \vec{\beta}_{1}+c_{2} \vec{\beta}_{2}+c_{3} \vec{\beta}_{3} \quad \text { where } \vec{\beta}_{p}=\nabla u_{p},\ p=1,2,3
\end{aligned}
$$
where $C_{1}, C_{2}, C_{3}$ are called the contravariant components
of $\vec{A}$ and $c_{1}, c_{2}, c_{3}$ are called the covariant components of $\vec{A}$.

Example: - Let $\vec{A}$ be a given vector defined w.r.t. two general curvilinear coordinate systems $\left(u_{1}, u_{2}, u_{3}\right)$ and $\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right)$. Find the relation between the contravariant components of the vector in the two coordinate systems. (Find the relation between $C_{p}$ and $\bar{C}_{p}$.)

Solution: - Suppose the transformation equations from the rectangular $(x, y, z)$ system to the two general curvilinear coordinate systems $\left(u_{1}, u_{2}, u_{3}\right)$ and $\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right)$ are given by
$$
x=x_{1}\left(u_{1}, u_{2}, u_{3}\right), \quad y=y_{1}\left(u_{1}, u_{2}, u_{3}\right), \quad z=z_{1}\left(u_{1}, u_{2}, u_{3}\right)
$$
$$
x=x_{2}\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right), \quad y=y_{2}\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right), \quad z=z_{2}\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right)
$$
Then a transformation directly from the $\left(u_{1}, u_{2}, u_{3}\right)$ system to the $\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right)$ system is defined by
$$
\begin{array}{lll}
u_{1}=u_{1}\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right), &
u_{2}=u_{2}\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right), & u_{3}=u_{3}\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right) \\
\bar{u}_{1}=\bar{u}_{1}\left(u_{1}, u_{2}, u_{3}\right), & \bar{u}_{2}=\bar{u}_{2}\left(u_{1}, u_{2}, u_{3}\right), & \bar{u}_{3}=\bar{u}_{3}\left(u_{1}, u_{2}, u_{3}\right)
\end{array}
$$
Let $\vec{r}=x \hat{i}+y \hat{j}+z \hat{k}$. Then, using (1),
$$
\vec{r}=\vec{r}\left(u_{1}, u_{2}, u_{3}\right), \quad \vec{r}=\vec{r}\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right)
$$
$$
\begin{aligned}
\mathrm{d} \vec{r} &=\frac{\partial \vec{r}}{\partial u_{1}} \mathrm{~d} u_{1}+\frac{\partial \vec{r}}{\partial u_{2}} \mathrm{~d} u_{2}+\frac{\partial \vec{r}}{\partial u_{3}} \mathrm{~d} u_{3} \\
&=\vec{\alpha}_{1} \mathrm{~d} u_{1}+\vec{\alpha}_{2} \mathrm{~d} u_{2}+\vec{\alpha}_{3} \mathrm{~d} u_{3}
\end{aligned}
$$
and
$$
\begin{aligned}
\mathrm{d} \vec{r} &=\frac{\partial \vec{r}}{\partial \bar{u}_{1}} \mathrm{~d} \bar{u}_{1}+\frac{\partial \vec{r}}{\partial \bar{u}_{2}} \mathrm{~d} \bar{u}_{2}+\frac{\partial \vec{r}}{\partial \bar{u}_{3}} \mathrm{~d} \bar{u}_{3} \\
&=\bar{\vec{\alpha}}_{1} \mathrm{~d} \bar{u}_{1}+\bar{\vec{\alpha}}_{2} \mathrm{~d} \bar{u}_{2}+\bar{\vec{\alpha}}_{3} \mathrm{~d} \bar{u}_{3}
\end{aligned}
$$
where $\vec{\alpha}_{p}=\dfrac{\partial \vec{r}}{\partial u_{p}}, \quad \bar{\vec{\alpha}}_{p}=\dfrac{\partial \vec{r}}{\partial \bar{u}_{p}},\ p=1,2,3$. From (3) and (4), we get
$$
\vec{\alpha}_{1} \mathrm{~d} u_{1}+\vec{\alpha}_{2} \mathrm{~d} u_{2}+\vec{\alpha}_{3} \mathrm{~d} u_{3}=\bar{\vec{\alpha}}_{1} \mathrm{~d} \bar{u}_{1}+\bar{\vec{\alpha}}_{2} \mathrm{~d} \bar{u}_{2}+\bar{\vec{\alpha}}_{3} \mathrm{~d} \bar{u}_{3}
$$
From 2(i), since $u_{p}=u_{p}\left(\bar{u}_{1},$
$\bar{u}_{2}, \bar{u}_{3}\right)$,
$$
\begin{aligned}
\mathrm{d} u_{1} &=\frac{\partial u_{1}}{\partial \bar{u}_{1}} \mathrm{~d} \bar{u}_{1}+\frac{\partial u_{1}}{\partial \bar{u}_{2}} \mathrm{~d} \bar{u}_{2}+\frac{\partial u_{1}}{\partial \bar{u}_{3}} \mathrm{~d} \bar{u}_{3} \\
\mathrm{d} u_{2} &=\frac{\partial u_{2}}{\partial \bar{u}_{1}} \mathrm{~d} \bar{u}_{1}+\frac{\partial u_{2}}{\partial \bar{u}_{2}} \mathrm{~d} \bar{u}_{2}+\frac{\partial u_{2}}{\partial \bar{u}_{3}} \mathrm{~d} \bar{u}_{3} \\
\mathrm{d} u_{3} &=\frac{\partial u_{3}}{\partial \bar{u}_{1}} \mathrm{~d} \bar{u}_{1}+\frac{\partial u_{3}}{\partial \bar{u}_{2}} \mathrm{~d} \bar{u}_{2}+\frac{\partial u_{3}}{\partial \bar{u}_{3}} \mathrm{~d} \bar{u}_{3}
\end{aligned}
$$
Using these in the L.H.S. of (5) and equating coefficients of $\mathrm{d} \bar{u}_{1}, \mathrm{~d} \bar{u}_{2}, \mathrm{~d} \bar{u}_{3}$, we get
$$
\vec{\alpha}_{1} \mathrm{~d} u_{1}+\vec{\alpha}_{2} \mathrm{~d} u_{2}+\vec{\alpha}_{3} \mathrm{~d} u_{3}=\left(\frac{\partial u_{1}}{\partial \bar{u}_{1}} \mathrm{~d} \bar{u}_{1}+\frac{\partial u_{1}}{\partial \bar{u}_{2}} \mathrm{~d} \bar{u}_{2}+\frac{\partial u_{1}}{\partial \bar{u}_{3}} \mathrm{~d} \bar{u}_{3}\right) \vec{\alpha}_{1}
$$
$$
\begin{aligned}
& +\left(\frac{\partial u_{2}}{\partial \bar{u}_{1}} \mathrm{~d} \bar{u}_{1}+\frac{\partial u_{2}}{\partial \bar{u}_{2}} \mathrm{~d}
\bar{u}_{2}+\frac{\partial u_{2}}{\partial \bar{u}_{3}} \mathrm{~d} \bar{u}_{3}\right) \vec{\alpha}_{2} \\
& +\left(\frac{\partial u_{3}}{\partial \bar{u}_{1}} \mathrm{~d} \bar{u}_{1}+\frac{\partial u_{3}}{\partial \bar{u}_{2}} \mathrm{~d} \bar{u}_{2}+\frac{\partial u_{3}}{\partial \bar{u}_{3}} \mathrm{~d} \bar{u}_{3}\right) \vec{\alpha}_{3} \\
& =\bar{\vec{\alpha}}_{1} \mathrm{~d} \bar{u}_{1}+\bar{\vec{\alpha}}_{2} \mathrm{~d} \bar{u}_{2}+\bar{\vec{\alpha}}_{3} \mathrm{~d} \bar{u}_{3}
\end{aligned}
$$
We get
$$
\begin{aligned}
\bar{\vec{\alpha}}_{1} &=\vec{\alpha}_{1} \frac{\partial u_{1}}{\partial \bar{u}_{1}}+\vec{\alpha}_{2} \frac{\partial u_{2}}{\partial \bar{u}_{1}}+\vec{\alpha}_{3} \frac{\partial u_{3}}{\partial \bar{u}_{1}} \\
\bar{\vec{\alpha}}_{2} &=\vec{\alpha}_{1} \frac{\partial u_{1}}{\partial \bar{u}_{2}}+\vec{\alpha}_{2} \frac{\partial u_{2}}{\partial \bar{u}_{2}}+\vec{\alpha}_{3} \frac{\partial u_{3}}{\partial \bar{u}_{2}} \\
\bar{\vec{\alpha}}_{3} &=\vec{\alpha}_{1} \frac{\partial u_{1}}{\partial \bar{u}_{3}}+\vec{\alpha}_{2} \frac{\partial u_{2}}{\partial \bar{u}_{3}}+\vec{\alpha}_{3} \frac{\partial u_{3}}{\partial \bar{u}_{3}}
\end{aligned}
$$
Also
$$
\begin{aligned}
\vec{A} &=\vec{\alpha}_{1} C_{1}+\vec{\alpha}_{2} C_{2}+\vec{\alpha}_{3} C_{3} \\
&=\bar{\vec{\alpha}}_{1} \bar{C}_{1}+\bar{\vec{\alpha}}_{2} \bar{C}_{2}+\bar{\vec{\alpha}}_{3} \bar{C}_{3}
\end{aligned}
$$
where $C_{1}, C_{2}, C_{3}$ and $\bar{C}_{1}, \bar{C}_{2}, \bar{C}_{3}$ are the contravariant components of $\vec{A}$ in the two systems $\left(u_{1}, u_{2},\right.$ $\left.u_{3}\right)$
and $\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right)$. Substituting (6) in (7),
$$
\begin{aligned}
C_{1} \vec{\alpha}_{1}+C_{2} \vec{\alpha}_{2}+C_{3} \vec{\alpha}_{3} &=\bar{C}_{1}\left(\vec{\alpha}_{1} \frac{\partial u_{1}}{\partial \bar{u}_{1}}+\vec{\alpha}_{2} \frac{\partial u_{2}}{\partial \bar{u}_{1}}+\vec{\alpha}_{3} \frac{\partial u_{3}}{\partial \bar{u}_{1}}\right) \\
&\quad+\bar{C}_{2}\left(\vec{\alpha}_{1} \frac{\partial u_{1}}{\partial \bar{u}_{2}}+\vec{\alpha}_{2} \frac{\partial u_{2}}{\partial \bar{u}_{2}}+\vec{\alpha}_{3} \frac{\partial u_{3}}{\partial \bar{u}_{2}}\right) \\
&\quad+\bar{C}_{3}\left(\vec{\alpha}_{1} \frac{\partial u_{1}}{\partial \bar{u}_{3}}+\vec{\alpha}_{2} \frac{\partial u_{2}}{\partial \bar{u}_{3}}+\vec{\alpha}_{3} \frac{\partial u_{3}}{\partial \bar{u}_{3}}\right) \\
&=\vec{\alpha}_{1}\left(\bar{C}_{1} \frac{\partial u_{1}}{\partial \bar{u}_{1}}+\bar{C}_{2} \frac{\partial u_{1}}{\partial \bar{u}_{2}}+\bar{C}_{3} \frac{\partial u_{1}}{\partial \bar{u}_{3}}\right) \\
&\quad+\vec{\alpha}_{2}\left(\bar{C}_{1} \frac{\partial u_{2}}{\partial \bar{u}_{1}}+\bar{C}_{2} \frac{\partial u_{2}}{\partial \bar{u}_{2}}+\bar{C}_{3} \frac{\partial u_{2}}{\partial \bar{u}_{3}}\right) \\
&\quad+\vec{\alpha}_{3}\left(\bar{C}_{1} \frac{\partial u_{3}}{\partial \bar{u}_{1}}+\bar{C}_{2} \frac{\partial u_{3}}{\partial \bar{u}_{2}}+\bar{C}_{3} \frac{\partial u_{3}}{\partial \bar{u}_{3}}\right)
\end{aligned}
$$
Equating coefficients of $\vec{\alpha}_{1}, \vec{\alpha}_{2},$
$\vec{\alpha}_{3}$, we get
$$
\begin{aligned}
C_{1} &=\bar{C}_{1} \frac{\partial u_{1}}{\partial \bar{u}_{1}}+\bar{C}_{2} \frac{\partial u_{1}}{\partial \bar{u}_{2}}+\bar{C}_{3} \frac{\partial u_{1}}{\partial \bar{u}_{3}} \\
C_{2} &=\bar{C}_{1} \frac{\partial u_{2}}{\partial \bar{u}_{1}}+\bar{C}_{2} \frac{\partial u_{2}}{\partial \bar{u}_{2}}+\bar{C}_{3} \frac{\partial u_{2}}{\partial \bar{u}_{3}} \\
C_{3} &=\bar{C}_{1} \frac{\partial u_{3}}{\partial \bar{u}_{1}}+\bar{C}_{2} \frac{\partial u_{3}}{\partial \bar{u}_{2}}+\bar{C}_{3} \frac{\partial u_{3}}{\partial \bar{u}_{3}} \\
\text { or } \quad C_{p} &=\bar{C}_{1} \frac{\partial u_{p}}{\partial \bar{u}_{1}}+\bar{C}_{2} \frac{\partial u_{p}}{\partial \bar{u}_{2}}+\bar{C}_{3} \frac{\partial u_{p}}{\partial \bar{u}_{3}}, \quad p=1,2,3 \\
\text { or } \quad C_{p} &=\sum_{q=1}^{3} \bar{C}_{q} \frac{\partial u_{p}}{\partial \bar{u}_{q}}, \quad p=1,2,3
\end{aligned}
$$
Similarly, by interchanging the coordinates, we can get
$$
\bar{C}_{p}=\sum_{q=1}^{3} C_{q} \frac{\partial \bar{u}_{p}}{\partial u_{q}}, \quad p=1,2,3
$$
Equations (8), (9), (10), (11) give the relation between the contravariant components of a vector in the two coordinate systems.

Example: - Let $\vec{A}$ be a given vector defined w.r.t.
two curvilinear coordinate systems $\left(u_{1}, u_{2}, u_{3}\right)$ and $\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right)$. Find the relation between the covariant components of the vector in the two coordinate systems.

Solution: - Let the covariant components of $\vec{A}$ in the systems $\left(u_{1}, u_{2}, u_{3}\right)$ and $\left(\bar{u}_{1}, \bar{u}_{2}, \bar{u}_{3}\right)$ be $C_{1}, C_{2}, C_{3}$ and $\bar{C}_{1}, \bar{C}_{2}, \bar{C}_{3}$ respectively.
$$
\therefore \quad \vec{A}=C_{1} \nabla u_{1}+C_{2} \nabla u_{2}+C_{3} \nabla u_{3}=\bar{C}_{1} \nabla \bar{u}_{1}+\bar{C}_{2} \nabla \bar{u}_{2}+\bar{C}_{3} \nabla \bar{u}_{3}
$$
Since $\bar{u}_{p}=\bar{u}_{p}\left(u_{1}, u_{2}, u_{3}\right),\ p=1,2,3$,
$$
\begin{aligned}
& \frac{\partial \bar{u}_{p}}{\partial x}=\frac{\partial \bar{u}_{p}}{\partial u_{1}} \frac{\partial u_{1}}{\partial x}+\frac{\partial \bar{u}_{p}}{\partial u_{2}} \frac{\partial u_{2}}{\partial x}+\frac{\partial \bar{u}_{p}}{\partial u_{3}} \frac{\partial u_{3}}{\partial x} \\
& \frac{\partial \bar{u}_{p}}{\partial y}=\frac{\partial \bar{u}_{p}}{\partial u_{1}} \frac{\partial u_{1}}{\partial y}+\frac{\partial \bar{u}_{p}}{\partial u_{2}} \frac{\partial u_{2}}{\partial y}+\frac{\partial \bar{u}_{p}}{\partial u_{3}} \frac{\partial
u_{3}}{\partial y} \\
& \frac{\partial \bar{u}_{p}}{\partial z}=\frac{\partial \bar{u}_{p}}{\partial u_{1}} \frac{\partial u_{1}}{\partial z}+\frac{\partial \bar{u}_{p}}{\partial u_{2}} \frac{\partial u_{2}}{\partial z}+\frac{\partial \bar{u}_{p}}{\partial u_{3}} \frac{\partial u_{3}}{\partial z}
\end{aligned}
$$
Also
$$
\begin{aligned}
C_{1} \nabla u_{1}+C_{2} \nabla u_{2}+C_{3} \nabla u_{3} &=C_{1}\left(\frac{\partial u_{1}}{\partial x} \hat{i}+\frac{\partial u_{1}}{\partial y} \hat{j}+\frac{\partial u_{1}}{\partial z} \hat{k}\right) \\
&\quad+C_{2}\left(\frac{\partial u_{2}}{\partial x} \hat{i}+\frac{\partial u_{2}}{\partial y} \hat{j}+\frac{\partial u_{2}}{\partial z} \hat{k}\right) \\
&\quad+C_{3}\left(\frac{\partial u_{3}}{\partial x} \hat{i}+\frac{\partial u_{3}}{\partial y} \hat{j}+\frac{\partial u_{3}}{\partial z} \hat{k}\right) \\
&=\left(C_{1} \frac{\partial u_{1}}{\partial x}+C_{2} \frac{\partial u_{2}}{\partial x}+C_{3} \frac{\partial u_{3}}{\partial x}\right) \hat{i} \\
&\quad+\left(C_{1} \frac{\partial u_{1}}{\partial y}+C_{2} \frac{\partial u_{2}}{\partial y}+C_{3} \frac{\partial u_{3}}{\partial y}\right) \hat{j} \\
&\quad+\left(C_{1} \frac{\partial u_{1}}{\partial z}+C_{2} \frac{\partial u_{2}}{\partial z}+C_{3} \frac{\partial u_{3}}{\partial z}\right) \hat{k}
\end{aligned}
$$
and
$$
\bar{C}_{1} \nabla \bar{u}_{1}+\bar{C}_{2} \nabla \bar{u}_{2}+\bar{C}_{3} \nabla \bar{u}_{3}=\left(\bar{C}_{1} \frac{\partial \bar{u}_{1}}{\partial x}+\bar{C}_{2} \frac{\partial \bar{u}_{2}}{\partial x}+\bar{C}_{3} \frac{\partial \bar{u}_{3}}{\partial x}\right) \hat{i}
$$
$$
\begin{aligned} & +\left(\overline{\mathrm{C}}_{1} \frac{\partial \overline{\mathrm{u}}_{1}}{\partial \mathrm{y}}+\overline{\mathrm{C}}_{2} \frac{\partial \overline{\mathrm{u}}_{2}}{\partial \mathrm{y}}+\overline{\mathrm{C}}_{3} \frac{\partial \overline{\mathrm{u}}_{3}}{\partial \mathrm{y}}\right) \hat{\mathrm{j}} \\ & +\left(\overline{\mathrm{C}}_{1} \frac{\partial \overline{\mathrm{u}}_{1}}{\partial \mathrm{z}}+\overline{\mathrm{C}}_{2} \frac{\partial \overline{\mathrm{u}}_{2}}{\partial \mathrm{z}}+\overline{\mathrm{C}}_{3} \frac{\partial \overline{\mathrm{u}}_{3}}{\partial \mathrm{z}}\right) \hat{\mathrm{k}} \end{aligned} $$ Equating coefficients of $\hat{\mathrm{i}}, \hat{\mathrm{j}}, \hat{\mathrm{k}}$ in (3) \\& (4), We get $$ \begin{aligned} & \mathrm{C}_{1} \frac{\partial \mathrm{u}_{1}}{\partial \mathrm{x}}+\mathrm{C}_{2} \frac{\partial \mathrm{u}_{2}}{\partial \mathrm{x}}+\mathrm{C}_{3} \frac{\partial \mathrm{u}_{3}}{\partial \mathrm{x}}=\overline{\mathrm{C}}_{1} \frac{\partial \overline{\mathrm{u}}_{1}}{\partial \mathrm{x}}+\overline{\mathrm{C}}_{2} \frac{\partial \overline{\mathrm{u}}_{2}}{\partial \mathrm{x}}+\overline{\mathrm{C}}_{3} \frac{\partial \overline{\mathrm{u}}_{3}}{\partial \mathrm{x}} \\ & \mathrm{C}_{1} \frac{\partial \mathrm{u}_{1}}{\partial \mathrm{y}}+\mathrm{C}_{2} \frac{\partial \mathrm{u}_{2}}{\partial \mathrm{y}}+\mathrm{C}_{3} \frac{\partial \mathrm{u}_{3}}{\partial \mathrm{y}}=\overline{\mathrm{C}}_{1} \frac{\partial \overline{\mathrm{u}}_{1}}{\partial \mathrm{y}}+\overline{\mathrm{C}}_{2} \frac{\partial \overline{\mathrm{u}}_{2}}{\partial \mathrm{y}}+\overline{\mathrm{C}}_{3} \frac{\partial \overline{\mathrm{u}}_{3}}{\partial \mathrm{y}} \\ & \mathrm{C}_{1} \frac{\partial \mathrm{u}_{1}}{\partial \mathrm{z}}+\mathrm{C}_{2} \frac{\partial \mathrm{u}_{2}}{\partial \mathrm{z}}+\mathrm{C}_{3} \frac{\partial \mathrm{u}_{3}}{\partial \mathrm{z}}=\overline{\mathrm{C}}_{1} \frac{\partial \overline{\mathrm{u}}_{1}}{\partial 
z}+\bar{C}_{2} \frac{\partial \bar{u}_{2}}{\partial z}+\bar{C}_{3} \frac{\partial \bar{u}_{3}}{\partial z}
\end{aligned}
$$
Substituting equation (2) with $p=1,2,3$ on the R.H.S. of the corresponding equations of (5), and equating coefficients of $\frac{\partial u_{1}}{\partial x}, \frac{\partial u_{2}}{\partial x}, \frac{\partial u_{3}}{\partial x}$ or $\frac{\partial u_{1}}{\partial y}, \frac{\partial u_{2}}{\partial y}, \frac{\partial u_{3}}{\partial y}$ or $\frac{\partial u_{1}}{\partial z}, \frac{\partial u_{2}}{\partial z}, \frac{\partial u_{3}}{\partial z}$ on each side, we can get the result. Taking the first equation of (5),
$$
\begin{aligned}
C_{1} \frac{\partial u_{1}}{\partial x}+C_{2} \frac{\partial u_{2}}{\partial x}+C_{3} \frac{\partial u_{3}}{\partial x} &=\bar{C}_{1}\left[\frac{\partial \bar{u}_{1}}{\partial u_{1}} \frac{\partial u_{1}}{\partial x}+\frac{\partial \bar{u}_{1}}{\partial u_{2}} \frac{\partial u_{2}}{\partial x}+\frac{\partial \bar{u}_{1}}{\partial u_{3}} \frac{\partial u_{3}}{\partial x}\right] \\
&\quad+\bar{C}_{2}\left[\frac{\partial \bar{u}_{2}}{\partial u_{1}} \frac{\partial u_{1}}{\partial x}+\frac{\partial \bar{u}_{2}}{\partial u_{2}} \frac{\partial u_{2}}{\partial x}+\frac{\partial \bar{u}_{2}}{\partial u_{3}} \frac{\partial u_{3}}{\partial x}\right] \\
&\quad+\bar{C}_{3}\left[\frac{\partial \bar{u}_{3}}{\partial u_{1}}
\frac{\partial u_{1}}{\partial x}+\frac{\partial \bar{u}_{3}}{\partial u_{2}} \frac{\partial u_{2}}{\partial x}+\frac{\partial \bar{u}_{3}}{\partial u_{3}} \frac{\partial u_{3}}{\partial x}\right]
\end{aligned}
$$
Here, equating coefficients of $\frac{\partial u_{1}}{\partial x}, \frac{\partial u_{2}}{\partial x}, \frac{\partial u_{3}}{\partial x}$, we get
$$
\begin{aligned}
& C_{1}=\bar{C}_{1} \frac{\partial \bar{u}_{1}}{\partial u_{1}}+\bar{C}_{2} \frac{\partial \bar{u}_{2}}{\partial u_{1}}+\bar{C}_{3} \frac{\partial \bar{u}_{3}}{\partial u_{1}} \\
& C_{2}=\bar{C}_{1} \frac{\partial \bar{u}_{1}}{\partial u_{2}}+\bar{C}_{2} \frac{\partial \bar{u}_{2}}{\partial u_{2}}+\bar{C}_{3} \frac{\partial \bar{u}_{3}}{\partial u_{2}} \\
& C_{3}=\bar{C}_{1} \frac{\partial \bar{u}_{1}}{\partial u_{3}}+\bar{C}_{2} \frac{\partial \bar{u}_{2}}{\partial u_{3}}+\bar{C}_{3} \frac{\partial \bar{u}_{3}}{\partial u_{3}}
\end{aligned}
$$
or $\quad C_{p}=\bar{C}_{1} \frac{\partial \bar{u}_{1}}{\partial u_{p}}+\bar{C}_{2} \frac{\partial \bar{u}_{2}}{\partial u_{p}}+\bar{C}_{3} \frac{\partial \bar{u}_{3}}{\partial u_{p}},\ p=1,2,3$
or $\quad C_{p}=\sum_{q=1}^{3} \bar{C}_{q} \frac{\partial \bar{u}_{q}}{\partial u_{p}},\ p=1,2,3$
Similarly, we can show that
$$
\bar{C}_{p}=\sum_{q=1}^{3} C_{q} \frac{\partial u_{q}}{\partial \bar{u}_{p}},\ p=1,2,3
$$
Equations (6) and (7) are the required relations.

Example: - Show that the square of the element of arc length in general curvilinear coordinates can be written as
$$
\mathrm{d} s^{2}=\sum_{p=1}^{3} \sum_{q=1}^{3} g_{p q}\, \mathrm{d} u_{p}\, \mathrm{d} u_{q}
$$
Solution: -
$$
(\mathrm{d} s)^{2}=\mathrm{d} \vec{r} \cdot \mathrm{d} \vec{r}=\left(\frac{\partial \vec{r}}{\partial u_{1}} \mathrm{~d} u_{1}+\frac{\partial \vec{r}}{\partial u_{2}} \mathrm{~d} u_{2}+\frac{\partial \vec{r}}{\partial u_{3}} \mathrm{~d} u_{3}\right) \cdot\left(\frac{\partial \vec{r}}{\partial u_{1}} \mathrm{~d} u_{1}+\frac{\partial \vec{r}}{\partial u_{2}} \mathrm{~d} u_{2}+\frac{\partial \vec{r}}{\partial u_{3}} \mathrm{~d} u_{3}\right)
$$
$$
(\mathrm{d} s)^{2}=\left(\vec{\alpha}_{1} \mathrm{~d} u_{1}+\vec{\alpha}_{2} \mathrm{~d} u_{2}+\vec{\alpha}_{3} \mathrm{~d} u_{3}\right) \cdot\left(\vec{\alpha}_{1} \mathrm{~d} u_{1}+\vec{\alpha}_{2} \mathrm{~d} u_{2}+\vec{\alpha}_{3} \mathrm{~d} u_{3}\right)
$$
where $\vec{\alpha}_{p}=\dfrac{\partial \vec{r}}{\partial u_{p}}$. Hence
$$
\begin{aligned}
(\mathrm{d} s)^{2} &=\vec{\alpha}_{1} \cdot \vec{\alpha}_{1}\, \mathrm{d} u_{1}^{2}+\vec{\alpha}_{1} \cdot \vec{\alpha}_{2}\, \mathrm{d} u_{1} \mathrm{~d} u_{2}+\vec{\alpha}_{1} \cdot \vec{\alpha}_{3}\, \mathrm{d} u_{1} \mathrm{~d} u_{3} \\
&\quad+\vec{\alpha}_{2} \cdot \vec{\alpha}_{1}\, \mathrm{d} u_{2} \mathrm{~d} u_{1}+\vec{\alpha}_{2} \cdot \vec{\alpha}_{2}\, \mathrm{d} u_{2}^{2}+\vec{\alpha}_{2} \cdot \vec{\alpha}_{3}\, \mathrm{d} u_{2} \mathrm{~d} u_{3} \\
&\quad+\vec{\alpha}_{3} \cdot \vec{\alpha}_{1}\, \mathrm{d} u_{3} \mathrm{~d} u_{1}+\vec{\alpha}_{3} \cdot \vec{\alpha}_{2}\, \mathrm{d} u_{3} \mathrm{~d} u_{2}+\vec{\alpha}_{3} \cdot \vec{\alpha}_{3}\, \mathrm{d} u_{3}^{2}
\end{aligned}
$$
$$
(\mathrm{d} s)^{2}=\sum_{p=1}^{3} \sum_{q=1}^{3} g_{p q}\, \mathrm{d} u_{p}\, \mathrm{d} u_{q}, \quad g_{p q}=\vec{\alpha}_{p} \cdot \vec{\alpha}_{q}
$$
This is called the fundamental quadratic form or metric form. The quantities $g_{p q}$ are called metric coefficients and these are symmetric, i.e.
$g_{p q}=g_{q p}$. If $g_{p q}=0$ for $p \neq q$, the coordinate system is orthogonal, and in this case $g_{11}=h_{1}^{2},\ g_{22}=h_{2}^{2},\ g_{33}=h_{3}^{2}$. Here also
$$
\vec{\alpha}_{1}=\frac{\partial \vec{r}}{\partial u_{1}}=h_{1} \hat{e}_{1}, \quad \vec{\alpha}_{2}=\frac{\partial \vec{r}}{\partial u_{2}}=h_{2} \hat{e}_{2}, \quad \vec{\alpha}_{3}=\frac{\partial \vec{r}}{\partial u_{3}}=h_{3} \hat{e}_{3}
$$
Example: - (a) Prove that in general coordinates $\left(u_{1}, u_{2}, u_{3}\right)$
$$
g=\left|\begin{array}{lll}
g_{11} & g_{12} & g_{13} \\
g_{21} & g_{22} & g_{23} \\
g_{31} & g_{32} & g_{33}
\end{array}\right|=\left(\frac{\partial \vec{r}}{\partial u_{1}} \cdot \frac{\partial \vec{r}}{\partial u_{2}} \times \frac{\partial \vec{r}}{\partial u_{3}}\right)^{2}
$$
where $g_{p q}$ are the coefficients of $\mathrm{d} u_{p}\, \mathrm{d} u_{q}$ in $\mathrm{d} s^{2}$.

Solution: - We know that
$$
\begin{aligned}
g_{p q} &=\vec{\alpha}_{p} \cdot \vec{\alpha}_{q}=\frac{\partial \vec{r}}{\partial u_{p}} \cdot \frac{\partial \vec{r}}{\partial u_{q}} \\
&=\frac{\partial x}{\partial u_{p}} \frac{\partial x}{\partial u_{q}}+\frac{\partial y}{\partial u_{p}} \frac{\partial y}{\partial u_{q}}+\frac{\partial z}{\partial u_{p}} \frac{\partial z}{\partial u_{q}}, \quad p, q=1,2,3
\end{aligned}
$$
Then
$$
\begin{aligned}
\left(\frac{\partial \vec{r}}{\partial u_{1}} \cdot \frac{\partial \vec{r}}{\partial u_{2}} \times \frac{\partial \vec{r}}{\partial u_{3}}\right)^{2} &=\left|\begin{array}{lll}
\frac{\partial x}{\partial u_{1}}
& \frac{\partial y}{\partial u_{1}} & \frac{\partial z}{\partial u_{1}} \\
\frac{\partial x}{\partial u_{2}} & \frac{\partial y}{\partial u_{2}} & \frac{\partial z}{\partial u_{2}} \\
\frac{\partial x}{\partial u_{3}} & \frac{\partial y}{\partial u_{3}} & \frac{\partial z}{\partial u_{3}}
\end{array}\right|\left|\begin{array}{lll}
\frac{\partial x}{\partial u_{1}} & \frac{\partial y}{\partial u_{1}} & \frac{\partial z}{\partial u_{1}} \\
\frac{\partial x}{\partial u_{2}} & \frac{\partial y}{\partial u_{2}} & \frac{\partial z}{\partial u_{2}} \\
\frac{\partial x}{\partial u_{3}} & \frac{\partial y}{\partial u_{3}} & \frac{\partial z}{\partial u_{3}}
\end{array}\right| \\
&=\left|\begin{array}{lll}
\frac{\partial x}{\partial u_{1}} & \frac{\partial y}{\partial u_{1}} & \frac{\partial z}{\partial u_{1}} \\
\frac{\partial x}{\partial u_{2}} & \frac{\partial y}{\partial u_{2}} & \frac{\partial z}{\partial u_{2}} \\
\frac{\partial x}{\partial u_{3}} & \frac{\partial y}{\partial u_{3}} & \frac{\partial z}{\partial u_{3}}
\end{array}\right|\left|\begin{array}{lll}
\frac{\partial x}{\partial u_{1}} & \frac{\partial x}{\partial u_{2}} & \frac{\partial x}{\partial u_{3}} \\
\frac{\partial y}{\partial u_{1}} & \frac{\partial y}{\partial u_{2}} & \frac{\partial y}{\partial u_{3}} \\
\frac{\partial z}{\partial u_{1}} & \frac{\partial z}{\partial u_{2}} & \frac{\partial z}{\partial u_{3}}
\end{array}\right| \\
&
=\left|\begin{array}{lll}
g_{11} & g_{12} & g_{13} \\
g_{21} & g_{22} & g_{23} \\
g_{31} & g_{32} & g_{33}
\end{array}\right|=g
\end{aligned}
$$
Example: - (b) Show that the volume element in general coordinates is $\sqrt{g}\, \mathrm{d} u_{1}\, \mathrm{d} u_{2}\, \mathrm{d} u_{3}$.

Solution: - The volume element is given by
$$
\begin{aligned}
\mathrm{d} V &=\left|\left(\frac{\partial \vec{r}}{\partial u_{1}} \mathrm{~d} u_{1}\right) \cdot\left(\frac{\partial \vec{r}}{\partial u_{2}} \mathrm{~d} u_{2}\right) \times\left(\frac{\partial \vec{r}}{\partial u_{3}} \mathrm{~d} u_{3}\right)\right| \\
&=\left|\frac{\partial \vec{r}}{\partial u_{1}} \cdot \frac{\partial \vec{r}}{\partial u_{2}} \times \frac{\partial \vec{r}}{\partial u_{3}}\right| \mathrm{d} u_{1}\, \mathrm{d} u_{2}\, \mathrm{d} u_{3} \\
&=\sqrt{g}\, \mathrm{d} u_{1}\, \mathrm{d} u_{2}\, \mathrm{d} u_{3}
\end{aligned}
$$

## LESSON 4 RANDOM VARIABLE AND MATHEMATICAL EXPECTATION

## Sample space

The set of points representing the possible outcomes of an experiment is called the sample space of the experiment.

Example: - (1) In tossing one coin, the sample space is $S=\{H, T\}$
(2) Two coins are tossed,
$$
S=\{H H, H T, T H, T T\}
$$
(3) In throwing two dice,
$$
\begin{aligned}
S=\{&(1,1),(1,2),(1,3),(1,4),(1,5),(1,6) \\
&(2,1),(2,2),(2,3),(2,4),(2,5),(2,6) \\
&\vdots \\
&(6,1),(6,2),(6,3),(6,4),(6,5),(6,6)\}
\end{aligned}
$$
Total outcomes $=36$

## Random Variable

A random variable is a real valued function defined on a sample space. When an experiment is performed, several outcomes are possible; corresponding to each outcome, a number can be associated.
Example: - If two coins are tossed, the possible outcomes are TT, TH, HT, HH. Let $X$ denote the number of heads. The numbers associated with the outcomes are:

Outcome: TT, TH, HT, HH

No. of heads, $X$: 0, 1, 1, 2

The variable $X$ is said to be a random variable, and may be defined as follows: "Let $S$ be the sample space associated with a given experiment. A real-valued function defined on $S$ and taking values in $R=(-\infty, \infty)$ (the real numbers) is called a random variable (or chance variable, or stochastic variable, or variate)."

## Discrete and Continuous Sample Space

A sample space that consists of a finite number, or an infinite sequence, of points is called a discrete sample space; one that consists of one or more intervals of points is called a continuous sample space.

## Discrete and Continuous Random Variable

A random variable defined on a discrete sample space is called a discrete r.v.; equivalently, if a r.v. takes at most a countable number of values, it is called a discrete r.v. A r.v. is said to be continuous if it can take all possible values between certain limits, i.e., in a certain interval.

## Example: -

1) If $X$ represents the sum of points on two dice, $X$ is a discrete r.v.

2) If $X$ represents the height or weight of students in a class, then it is a continuous r.v.

3) If $X$ represents the amount of rainfall, it is a continuous r.v.

## Density function (d.f.) or probability density function (p.d.f.)

A function associated with a discrete r.v. $X$ such that $f(x)=\operatorname{prob}[X=x]$ is called the density function of $X$.
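The definition $f(x)=P[X=x]$ can be checked numerically for the two-coin toss; this is a quick Python sketch (the names are arbitrary):

```python
from itertools import product
from collections import Counter

# Density function f(x) = P[X = x] for X = number of heads in two coin tosses
outcomes = list(product("HT", repeat=2))          # HH, HT, TH, TT
counts = Counter(o.count("H") for o in outcomes)
f = {x: c / len(outcomes) for x, c in counts.items()}

print(f[0], f[1], f[2])    # 0.25 0.5 0.25
print(sum(f.values()))     # probabilities sum to 1.0
```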
Example: - In tossing two coins, Outcomes $=\{HH, TH, HT, TT\}$

$$ \begin{aligned} & X=\{0,1,2\} \\ & f(0)=P[X=0]=\frac{1}{4} \\ & f(1)=P[X=1]=\frac{2}{4}=\frac{1}{2} \\ & f(2)=P[X=2]=\frac{1}{4} \end{aligned} $$

Example: - In throwing two dice, the sample space of the sum of points on the two dice is

$$ S=\{2,3, \ldots, 12\} $$

$$ \begin{aligned} & f(2)=P[X=2]=P\{(1,1)\}=\frac{1}{36}=f(12) \\ & f(3)=\frac{2}{36}=f(11) \\ & f(4)=f(10)=\frac{3}{36} \\ & f(5)=f(9)=\frac{4}{36} \\ & f(6)=f(8)=\frac{5}{36} \\ & f(7)=P[X=7]=\frac{6}{36} \end{aligned} $$

Also $\sum f\left(x_{i}\right)=1$, where $X=x_{i}=2,3, \ldots, 12$.

## Distribution function

For a r.v. $X$, the function $F(x)=P(X \leq x)$ is called the distribution function of $X$, or the cumulative distribution. Since $F(x)=P(X \leq x)$ and $f(x)$ is the p.d.f., we have

$$ F(x)=\sum_{t \leq x} f(t) $$

For example,

$$ P(X \leq 2)=\sum_{x \leq 2} f(x)=f(0)+f(1)+f(2) $$

Example: - A r.v.
$X$ has the following distribution:

| $X$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| $f(x)$ | $k$ | $3k$ | $5k$ | $8k$ | $9k$ | $11k$ | $12k$ | $14k$ | $17k$ |

(1) Find $k$. As $\sum f(x)=1$,

$$ \Rightarrow \quad k+3k+\ldots+17k=1 \Rightarrow 80k=1 \Rightarrow k=\frac{1}{80} $$

(2) Find $P(X<2)$:

$$ P(X<2)=P(X=0)+P(X=1)=k+3k=4k=\frac{4}{80}=\frac{1}{20} $$

$$ P(X<3)=P(0)+P(1)+P(2)=\frac{9}{80} $$

(3) $P(X \geq 3)=P(3)+P(4)+\ldots+P(8)=\frac{71}{80}$

Also $P(X \geq 3)=1-P(X<3)=1-\frac{9}{80}=\frac{71}{80}$

(4) $P(0<X<5)=P(1)+P(2)+P(3)+P(4)=\frac{25}{80}$

The distribution function $F(x)$ is obtained as

| $X$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| $F(x)$ | $\frac{1}{80}$ | $\frac{4}{80}$ | $\frac{9}{80}$ | $\frac{17}{80}$ | $\frac{26}{80}$ | $\frac{37}{80}$ | $\frac{49}{80}$ | $\frac{63}{80}$ | $\frac{80}{80}=1$ |

## Joint density function

Let $X, Y$ be two r.v.'s. The joint d.f. gives the probability that $X$ will assume a value $x$ and $Y$ will assume a value $y$, i.e.,

$$ f(x, y)=P(X=x, Y=y) $$

Example: - From a pack of 52 cards, two cards are drawn, without replacing the first card drawn. Let

$X \rightarrow$ no. of spades in the $1^{\text {st }}$ draw

$Y \rightarrow$ no. of spades in the $2^{\text {nd }}$ draw

Then
$$ \begin{aligned} & f(0,0)=P[X=0, Y=0]=\frac{39}{52} \times \frac{38}{51} \\ & f(0,1)=P[X=0, Y=1]=\frac{39}{52} \times \frac{13}{51} \\ & f(1,0)=P[X=1, Y=0]=\frac{13}{52} \times \frac{39}{51} \\ & f(1,1)=P[X=1, Y=1]=\frac{13}{52} \times \frac{12}{51} \end{aligned} $$

Let A and B be two events; then

$$ P(B / A)=\frac{P(A \cap B)}{P(A)}, \quad P(A / B)=\frac{P(A \cap B)}{P(B)} $$

## Conditional density function

$f(y / x)$ is defined as

$$ f(y / x)=P[Y=y \mid X=x] $$

It gives the distribution of $Y$ when $X$ is fixed:

$$ f(y / x)=\frac{f(x, y)}{f(x)} $$

and similarly $f(x / y)=\frac{f(x, y)}{f(y)}$.

## Marginal density function

Here

$$ f(x, y)=f(y / x) \cdot f(x) \quad \ldots(1) $$

The conditional distribution of $Y=y$ when $X=x$ is fixed satisfies

$$ \sum_{y} f(y / x)=1 $$

Summing over all possible values of $y$ on both sides of (1), we get

$$ \sum_{y} f(x, y)=\sum_{y} f(y / x) f(x) \Rightarrow \sum_{y} f(x, y)=f(x) $$

This $f(x)$ is known as the marginal density function of $X$. Similarly, $g(y)=\sum_{x} f(x, y)$ gives the marginal density function of $Y$.

Example: - A bag contains 2 white and 4 black balls; two balls are drawn one after the other without replacement. Find the probability of drawing two white balls. Let
$$ X, Y= \begin{cases}0 & \text { for a black ball } \\ 1 & \text { for a white ball }\end{cases} $$

Solution: -

$$ \begin{aligned} f(0,0) & =f(0) f(0 / 0)=\frac{4}{6} \cdot \frac{3}{5}=\frac{6}{15} \\ f(0,1) & =f(0) f(1 / 0)=\frac{4}{6} \cdot \frac{2}{5}=\frac{4}{15} \\ f(1,0) & =f(1) f(0 / 1)=\frac{2}{6} \cdot \frac{4}{5}=\frac{4}{15} \\ f(1,1) & =f(1) f(1 / 1)=\frac{2}{6} \cdot \frac{1}{5}=\frac{1}{15} \end{aligned} $$

So the probability of drawing two white balls is $f(1,1)=\frac{1}{15}$.

The marginal density function of $X$ is

$$ \begin{aligned} f(x) & =\sum_{y} f(x, y) \\ \Rightarrow \quad f(0) & =\sum_{y} f(0, y)=f(0,0)+f(0,1)=\frac{6}{15}+\frac{4}{15}=\frac{10}{15}=\frac{2}{3} \\ \text { and } \quad f(1) & =\sum_{y} f(1, y)=f(1,0)+f(1,1)=\frac{4}{15}+\frac{1}{15}=\frac{5}{15}=\frac{1}{3} \end{aligned} $$

The conditional density function of $Y$ for fixed $x$ can be obtained from (1):

$$ \begin{aligned} f(y / x) & =\frac{f(x, y)}{f(x)} \\ \therefore \quad f(0 / 1) & =\frac{f(1,0)}{f(1)}=\frac{4 / 15}{1 / 3}=\frac{4}{5} \\ f(1 / 1) & =\frac{f(1,1)}{f(1)}=\frac{1 / 15}{1 / 3}=\frac{1}{5} \end{aligned} $$

Continuous r.v.: $-\infty<X<\infty$

## Density function

A d.f. for a continuous r.v. $X$ is a function $f(x)$ that possesses the following properties:

(i) $f(x) \geq 0$

(ii) $\int_{-\infty}^{\infty} f(x) d x=1$

(iii) $\int_{a}^{b} f(x) d x=P[a<X<b]$, where $a<b$

## Distribution function

$$ F(x)=\int_{-\infty}^{x} f(t) d t=P[X \leq x], $$

provided the integral exists.
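The three properties above can be checked numerically; as an illustration (a sketch, with an arbitrarily chosen density), take $f(x)=3x^{2}$ on $[0,1]$ and zero elsewhere:

```python
# Numerical check of properties (ii) and (iii) for the illustrative
# density f(x) = 3x^2 on [0, 1], zero elsewhere
def f(x):
    return 3 * x**2 if 0 <= x <= 1 else 0.0

def integrate(g, a, b, n=100_000):
    # simple midpoint rule
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(f, 0, 1)     # property (ii): should be 1
prob = integrate(f, 0.5, 1)    # P(0.5 < X < 1) = 1 - 0.5^3 = 0.875
print(round(total, 3), round(prob, 3))
```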
## Marginal density function

$$ \begin{aligned} & f(x)=\int_{-\infty}^{\infty} f(x, y) d y \rightarrow \text { marginal density function of } X \\ & f(y)=\int_{-\infty}^{\infty} f(x, y) d x \rightarrow \text { marginal density function of } Y \end{aligned} $$

## Independent Random Variables

Two r.v.'s $X$ and $Y$ are said to be independent if $f(x, y)=f(x) f(y)$.

Example: - Let the joint d.f. $f(x, y)$ of the r.v.'s $X$ and $Y$ be

$$ f(x, y)=\left\{\begin{array}{lll} k\left(x y+e^{x}\right) & , & 0<x, y<1 \\ 0 & , & \text { otherwise } \end{array}\right. $$

1) Determine $k$.

2) Examine whether $X$ and $Y$ are independent.

Solution: - 1) For a d.f., $\iint f(x, y) d x d y=1$

$$ \begin{aligned} & \therefore \quad k \int_{0}^{1} \int_{0}^{1}\left(x y+e^{x}\right) d x d y=1 \\ & \Rightarrow \quad k \int_{0}^{1}\left[\frac{x^{2}}{2} y+e^{x}\right]_{0}^{1} d y=1 \\ & \Rightarrow \quad k \int_{0}^{1}\left(\frac{y}{2}+e-1\right) d y=1 \\ & \Rightarrow \quad k\left[\frac{y^{2}}{4}+e y-y\right]_{0}^{1}=1 \\ & \Rightarrow \quad k\left[\frac{1}{4}+e-1\right]=1 \\ & \Rightarrow \quad k\left(e-\frac{3}{4}\right)=1 \\ & \Rightarrow \quad k=\frac{4}{4 e-3} \end{aligned} $$

2) Compute the marginals $f(x)=\int_{0}^{1} f(x, y) d y$ and $f(y)=\int_{0}^{1} f(x, y) d x$, and check whether $f(x, y)=f(x) f(y)$. Now

$$ \begin{aligned} f(x) & =\int_{0}^{1} k\left(x y+e^{x}\right) d y=k\left[\frac{x}{2}+e^{x}\right] \\ f(y) & =k \int_{0}^{1}\left(x y+e^{x}\right) d x=k\left[\frac{x^{2}}{2} y+e^{x}\right]_{0}^{1}=k\left[\frac{y}{2}+e-1\right] \end{aligned} $$

Since $f(x, y) \neq f(x) f(y)$, $X$ and $Y$ are not independent.
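Both conclusions can be verified numerically; this is a rough sketch (the grid size and the test point (0.5, 0.2) are arbitrary choices):

```python
import math

# Verify k = 4/(4e - 3) for f(x, y) = k(xy + e^x) on (0, 1)^2
k = 4 / (4 * math.e - 3)

def integrate2(g, n=400):
    # midpoint rule on the unit square
    h = 1.0 / n
    return sum(g((i + 0.5) * h, (j + 0.5) * h)
               for i in range(n) for j in range(n)) * h * h

total = integrate2(lambda x, y: k * (x * y + math.exp(x)))
print(round(total, 4))   # close to 1, confirming k

# Marginals f(x) = k(x/2 + e^x) and f(y) = k(y/2 + e - 1); at (0.5, 0.2)
# their product differs from the joint density, so X, Y are not independent
x, y = 0.5, 0.2
fx = k * (x / 2 + math.exp(x))
fy = k * (y / 2 + math.e - 1)
joint = k * (x * y + math.exp(x))
print(abs(fx * fy - joint) > 1e-4)   # True
```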
Example: -

$$ f(x, y)=\left\{\begin{array}{lll} 2 & , & 0<x<1,\ 0<y<x \\ 0 & , & \text { otherwise } \end{array}\right. $$

1) Find the marginal d.f.'s of $X$ and $Y$.

2) Find the conditional d.f. of $Y$ given $X=x$, and the conditional d.f. of $X$ given $Y=y$.

3) Check whether $X$ and $Y$ are independent or not.

Solution: - (1)

$$ \begin{aligned} f(x) & =\int_{0}^{x} f(x, y) d y=\int_{0}^{x} 2 d y=(2 y)_{0}^{x}=2 x, \quad 0<x<1 \\ f(y) & =\int_{y}^{1} f(x, y) d x=2 \int_{y}^{1} d x=(2 x)_{y}^{1}=2(1-y), \quad 0<y<1 \end{aligned} $$

So $f(x, y) \neq f(x) f(y)$

$\Rightarrow \quad X$ and $Y$ are not independent.

(2) $f(Y / X=x)=\frac{f(x, y)}{f(x)}=\frac{2}{2 x}=\frac{1}{x}, \quad 0<y<x$

$$ f(X / Y=y)=\frac{f(x, y)}{f(y)}=\frac{2}{2(1-y)}=\frac{1}{1-y}, \quad y<x<1 $$

Example: - A continuous r.v. $X$ has p.d.f. $f(x)=3 x^{2}, 0 \leq x \leq 1$. Find $a$ and $b$ such that (1) $P(X \leq a)=P(X \geq a)$ and (2) $P(X>b)=0.05$.

Solution: - Since $P(X \leq a)=P(X>a)$, each must be equal to $\frac{1}{2}$, because the total probability is always one, i.e.,
$\int_{0}^{a} f(x) d x+\int_{a}^{1} f(x) d x=1$

$$ \begin{aligned} & P(X \leq a)=\frac{1}{2} \Rightarrow \int_{0}^{a} f(x) d x=\frac{1}{2} \\ \Rightarrow \quad & 3 \int_{0}^{a} x^{2} d x=\frac{1}{2} \Rightarrow 3\left(\frac{x^{3}}{3}\right)_{0}^{a}=\frac{1}{2} \\ \Rightarrow \quad & a^{3}=\frac{1}{2} \Rightarrow a=\left(\frac{1}{2}\right)^{1 / 3} \end{aligned} $$

Also $\int_{a}^{1} f(x) d x=\int_{a}^{1} 3 x^{2} d x=\frac{1}{2}$

$\Rightarrow \quad 3\left(\frac{x^{3}}{3}\right)_{a}^{1}=\frac{1}{2} \Rightarrow 1-a^{3}=\frac{1}{2}$

$\Rightarrow \quad a^{3}=\frac{1}{2} \Rightarrow a=\left(\frac{1}{2}\right)^{1 / 3}$

(2) $P(X>b)=0.05 \Rightarrow \int_{b}^{1} f(x) d x=0.05$

$\Rightarrow \quad 3 \int_{b}^{1} x^{2} d x=0.05 \Rightarrow 3\left(\frac{x^{3}}{3}\right)_{b}^{1}=0.05$

$\Rightarrow \quad 1-b^{3}=\frac{1}{20} \Rightarrow b^{3}=\frac{19}{20}$

$\Rightarrow \quad b=\left(\frac{19}{20}\right)^{1 / 3}$

Example: - If $X$ is a continuous r.v.
with

$$ f(x)= \begin{cases}a x, & 0 \leq x \leq 1 \\ a, & 1 \leq x \leq 2 \\ -a x+3 a, & 2 \leq x \leq 3 \\ 0, & \text { elsewhere }\end{cases} $$

(1) Find the constant $a$. (2) Find $P(X \leq 1.5)$.

Solution: - (1)

$$ \int_{-\infty}^{0} f(x) d x+\int_{0}^{1} f(x) d x+\int_{1}^{2} f(x) d x+\int_{2}^{3} f(x) d x+\int_{3}^{\infty} f(x) d x=1 $$

$\Rightarrow \quad 0+\int_{0}^{1} a x\, d x+\int_{1}^{2} a\, d x+\int_{2}^{3}(3 a-a x) d x+0=1$

$\Rightarrow \quad a\left(\frac{x^{2}}{2}\right)_{0}^{1}+a(x)_{1}^{2}+a\left(3 x-\frac{x^{2}}{2}\right)_{2}^{3}=1$

$$ \begin{array}{ll} \Rightarrow & \frac{a}{2}+a+a\left(9-\frac{9}{2}-6+2\right)=1 \\ \Rightarrow & \frac{3 a}{2}+\frac{a}{2}=1 \\ \Rightarrow & 2 a=1 \Rightarrow a=\frac{1}{2} \end{array} $$

(2) Now

$$ \begin{aligned} P(X \leq 1.5) & =\int_{-\infty}^{1.5} f(x) d x=\int_{-\infty}^{0} f(x) d x+\int_{0}^{1} f(x) d x+\int_{1}^{1.5} f(x) d x \\ & =0+\int_{0}^{1} a x\, d x+\int_{1}^{3 / 2} a\, d x \\ & =\frac{a}{2}+a\left(\frac{3}{2}-1\right)=\frac{a}{2}+\frac{a}{2}=a=\frac{1}{2} \end{aligned} $$

Example: - From the given bivariate probability distribution, (1) obtain the marginal distributions of $X$ and $Y$,
(2) obtain the conditional distribution of $X$ given $Y=2$.

| $Y \backslash X$ | -1 | 0 | 1 | $f(y)=\sum_{x} f(x, y)$ |
| :---: | :---: | :---: | :---: | :---: |
| 0 | $1/15$ | $2/15$ | $1/15$ | $4/15$ |
| 1 | $3/15$ | $2/15$ | $1/15$ | $6/15$ |
| 2 | $2/15$ | $1/15$ | $2/15$ | $5/15$ |
| $f(x)=\sum_{y} f(x, y)$ | $6/15$ | $5/15$ | $4/15$ | 1 |

Solution: (1) Marginal distribution of $X$: from the above table, $f(x)=\sum_{y} f(x, y)$. Therefore

$$ \begin{aligned} & f(-1)=P(X=-1)=6 / 15 \\ & f(0)=P(X=0)=5 / 15 \\ & f(1)=P(X=1)=4 / 15 \end{aligned} $$

Marginal distribution of $Y$: from the above table, $f(y)=\sum_{x} f(x, y)$. Therefore

$$ P(Y=0)=4 / 15, \quad P(Y=1)=6 / 15, \quad P(Y=2)=5 / 15 $$

(2) For the conditional distribution of $X$ given $Y=2$, we get

$$ P(X=x / Y=2)=\frac{P(X=x, Y=2)}{P(Y=2)} $$

For $X=-1$: $P(X=-1 / Y=2)=\frac{2 / 15}{5 / 15}=\frac{2}{5}$

Similarly $P(X=0 / Y=2)=\frac{1 / 15}{5 / 15}=\frac{1}{5}$ and $P(X=1 / Y=2)=\frac{2 / 15}{5 / 15}=\frac{2}{5}$

## MATHEMATICAL EXPECTATION

Let $X$ be a r.v. with p.d.f. $f(x)$; then its mathematical expectation (or its mean value) is defined as

$$ E(X)=\sum x f(x) $$

If $X$ assumes values $x_{1}, x_{2}, \ldots, x_{n}, \ldots$ with probabilities $f\left(x_{1}\right), f\left(x_{2}\right), \ldots, f\left(x_{n}\right), \ldots$, then

$$ E(X)=\sum_{i=1}^{\infty} x_{i} f\left(x_{i}\right) $$

Also $E(X)=$ mean of the distribution, and $\sum f\left(x_{i}\right)=1$.

The expected value or mathematical expectation of a function $g(X)$ of a discrete r.v. $X$, whose p.d.f. is $f(x)$, is given by

$$ E[g(X)]=\sum_{i=1}^{\infty} g\left(x_{i}\right) f\left(x_{i}\right), $$

where $f\left(x_{i}\right)=P\left(X=x_{i}\right)$.

Example: - Three coins are tossed and $X$, the number of heads, is a r.v.; the total outcomes are $\{$HHH, HTH, HHT, THH, THT, TTH, HTT, TTT$\}$. Here $X$ can take the values $X=0,1,2,3$ with $f(x)=\frac{1}{8}, \frac{3}{8}, \frac{3}{8}, \frac{1}{8}$.

Then

$$ E(X)=\sum x f(x)=0 \cdot \frac{1}{8}+1 \cdot \frac{3}{8}+2 \cdot \frac{3}{8}+3 \cdot \frac{1}{8}=\frac{12}{8}=\frac{3}{2} \times \text { Rs. } 1=\text { Rs. } 1.50 $$

If $g(x)=x^{2}$, then $g(x)=0^{2}, 1^{2}, 2^{2}, 3^{2}$

$$ \therefore \quad E[g(X)]=\sum g\left(x_{i}\right) f\left(x_{i}\right)=0^{2} \times \frac{1}{8}+1^{2} \times \frac{3}{8}+2^{2} \times \frac{3}{8}+3^{2} \times \frac{1}{8}=\frac{3}{8}+\frac{12}{8}+\frac{9}{8}=\text { Rs. } 3.00 $$

## Mathematical Expectation for a Continuous r.v.

Let $X$ be a continuous r.v. with p.d.f. $f(x)$; then its mathematical expectation is

$$ E(X)=\int_{-\infty}^{\infty} x f(x) d x $$

For a function $g(x)$, $\quad E[g(X)]=\int_{-\infty}^{\infty} g(x) f(x) d x$

Theorem: - If $C$ is a finite real number and if $E(X)$ exists, then $E(C X)=C E(X)$.

Proof: - For a continuous r.v.,
$$ E(C X)=\int_{-\infty}^{\infty} C x f(x) d x=C \int_{-\infty}^{\infty} x f(x) d x=C E(X) $$

For a discrete r.v.,

$$ E(C X)=\sum C x f(x)=C \sum x f(x)=C E(X) $$

Result: - $E[a+C X]=E(a)+C E(X)=a+C E(X)$

Proof: - Now, for a discrete r.v., $E(a)=\sum a f(x)=a \sum f(x)=a \cdot 1=a$; for a continuous r.v.,

$$ E(a)=a \int_{-\infty}^{\infty} f(x) d x=a \cdot 1=a \quad \ldots(1) $$

Now, by definition,

$$ E[a+C X]=\sum(a+C x) f(x)=\sum a f(x)+\sum C x f(x)=E(a)+C E(X) $$

$\Rightarrow \quad E(a+C X)=a+C E(X)$, using (1).

Theorem: - The expectation of the sum of two r.v.'s is equal to the sum of their expectations, i.e., $E(X+Y)=E(X)+E(Y)$.

Proof: - (Discrete case.) Let $X$ and $Y$ be two discrete r.v.'s, and let $f\left(x_{i}, y_{j}\right)$ be the joint p.d.f. of $X$ and $Y$; then $(X+Y)$ is also a r.v.
$$ f\left(x_{i}, y_{j}\right)=P\left(X=x_{i}, Y=y_{j}\right) $$

Now, by definition,

$$ \begin{aligned} E(X+Y) & =\sum_{i} \sum_{j}\left(x_{i}+y_{j}\right) f\left(x_{i}, y_{j}\right) \\ & =\sum_{i} \sum_{j} x_{i} f\left(x_{i}, y_{j}\right)+\sum_{i} \sum_{j} y_{j} f\left(x_{i}, y_{j}\right) \\ & =\sum_{i} x_{i} \sum_{j} f\left(x_{i}, y_{j}\right)+\sum_{j} y_{j} \sum_{i} f\left(x_{i}, y_{j}\right) \\ & =\sum_{i} x_{i} f\left(x_{i}\right)+\sum_{j} y_{j} f\left(y_{j}\right) \\ \Rightarrow \quad E(X+Y) & =E(X)+E(Y) \end{aligned} $$

For a continuous r.v.,
$$ E(X+Y)=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}(x+y) f(x, y) d x d y $$

$$ \begin{aligned} & =\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x f(x, y) d x d y+\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} y f(x, y) d x d y \\ & =\int_{-\infty}^{\infty} x\left[\int_{-\infty}^{\infty} f(x, y) d y\right] d x+\int_{-\infty}^{\infty} y\left[\int_{-\infty}^{\infty} f(x, y) d x\right] d y \\ & =\int_{-\infty}^{\infty} x f(x) d x+\int_{-\infty}^{\infty} y f(y) d y \\ \Rightarrow \quad E(X+Y) & =E(X)+E(Y) \end{aligned} $$

In general,

$$ E\left(X_{1}+X_{2}+\ldots+X_{n}\right)=E\left(X_{1}\right)+E\left(X_{2}\right)+\ldots+E\left(X_{n}\right) $$

Theorem: - If $Y=a_{1} X_{1}+a_{2} X_{2}+\ldots+a_{n} X_{n}$, where the $a$'s are constants, then

$$ E(Y)=a_{1} E\left(X_{1}\right)+a_{2} E\left(X_{2}\right)+\ldots+a_{n} E\left(X_{n}\right) $$

Proof: - As

$$ \begin{aligned} E(Y) & =E\left(a_{1} X_{1}+a_{2} X_{2}+\ldots+a_{n} X_{n}\right) \\ & =E\left(a_{1} X_{1}\right)+E\left(a_{2} X_{2}\right)+\ldots+E\left(a_{n} X_{n}\right) \\ & =a_{1} E\left(X_{1}\right)+a_{2} E\left(X_{2}\right)+\ldots+a_{n} E\left(X_{n}\right) \quad[\because E(a X)=a E(X)] \end{aligned} $$

Theorem: - If $X$ is a continuous r.v. and $a$ and $b$ are constants, then $E(a X+b)=a E(X)+b$, provided all the expectations exist.
Proof: - By definition, we have

$$ \begin{aligned} E(a X+b) & =\int_{-\infty}^{\infty}(a x+b) f(x) d x \\ & =\int_{-\infty}^{\infty} a x f(x) d x+\int_{-\infty}^{\infty} b f(x) d x \\ & =a \int_{-\infty}^{\infty} x f(x) d x+b \int_{-\infty}^{\infty} f(x) d x \\ \Rightarrow \quad E(a X+b) & =a E(X)+b \quad\left[\because \int_{-\infty}^{\infty} f(x) d x=1\right] \end{aligned} $$

Case 1: - If $b=0$, $E(a X)=a E(X)$.

Case 2: - If $a=1$, $b=-\bar{X}=-E(X)$, we get $E(X-\bar{X})=0$.

Example: - If $f(x, y)=e^{-(x+y)}$, $x, y \geq 0$, is the joint p.d.f. of the r.v.'s $X$ and $Y$, find $P(1<X<2,\ 0<Y<2)$.

Solution: - First we check that

$$ \int_{0}^{\infty} \int_{0}^{\infty} f(x, y) d x d y=\int_{0}^{\infty} e^{-x} d x \int_{0}^{\infty} e^{-y} d y=1 \cdot 1=1 $$

Then

$$ \begin{aligned} P(1<X<2,\ 0<Y<2) & =\int_{1}^{2} \int_{0}^{2} e^{-(x+y)} d y\, d x=\int_{1}^{2} e^{-x} d x \int_{0}^{2} e^{-y} d y \\ & =\left(1-e^{-2}\right)\left(e^{-1}-e^{-2}\right)=\left(1-\frac{1}{e^{2}}\right)\left(\frac{1}{e}-\frac{1}{e^{2}}\right) \\ & =\left(\frac{e^{2}-1}{e^{2}}\right)\left(\frac{e-1}{e^{2}}\right)=\frac{\left(e^{2}-1\right)(e-1)}{e^{4}}=\frac{e^{3}-e^{2}-e+1}{e^{4}} \end{aligned} $$

Example: - Let

$$ f\left(x_{1}, x_{2}\right)= \begin{cases}C\left(x_{1} x_{2}+e^{x_{1}}\right), & 0<x_{1}, x_{2}<1 \\ 0, & \text { otherwise }\end{cases} $$
(i) Determine $C$. (ii) Examine whether $X_{1}$ and $X_{2}$ are independent or not.

Solution: - (i) For $f\left(x_{1}, x_{2}\right)$ to be a density function we must have

$$ \begin{aligned} & \int_{0}^{1} \int_{0}^{1} f\left(x_{1}, x_{2}\right) d x_{1} d x_{2}=1 \\ & \Rightarrow \quad C \int_{0}^{1}\left[\frac{x_{1}^{2} x_{2}}{2}+e^{x_{1}}\right]_{0}^{1} d x_{2}=1 \\ & \Rightarrow \quad C \int_{0}^{1}\left[\frac{1}{2} x_{2}+e-1\right] d x_{2}=1 \\ & \Rightarrow \quad C\left[\frac{1}{2} \times \frac{x_{2}^{2}}{2}+e x_{2}-x_{2}\right]_{0}^{1}=1 \\ & \Rightarrow \quad C\left[\frac{1}{4}+e-1\right]=1 \\ & \Rightarrow \quad C[1+4 e-4]=4 \\ & \text { or } \quad C=\frac{4}{4 e-3} \end{aligned} $$

(ii) As in the earlier example with $k\left(x y+e^{x}\right)$, the marginals are $f\left(x_{1}\right)=C\left(\frac{x_{1}}{2}+e^{x_{1}}\right)$ and $f\left(x_{2}\right)=C\left(\frac{x_{2}}{2}+e-1\right)$, so $f\left(x_{1}, x_{2}\right) \neq f\left(x_{1}\right) f\left(x_{2}\right)$ and $X_{1}, X_{2}$ are not independent.

Theorem: - Show that the expectation of the product of independent r.v.'s is equal to the product of their expectations, i.e., $E(X Y)=E(X) \cdot E(Y)$.

Proof: - Let $X$ and $Y$ be two independent random variables, where $X$ takes values $x_{1}, x_{2}, \ldots, x_{n}$ with d.f. $f\left(x_{1}\right), f\left(x_{2}\right), \ldots, f\left(x_{n}\right)$ and $Y$ takes values $y_{1}, y_{2}, \ldots, y_{m}$ with d.f. $f\left(y_{1}\right), f\left(y_{2}\right), \ldots, f\left(y_{m}\right)$.

$$ \therefore \quad E(X)=\sum_{i=1}^{n} x_{i} f\left(x_{i}\right), \quad E(Y)=\sum_{j=1}^{m} y_{j} f\left(y_{j}\right) $$

Let $f\left(x_{i}, y_{j}\right)$ be the joint p.d.f. of $X$ and $Y$. Since $X$ and $Y$ are independent,

$$ f\left(x_{i}, y_{j}\right)=f\left(x_{i}\right) f\left(y_{j}\right) $$

Now

$$ \begin{aligned} E(X Y) & =\sum_{i=1}^{n} \sum_{j=1}^{m} x_{i} y_{j} f\left(x_{i}, y_{j}\right) \\ & =\sum_{i=1}^{n} \sum_{j=1}^{m} x_{i} y_{j} f\left(x_{i}\right) f\left(y_{j}\right) \\ & =\sum_{i} x_{i} f\left(x_{i}\right) \sum_{j} y_{j} f\left(y_{j}\right) \\ \Rightarrow \quad E(X Y) & =E(X) E(Y) \end{aligned} $$

Similarly for continuous r.v.'s, $f(x, y)=f(x) f(y)$, since $X$ and $Y$ are independent.
Now

$$ \begin{aligned} E(X Y) & =\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x y f(x, y) d x d y=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x y f(x) f(y) d x d y \\ & =\int_{-\infty}^{\infty} x f(x) d x \int_{-\infty}^{\infty} y f(y) d y=E(X) E(Y) \end{aligned} $$

In general, we get

$$ E\left(X_{1} X_{2} X_{3} \ldots X_{n}\right)=E\left(X_{1}\right) E\left(X_{2}\right) \ldots E\left(X_{n}\right) $$

Example: - Let $X$ represent the number on the face of a die; then

$$ \begin{array}{lcccccc} X: & 1 & 2 & 3 & 4 & 5 & 6 \\ f(x): & \frac{1}{6} & \frac{1}{6} & \frac{1}{6} & \frac{1}{6} & \frac{1}{6} & \frac{1}{6} \end{array} $$

Now

$$ E(X)=\sum x f(x)=\frac{1}{6}(1+2+\ldots+6)=\frac{21}{6}=\frac{7}{2} $$

And when $X$ is the sum of points when two dice are thrown, i.e.,

$$ \begin{array}{llllllllllll} X: & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ f(x): & \frac{1}{36} & \frac{2}{36} & \frac{3}{36} & \frac{4}{36} & \frac{5}{36} & \frac{6}{36} & \frac{5}{36} & \frac{4}{36} & \frac{3}{36} & \frac{2}{36} & \frac{1}{36} \end{array} $$

$$ \therefore \quad E(X)=\sum x f(x)=\frac{1}{36}[2+6+12+20+30+42+40+36+30+22+12]=\frac{252}{36}=7 $$

Example: - Let $X$ be the profit that a person makes in a business. He may earn Rs. 2800 with probability 0.5, he may lose Rs. 5500 with probability 0.3, and he may neither earn nor lose with probability 0.2.
Calculate $E(X)$.

Solution: - Here $P(X=2800)=0.5$, $P(X=-5500)=0.3$, $P(X=0)=0.2$. Then

$$ E(X)=\sum x f(x)=2800(0.5)+(-5500)(0.3)+(0)(0.2)=1400-1650=-250, $$

i.e., he may expect to lose Rs. 250.

Example: - A and B in turn throw an ordinary die for a prize of Rs. 44. The first to throw a "six" wins. If A has the first throw, what is his expectation? Also calculate B's expectation.

Solution: - The probability of getting a "six" on the die is $p=\frac{1}{6}$. A has the $1^{\text {st }}$ throw, so he can win on the $1^{\text {st }}, 3^{\text {rd }}, 5^{\text {th }}, \ldots$ throws. Hence A's chance (probability) of winning is

$$ \begin{aligned} & =\frac{1}{6}+\frac{5}{6} \times \frac{5}{6} \times \frac{1}{6}+\frac{5}{6} \times \frac{5}{6} \times \frac{5}{6} \times \frac{5}{6} \times \frac{1}{6}+\ldots \\ & =\frac{1}{6}\left[1+\left(\frac{5}{6}\right)^{2}+\left(\frac{5}{6}\right)^{4}+\ldots\right]=\frac{1}{6} \cdot \frac{1}{1-\left(\frac{5}{6}\right)^{2}}=\frac{1}{6} \cdot \frac{36}{11}=\frac{6}{11} \end{aligned} $$

$\therefore$ A's expectation $=44 \times \frac{6}{11}=$ Rs. 24.

Similarly, B can win on the $2^{\text {nd }}, 4^{\text {th }}, 6^{\text {th }}, \ldots$ throws. Hence B's chance of winning is

$$ =\frac{5}{6} \cdot \frac{1}{6}+\frac{5}{6} \cdot \frac{5}{6} \cdot \frac{5}{6} \cdot \frac{1}{6}+\ldots=\frac{5}{6} \cdot \frac{1}{6}\left[1+\left(\frac{5}{6}\right)^{2}+\ldots\right]=\frac{1}{6} \cdot \frac{5}{6} \cdot \frac{36}{11}=\frac{5}{11} $$

$\therefore$ B's expectation $=\frac{5}{11} \times 44=$ Rs. 20.

Example: - A bag contains a coin of value $M$ and a number of other coins whose aggregate value is $m$.
a person draws one at a time till he draws the coin $\mathrm{M}$, find the value of his expectation. Solution: - Let there be $\mathrm{K}$ other coins each of value $\mathrm{m} / \mathrm{K}$, so that their aggregate value is $\mathrm{m}$. he may draw the coin $\mathrm{M}$ at $1^{\text {st }}$ draw or $2^{\text {nd }}$ or $3^{\text {rd }}$ or....or $(\mathrm{K}+1)^{\text {th }}$ draw with probability $\frac{1}{\mathrm{~K}+1}, \frac{1}{\mathrm{~K}+1}, \frac{1}{\mathrm{~K}+1}, \ldots \cdots$ $\Theta \quad \frac{1}{\mathrm{~K}+1},\left[1-\frac{1}{\mathrm{~K}-1}\right] \frac{1}{\mathrm{~K}},\left[1-\frac{1}{\mathrm{~K}+1}\right]\left[1-\frac{1}{\mathrm{~K}}\right] \frac{1}{\mathrm{~K}-1}, \ldots \ldots$. $$ =\frac{1}{\mathrm{~K}+1}, \frac{1}{\mathrm{~K}+1}, \frac{1}{\mathrm{~K}+1}, \ldots \ldots \ldots $$ The corresponding amount drawn $\mathrm{X}$ is $$ \begin{aligned} & \mathrm{M}, \frac{\mathrm{m}}{\mathrm{K}}+\mathrm{M}, \frac{2 \mathrm{~m}}{\mathrm{~K}}+\mathrm{M}, \ldots \ldots ., \frac{(\mathrm{k}-1) \mathrm{m}}{\mathrm{K}}+\mathrm{M}, \frac{\mathrm{Km}}{\mathrm{k}}+\mathrm{M} \\ & \therefore \quad \mathrm{E}(\mathrm{X})=\frac{1}{\mathrm{~K}+1}\left[\mathrm{M}+\frac{\mathrm{m}}{\mathrm{K}}+\mathrm{M}+\frac{2 \mathrm{~m}}{\mathrm{~K}}+\mathrm{M}+\ldots \ldots+\frac{\mathrm{Km}}{\mathrm{K}}+\mathrm{M}\right] \\ &=\frac{1}{\mathrm{~K}+1}\left[\mathrm{M}+(\mathrm{K}+1)+\frac{\mathrm{m}}{\mathrm{K}}(1+2+3+\ldots \ldots+\mathrm{K})\right] \\ &=\mathrm{M}+\frac{1}{\mathrm{~K}+1} \cdot \frac{\mathrm{m}}{\mathrm{K}} \frac{\mathrm{K}(\mathrm{K}+1)}{2}=\mathrm{M}+\frac{\mathrm{m}}{2} \end{aligned} $$ ## LESSON 5 MOMENTS AND MOMENT GENERATING FUNCTIONS Moments: - Let $\mathrm{X}$ is a r. 
v.; then $\mathrm{E}\left[\mathrm{X}^{\mathrm{r}}\right]$, if it exists, is called the $\mathrm{r}^{\text {th }}$ moment of $\mathrm{X}$ about the origin and is denoted by $\mu_{\mathrm{r}}^{\prime}$, i.e.
$$
\mu_{\mathrm{r}}^{\prime}=\mathrm{E}\left[\mathrm{X}^{\mathrm{r}}\right]
$$
About some point ' $a$ ' it is defined as
$$
\mu_{\mathrm{r}}^{\prime}(\mathrm{a})=\mathrm{E}\left[(\mathrm{X}-\mathrm{a})^{\mathrm{r}}\right]
$$
and the $\mathrm{r}^{\text {th }}$ moment about the mean is
$$
\mu_{\mathrm{r}}=\mathrm{E}\left[(\mathrm{X}-\mathrm{E}(\mathrm{X}))^{\mathrm{r}}\right]=\mathrm{E}\left[(\mathrm{X}-\mu)^{\mathrm{r}}\right]
$$
Moments about the mean are called central moments. In the case of a discrete r.v.,
$$
\begin{gathered}
\mu_{\mathrm{r}}^{\prime}=\mathrm{E}\left[\mathrm{X}^{\mathrm{r}}\right]=\sum \mathrm{x}^{\mathrm{r}} \mathrm{f}(\mathrm{x}) \quad (1) \\
\text{and} \quad \mu_{\mathrm{r}}=\mathrm{E}\left[(\mathrm{X}-\mu)^{\mathrm{r}}\right]=\sum_{\mathrm{i}=1}^{\mathrm{n}}\left(\mathrm{x}_{\mathrm{i}}-\mu\right)^{\mathrm{r}} \mathrm{f}\left(\mathrm{x}_{\mathrm{i}}\right) \quad (2)
\end{gathered}
$$
where $\mu=\mathrm{E}(\mathrm{X})$ and $\mathrm{f}\left(\mathrm{x}_{\mathrm{i}}\right)=\mathrm{P}\left(\mathrm{X}=\mathrm{x}_{\mathrm{i}}\right)$.

When $r=1$, from (1) we get
$$
\mu_{1}^{\prime}=\mathrm{E}(\mathrm{X})=\sum \mathrm{xf}(\mathrm{x})=\text { Mean }
$$
and from (2),
$$
\begin{aligned}
& \mu_{1}=\mathrm{E}(\mathrm{X}-\mu)=\mathrm{E}(\mathrm{X})-\mathrm{E}(\mu) \\
\Rightarrow \quad & \mu_{1}=\mu-\mu=0 \quad[\because \mathrm{E}(\mu)=\mu, \mathrm{E}(\mathrm{X})=\mu]
\end{aligned}
$$

Moments about the mean $\left(\mu_{\mathrm{r}}\right)$ in terms of moments about any point ' $a$ ': Let $\mathrm{X}$ be a r.
v. with
$$
\mathrm{E}(\mathrm{X})=\overline{\mathrm{X}}=\mu ;
$$
then $\mu_{\mathrm{r}}=\mathrm{E}[\mathrm{X}-\mu]^{\mathrm{r}}=\mathrm{E}[\mathrm{X}-\mathrm{a}-(\mu-\mathrm{a})]^{\mathrm{r}}$
$$
\Rightarrow \quad \mu_{\mathrm{r}}=\mathrm{E}(\mathrm{X}-\mathrm{a}-\mathrm{d})^{\mathrm{r}}, \quad \text{where } \mathrm{d}=\mu-\mathrm{a}
$$
$$
\begin{array}{ll}
\text { or } & \mu_{\mathrm{r}}=\mathrm{E}\left[(\mathrm{X}-\mathrm{a})^{\mathrm{r}}-{ }^{\mathrm{r}} \mathrm{C}_{1} \mathrm{~d}(\mathrm{X}-\mathrm{a})^{\mathrm{r}-1}+\ldots+{ }^{\mathrm{r}} \mathrm{C}_{\mathrm{r}-1}(-1)^{\mathrm{r}-1} \mathrm{~d}^{\mathrm{r}-1}(\mathrm{X}-\mathrm{a})+(-1)^{\mathrm{r}} \mathrm{d}^{\mathrm{r}}\right] \\
\Rightarrow & \mu_{\mathrm{r}}=\mu_{\mathrm{r}}^{\prime}(\mathrm{a})-{ }^{\mathrm{r}} \mathrm{C}_{1} \mathrm{~d} \mu_{\mathrm{r}-1}^{\prime}(\mathrm{a})+\ldots+{ }^{\mathrm{r}} \mathrm{C}_{\mathrm{r}-1}(-1)^{\mathrm{r}-1} \mathrm{~d}^{\mathrm{r}-1} \mathrm{E}(\mathrm{X}-\mathrm{a})+(-1)^{\mathrm{r}} \mathrm{d}^{\mathrm{r}}
\end{array}
$$
In particular,
$$
\mu_{2}=\mu_{2}^{\prime}(\mathrm{a})-2 \mathrm{~d} \mu_{1}^{\prime}(\mathrm{a})+\mathrm{d}^{2}
$$
$$
\Rightarrow \quad \mu_{2}=\mu_{2}^{\prime}-\mu_{1}^{\prime 2} \quad\left[\because \mathrm{d}=\mathrm{E}(\mathrm{X})-\mathrm{a}=\mathrm{E}(\mathrm{X}-\mathrm{a})=\mu_{1}^{\prime}(\mathrm{a}), \text{ with } \mathrm{a}=0 \text{ here}\right]
$$
This $\mu_{2}$ is called the Variance. Similarly,
$$
\mu_{3}=\mu_{3}^{\prime}-3 \mu_{2}^{\prime} \mu_{1}^{\prime}+2 \mu_{1}^{\prime 3}
$$
and
$$
\mu_{4}=\mu_{4}^{\prime}-4 \mu_{3}^{\prime} \mu_{1}^{\prime}+6 \mu_{2}^{\prime} \mu_{1}^{\prime 2}-3 \mu_{1}^{\prime 4}
$$
If $\mathrm{X}$ is a continuous r.v.,
$$
\begin{aligned}
& \mu_{r}^{\prime}(\mathrm{a})=\int_{-\infty}^{\infty}(x-a)^{r} f(x) d x \\
& \mu_{r}=\int_{-\infty}^{\infty}(x-\mu)^{r} f(x) d x
\end{aligned}
$$
and $\mathrm{E}(\mathrm{X})=\mu_{1}^{\prime}=$ Mean. Also $\mu_{2}=E(X-\mu)^{2}$. This $E(X-\mu)^{2}$ is called the Variance and is denoted by $\sigma^{2}$.
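As a quick numerical cross-check of the relations above, the sketch below (the discrete distribution is arbitrary, chosen only for illustration, not taken from the text) compares $\mu_2, \mu_3, \mu_4$ computed from raw moments with the central moments computed directly.

```python
# Numerical check of the central-moment relations derived above.
# The distribution here is illustrative only.

def raw_moment(xs, ps, r):
    """r-th raw moment about the origin: E[X^r] = sum x^r f(x)."""
    return sum(x**r * p for x, p in zip(xs, ps))

def central_moment(xs, ps, r):
    """r-th central moment: E[(X - mu)^r] with mu = E(X)."""
    mu = raw_moment(xs, ps, 1)
    return sum((x - mu)**r * p for x, p in zip(xs, ps))

xs = [0, 1, 2, 3]
ps = [0.1, 0.2, 0.3, 0.4]          # probabilities sum to 1

m1, m2, m3, m4 = (raw_moment(xs, ps, r) for r in (1, 2, 3, 4))

mu2 = m2 - m1**2                                   # variance
mu3 = m3 - 3*m2*m1 + 2*m1**3
mu4 = m4 - 4*m3*m1 + 6*m2*m1**2 - 3*m1**4

assert abs(mu2 - central_moment(xs, ps, 2)) < 1e-12
assert abs(mu3 - central_moment(xs, ps, 3)) < 1e-12
assert abs(mu4 - central_moment(xs, ps, 4)) < 1e-12
```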
Covariance between $X$ and $Y$:
$$
\begin{aligned}
\operatorname{Cov}(\mathrm{X}, \mathrm{Y}) & =\mathrm{E}[(\mathrm{X}-\overline{\mathrm{X}})(\mathrm{Y}-\overline{\mathrm{Y}})] \\
& =\mathrm{E}[(\mathrm{X}-\mathrm{E}(\mathrm{X}))(\mathrm{Y}-\mathrm{E}(\mathrm{Y}))]
\end{aligned}
$$

Example: - Find $\mathrm{E}(\mathrm{X}), \mathrm{E}\left(\mathrm{X}^{2}\right), \mathrm{E}(\mathrm{X}-\mathrm{E}(\mathrm{X}))^{2}$ from the following:
$$
\begin{array}{lccccc}
\mathrm{X}: & 8 & 12 & 16 & 20 & 24 \\
\mathrm{f}(\mathrm{x}): & \frac{1}{8} & \frac{1}{6} & \frac{3}{8} & \frac{1}{4} & \frac{1}{12}
\end{array}
$$
Solution: - $\mathrm{E}(\mathrm{X})=\sum \mathrm{xf}(\mathrm{x})$
$$
=8 \times \frac{1}{8}+12 \times \frac{1}{6}+\ldots+24 \times \frac{1}{12}
$$
$\Rightarrow \quad \mathrm{E}(\mathrm{X})=16=$ Mean

and $\mathrm{E}\left(\mathrm{X}^{2}\right)=\sum \mathrm{x}^{2} \mathrm{f}(\mathrm{x})=64 \times \frac{1}{8}+144 \times \frac{1}{6}+256 \times \frac{3}{8}+\ldots+576 \times \frac{1}{12}$

$\Rightarrow \quad \mathrm{E}\left(\mathrm{X}^{2}\right)=276$
$$
\mu_{2}=\mathrm{E}[\mathrm{X}-\mathrm{E}(\mathrm{X})]^{2}=\mathrm{E}\left(\mathrm{X}^{2}\right)-[\mathrm{E}(\mathrm{X})]^{2} \quad\left[\because \mu_{2}=\mu_{2}^{\prime}-\mu_{1}^{\prime 2}\right]
$$
$\Rightarrow \quad \mu_{2}=276-(16)^{2}=20$

This $\mu_{2}=20$ is the variance.
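The arithmetic of this worked example can be replayed in a few lines (a sketch only; the numbers are those of the table above):

```python
# Re-computing the worked example: E(X), E(X^2) and the variance
# for the tabulated distribution.
xs = [8, 12, 16, 20, 24]
ps = [1/8, 1/6, 3/8, 1/4, 1/12]

assert abs(sum(ps) - 1) < 1e-12                  # f(x) is a valid pmf

mean = sum(x * p for x, p in zip(xs, ps))        # E(X)
ex2  = sum(x * x * p for x, p in zip(xs, ps))    # E(X^2)
var  = ex2 - mean**2                             # mu_2 = mu_2' - mu_1'^2

assert abs(mean - 16) < 1e-12
assert abs(ex2 - 276) < 1e-12
assert abs(var - 20) < 1e-12
```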
Example: - Find $\mathrm{E}(\mathrm{X}), \mathrm{E}\left(\mathrm{X}^{2}\right), \mathrm{E}(\mathrm{X}-\mathrm{E}(\mathrm{X}))^{2}$ for
$$
\begin{array}{cccccccccccc}
\mathrm{X}: & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\
\mathrm{f}(\mathrm{x}): & \frac{1}{36} & \frac{2}{36} & \frac{3}{36} & \frac{4}{36} & \frac{5}{36} & \frac{6}{36} & \frac{5}{36} & \frac{4}{36} & \frac{3}{36} & \frac{2}{36} & \frac{1}{36}
\end{array}
$$
Solution: - $\mathrm{E}(\mathrm{X})=\sum \mathrm{xf}(\mathrm{x})=\frac{1}{36}[2+6+12+20+30+42+40+36+30+22+12]=7=$ Mean
$$
\begin{aligned}
& \mathrm{E}\left(\mathrm{X}^{2}\right)=\sum \mathrm{x}^{2} \mathrm{f}(\mathrm{x})=\frac{1}{36}[4+18+48+100+180+294+320+324+300+242+144] \\
& \quad=\frac{1}{36}(1974)=\frac{329}{6} \\
& \mu_{2}=\mathrm{E}[\mathrm{X}-\mathrm{E}(\mathrm{X})]^{2}=\mathrm{E}\left(\mathrm{X}^{2}\right)-[\mathrm{E}(\mathrm{X})]^{2} \\
& =\frac{1974}{36}-49=\frac{329}{6}-49 \\
& =\frac{329-294}{6}=\frac{35}{6}
\end{aligned}
$$
Let $\mathrm{m}$ represent the median; then
$$
\mathrm{P}(\mathrm{X}<\mathrm{m})=\mathrm{P}(\mathrm{X}>\mathrm{m})
$$
In this example, $\mathrm{m}=7$, mode $=7$, and also mean $=7$, so it is a very well-behaved distribution.

Example: - For the distribution
$$
f(x)=\left\{\begin{array}{lll}
\frac{1}{16}(3+x)^{2} & , & -3<x \leq-1 \\
\frac{1}{16}\left(6-2 x^{2}\right) & , & -1<x \leq 1 \\
\frac{1}{16}(3-x)^{2} & , & 1<x \leq 3 \\
0 & , & \text { otherwise }
\end{array}\right.
$$
check whether it represents a probability distribution or not. Also find its mean and variance.
Solution: - First we check that
$$
\int_{-\infty}^{\infty} f(x) d x=1, \quad \text { i.e. } \quad \int_{-3}^{3} f(x) d x=1
$$
Now $\quad \int_{-\infty}^{\infty} f d x=\left(\int_{-\infty}^{-3}+\int_{-3}^{-1}+\int_{-1}^{1}+\int_{1}^{3}+\int_{3}^{\infty}\right) f d x$
$$
\begin{aligned}
\Rightarrow \quad \int_{-\infty}^{\infty} \mathrm{f}(\mathrm{x}) \mathrm{dx} & =\int_{-3}^{-1} \frac{1}{16}(3+\mathrm{x})^{2} \mathrm{dx}+\int_{-1}^{1} \frac{1}{16}\left(6-2 \mathrm{x}^{2}\right) \mathrm{dx}+\int_{1}^{3} \frac{1}{16}(3-\mathrm{x})^{2} \mathrm{dx} \\
& =\frac{1}{16}\left[\frac{(3+\mathrm{x})^{3}}{3}\right]_{-3}^{-1}+\frac{1}{16}\left[6 \mathrm{x}-\frac{2 \mathrm{x}^{3}}{3}\right]_{-1}^{1}+\frac{1}{16}\left[\frac{(3-\mathrm{x})^{3}}{-3}\right]_{1}^{3} \\
& =\frac{1}{48}[8-0]+\frac{1}{16}\left[6-\frac{2}{3}+6-\frac{2}{3}\right]-\frac{1}{48}[0-8] \\
& =\frac{1}{6}+\frac{1}{16}\left(12-\frac{4}{3}\right)+\frac{1}{6} \\
\Rightarrow \quad \int_{-\infty}^{\infty} \mathrm{f}(\mathrm{x}) \mathrm{dx} & =\frac{1}{3}+\frac{1}{16} \cdot \frac{32}{3}=\frac{1}{3}+\frac{2}{3}=1
\end{aligned}
$$
Also Mean $=E(X)=\int_{-\infty}^{\infty} x f(x) d x$

$\Rightarrow \quad \mathrm{E}(\mathrm{X})=\int_{-3}^{3} \mathrm{xf}(\mathrm{x}) \mathrm{dx}$
$$
\begin{gathered}
=\int_{-3}^{-1} \frac{x}{16}(3+x)^{2} d x+\int_{-1}^{1} \frac{x}{16}\left(6-2 x^{2}\right) d x+\int_{1}^{3} \frac{x}{16}(3-x)^{2} d x \\
=\frac{1}{16}\left\{\left[x \frac{(3+x)^{3}}{3}\right]_{-3}^{-1}-\int_{-3}^{-1}(1) \frac{(3+x)^{3}}{3} \mathrm{dx}\right\}+0 \\
+\frac{1}{16}\left\{\left[\frac{x(3-x)^{3}}{-3}\right]_{1}^{3}-\int_{1}^{3}(1) \frac{(3-x)^{3}}{-3} d x\right\}
\end{gathered}
$$
[The integrand of the second integral is an odd function of $\mathrm{x}$, so its value is zero.]
$$
\begin{aligned}
& =\frac{1}{48}[(-1) 8+0]-\frac{1}{48}\left[\frac{(3+\mathrm{x})^{4}}{4}\right]_{-3}^{-1}-\frac{1}{48}[0-(1) 8]+\frac{1}{48}\left[\frac{(3-\mathrm{x})^{4}}{-4}\right]_{1}^{3} \\
& =-\frac{1}{6}-\frac{1}{48} \cdot
\frac{1}{4}(16-0)+\frac{1}{6}-\frac{1}{48} \cdot \frac{1}{4}(0-16) \\
\Rightarrow \quad \mathrm{E}(\mathrm{X}) & =-\frac{1}{48} \cdot \frac{1}{4} \cdot 16+\frac{1}{48} \cdot \frac{1}{4} \cdot 16=-\frac{1}{12}+\frac{1}{12}=0
\end{aligned}
$$
Also Variance $=\mathrm{V}(\mathrm{X})=\mathrm{E}(\mathrm{X}-\mathrm{E}(\mathrm{X}))^{2}=\mathrm{E}\left(\mathrm{X}^{2}\right)-[\mathrm{E}(\mathrm{X})]^{2}$. Now
$$
\begin{aligned}
& E\left(X^{2}\right)=\int_{-3}^{3} x^{2} f(x) d x \\
& =\int_{-3}^{-1} \frac{x^{2}}{16}(3+x)^{2} d x+\int_{-1}^{1} \frac{x^{2}}{16}\left(6-2 x^{2}\right) d x+\int_{1}^{3} \frac{x^{2}}{16}(3-x)^{2} d x \\
& \Rightarrow \quad \mathrm{E}\left(\mathrm{X}^{2}\right)=\int_{-3}^{-1} \frac{\mathrm{x}^{2}}{16}\left(9+\mathrm{x}^{2}+6 \mathrm{x}\right) \mathrm{dx}+\int_{-1}^{1} \frac{1}{16}\left(6 \mathrm{x}^{2}-2 \mathrm{x}^{4}\right) \mathrm{dx}+\int_{1}^{3} \frac{\mathrm{x}^{2}}{16}\left(9+\mathrm{x}^{2}-6 \mathrm{x}\right) \mathrm{dx} \\
& =\frac{1}{16}\left[\frac{9 x^{3}}{3}+\frac{x^{5}}{5}+\frac{6 x^{4}}{4}\right]_{-3}^{-1}+\frac{1}{16}\left[\frac{6 x^{3}}{3}-\frac{2 x^{5}}{5}\right]_{-1}^{1}+\frac{1}{16}\left[\frac{9 x^{3}}{3}+\frac{x^{5}}{5}-\frac{6 x^{4}}{4}\right]_{1}^{3} \\
& =\frac{1}{16}\left(-3-\frac{1}{5}+\frac{3}{2}+81+\frac{243}{5}-\frac{3}{2} \times 81\right)+\frac{1}{16}\left(2-\frac{2}{5}+2-\frac{2}{5}\right) \\
& \quad+\frac{1}{16}\left(81+\frac{243}{5}-\frac{3}{2} \times 81-3-\frac{1}{5}+\frac{3}{2}\right) \\
& =2 \cdot \frac{1}{16}\left[-3-\frac{1}{5}+\frac{3}{2}+81+\frac{243}{5}-\frac{243}{2}\right]+\frac{1}{16}\left(4-\frac{4}{5}\right) \\
& =\frac{1}{8}\left[78-\frac{240}{2}+\frac{242}{5}\right]+\frac{1}{16} \cdot \frac{16}{5} \\
& =\frac{1}{8}\left[-42+\frac{242}{5}\right]+\frac{1}{5}
\end{aligned}
$$
$\Rightarrow \quad \mathrm{E}\left(\mathrm{X}^{2}\right)=\frac{1}{8} \cdot \frac{32}{5}+\frac{1}{5}=\frac{4}{5}+\frac{1}{5}=1$

Hence Variance $=\mathrm{E}\left(\mathrm{X}^{2}\right)-[\mathrm{E}(\mathrm{X})]^{2}=1-0^{2}=1$.

Example: - Show that the value of $\operatorname{cov}(\mathrm{X}, \mathrm{Y})$ for the probability distribution
$$
f(x, y)=\left\{\begin{array}{ll}
\frac{1}{9} e^{-y / 3}, & 0 \leq x \leq y<\infty \\
0, & \text { otherwise }
\end{array}\right.
$$
is 9 .

Solution: - We know that $\operatorname{Cov}(X, Y)=E(X Y)-E(X) E(Y)$. Thus we have to find $\mathrm{E}(\mathrm{X}), \mathrm{E}(\mathrm{Y})$ and $\mathrm{E}(\mathrm{XY})$. But
$$
E(X)=\int_{-\infty}^{\infty} x g(x) d x, \qquad E(Y)=\int_{-\infty}^{\infty} y h(y) d y,
$$
where $\mathrm{g}$ and $\mathrm{h}$ are the marginal densities, so first we have to find $\mathrm{g}(\mathrm{x})$ and $\mathrm{h}(\mathrm{y})$.
Now the marginal density function of $\mathrm{X}$ is given by
$$
\begin{aligned}
g(x) & =\int_{-\infty}^{\infty} f(x, y) d y=\int_{-\infty}^{x} f(x, y) d y+\int_{x}^{\infty} f(x, y) d y \\
& =\int_{x}^{\infty} \frac{1}{9} e^{-y / 3} d y=\left[\frac{1}{9} e^{-y / 3}(-3)\right]_{x}^{\infty} \\
\Rightarrow \quad g(x) & =\frac{1}{3} e^{-x / 3}, \quad 0 \leq x<\infty
\end{aligned}
$$
Similarly the marginal density function of $\mathrm{Y}$ is
$$
\begin{aligned}
\mathrm{h}(\mathrm{y}) & =\int_{0}^{\mathrm{y}} \frac{1}{9} \mathrm{e}^{-\mathrm{y} / 3} \mathrm{dx}=\frac{\mathrm{y}}{9} \mathrm{e}^{-\mathrm{y} / 3}, \quad 0 \leq \mathrm{y}<\infty \\
\therefore \quad \mathrm{E}(\mathrm{X}) & =\int_{0}^{\infty} \mathrm{x} \mathrm{g}(\mathrm{x}) \mathrm{dx} \\
& =\int_{0}^{\infty} \mathrm{x} \cdot \frac{1}{3} \mathrm{e}^{-\mathrm{x} / 3} \mathrm{dx}=\frac{1}{3} \int_{0}^{\infty} \mathrm{x} \mathrm{e}^{-\mathrm{x} / 3} \mathrm{dx}
\end{aligned}
$$
$$
\begin{aligned}
& =\frac{1}{3}\left[\left[\mathrm{x} \cdot \mathrm{e}^{-\mathrm{x} / 3}(-3)\right]_{0}^{\infty}+3 \int_{0}^{\infty} \mathrm{e}^{-\mathrm{x} / 3} \mathrm{dx}\right] \\
& =\frac{1}{3}\left[0+3\left[\mathrm{e}^{-\mathrm{x} / 3}(-3)\right]_{0}^{\infty}\right]=3 \\
& E(Y)=\int_{0}^{\infty} y h(y) d y=\frac{1}{9} \int_{0}^{\infty} y^{2} e^{-y / 3} d y \\
& =\frac{1}{9}\left[\left\{\mathrm{y}^{2} \mathrm{e}^{-\mathrm{y} / 3}(-3)\right\}_{0}^{\infty}+6 \int_{0}^{\infty} \mathrm{y} \mathrm{e}^{-\mathrm{y} / 3} \mathrm{dy}\right] \\
& =\frac{2}{3}\left[\left[\mathrm{ye}^{-\mathrm{y} / 3}(-3)\right]_{0}^{\infty}+3 \int_{0}^{\infty} \mathrm{e}^{-\mathrm{y} / 3} \mathrm{dy}\right] \\
& =\frac{2}{3} \times 3\left[\mathrm{e}^{-\mathrm{y} / 3}(-3)\right]_{0}^{\infty}=2 \times 3=6 \\
& E(X Y)=\int_{x=0}^{\infty} \int_{y=x}^{\infty} x y f(x, y) d y d x \\
& =\int_{x=0}^{\infty} \int_{y=x}^{\infty} x y \cdot \frac{1}{9} e^{-y / 3} d y d x \\
& =\frac{1}{9} \int_{x=0}^{\infty} x\left[\int_{y=x}^{\infty} y e^{-y / 3} d y\right] d x \\
& =\frac{1}{9}
\int_{x=0}^{\infty} x\left\{\left[y e^{-y / 3}(-3)\right]_{x}^{\infty}+3 \int_{x}^{\infty} e^{-y / 3} d y\right\} d x \\
& =\frac{1}{9} \int_{x=0}^{\infty} x\left(3 x e^{-x / 3}+3(3) e^{-x / 3}\right) d x \\
& =\frac{1}{9} \int_{0}^{\infty}\left[3 x^{2} e^{-x / 3}+9 x e^{-x / 3}\right] d x \\
& =\frac{1}{3}\left[\int_{0}^{\infty} x^{2} e^{-x / 3} d x+3 \int_{0}^{\infty} x e^{-x / 3} d x\right] \\
& =\frac{1}{3}[54+3 \times 9]
\end{aligned}
$$
$$
\begin{aligned}
& =\frac{54+27}{3}=\frac{81}{3}=27 \\
& \therefore \quad \operatorname{Cov}(\mathrm{X}, \mathrm{Y})=\mathrm{E}(\mathrm{XY})-\mathrm{E}(\mathrm{X}) \mathrm{E}(\mathrm{Y}) \\
& =27-3 \times 6=27-18=9
\end{aligned}
$$
Hence proved.

Example: - If $\mathrm{f}(\mathrm{x})=\frac{1}{\mathrm{a}}\left[1-\frac{|\mathrm{x}-\mathrm{b}|}{\mathrm{a}}\right],|\mathrm{x}-\mathrm{b}|<\mathrm{a}$, find the mean and variance.

Solution: - $\mathrm{f}(\mathrm{x})=\frac{1}{\mathrm{a}}\left[1-\frac{|\mathrm{x}-\mathrm{b}|}{\mathrm{a}}\right]$. Now
$$
|x-b|<a \Rightarrow-a<x-b<a \Rightarrow \mathrm{b}-\mathrm{a}<\mathrm{x}<\mathrm{b}+\mathrm{a}
$$
$\therefore \quad \mathrm{f}(\mathrm{x})=\frac{1}{\mathrm{a}}\left[1+\frac{(\mathrm{x}-\mathrm{b})}{\mathrm{a}}\right]$ for $\mathrm{b}-\mathrm{a}<\mathrm{x}<\mathrm{b}$, since $|\mathrm{x}-\mathrm{b}|=-(\mathrm{x}-\mathrm{b})$ on this interval. Similarly
$$
f(x)=\frac{1}{a}\left[1-\frac{(x-b)}{a}\right] \text { for } b<x<b+a, \text { as }|x-b|=x-b \text { there. }
$$
$$
\begin{aligned}
\therefore \quad \text { Mean }= & E(X)=\int_{-\infty}^{\infty} x f(x) d x \\
= & \int_{b-a}^{b} x \cdot \frac{1}{a}\left[1+\frac{(x-b)}{a}\right] d x+\int_{b}^{b+a} x \cdot \frac{1}{a}\left[1-\frac{(x-b)}{a}\right] d x \\
= & \frac{1}{a} \int_{b-a}^{b}\left(x+\frac{x^{2}-b x}{a}\right) d x+\frac{1}{a}
\int_{b}^{b+a}\left(x-\frac{x^{2}}{a}+\frac{b x}{a}\right) d x \\
= & \frac{1}{a}\left[\frac{x^{2}}{2}+\frac{x^{3}}{3 a}-\frac{b x^{2}}{2 a}\right]_{b-a}^{b}+\frac{1}{a}\left[\frac{x^{2}}{2}-\frac{x^{3}}{3 a}+\frac{b x^{2}}{2 a}\right]_{b}^{b+a}
\end{aligned}
$$
Evaluation is easiest with the substitution $\mathrm{t}=\mathrm{x}-\mathrm{b}$, which turns the two integrals into
$$
\frac{1}{a} \int_{-a}^{0}(t+b)\left(1+\frac{t}{a}\right) d t+\frac{1}{a} \int_{0}^{a}(t+b)\left(1-\frac{t}{a}\right) d t=\frac{1}{a} \int_{-a}^{a}(t+b)\left(1-\frac{|t|}{a}\right) d t ;
$$
the terms in $\mathrm{t}$ cancel because the integrand $t\left(1-\frac{|t|}{a}\right)$ is odd, while the terms in $\mathrm{b}$ give $b \int_{-\infty}^{\infty} f(x) d x=b$. Hence
$$
\text { Mean }=\mathrm{E}(\mathrm{X})=\mathrm{b}
$$
Variance $=\mathrm{V}(\mathrm{X})=\mathrm{E}[\mathrm{X}-\mathrm{E}(\mathrm{X})]^{2}$
$$
\begin{aligned}
& =E(X-b)^{2}=\int_{-\infty}^{\infty}(x-b)^{2} f(x) d x \\
& =\int_{b-a}^{b}(x-b)^{2} \cdot \frac{1}{a}\left[1+\frac{(x-b)}{a}\right] d x+\int_{b}^{b+a}(x-b)^{2} \cdot \frac{1}{a}\left[1-\frac{(x-b)}{a}\right] d x \\
& =\frac{1}{a} \int_{b-a}^{b}\left[(x-b)^{2}+\frac{(x-b)^{3}}{a}\right] d x+\frac{1}{a} \int_{b}^{b+a}\left[(x-b)^{2}-\frac{(x-b)^{3}}{a}\right] d x \\
& =\frac{1}{a}\left[\frac{(x-b)^{3}}{3}+\frac{(x-b)^{4}}{4 a}\right]_{b-a}^{b}+\frac{1}{a}\left[\frac{(x-b)^{3}}{3}-\frac{(x-b)^{4}}{4 a}\right]_{b}^{b+a} \\
& =\frac{1}{a}\left(\frac{a^{3}}{3}-\frac{a^{4}}{4 a}\right)+\frac{1}{a}\left(\frac{a^{3}}{3}-\frac{a^{4}}{4 a}\right) \\
& =\frac{2 a^{3}}{3 a}-\frac{a^{4}}{2 a^{2}}=\frac{2 a^{2}}{3}-\frac{a^{2}}{2}=\frac{a^{2}}{6}
\end{aligned}
$$

Example: - Find the mean and variance for the following
distribution
$$
f(x)= \begin{cases}\frac{(x-1)^{3}}{4}, & 1 \leq x \leq 3 \\ 0, & \text { otherwise }\end{cases}
$$
Solution: - First check that $\int_{-\infty}^{\infty} f(x) d x=\int_{1}^{3} \frac{(x-1)^{3}}{4} d x=\left[\frac{(x-1)^{4}}{16}\right]_{1}^{3}=1$.

Now
$$
\text { Mean }=E(X)=\int_{-\infty}^{\infty} x f(x) d x=\int_{1}^{3} \frac{(x-1)^{3}}{4} \cdot x d x
=\frac{1}{4} \int_{1}^{3}\left(x^{4}-3 x^{3}+3 x^{2}-x\right) d x=\frac{13}{5}
$$
$$
\begin{aligned}
\text { Variance }=V(X) & =E[X-E(X)]^{2}=E\left(X-\frac{13}{5}\right)^{2} \\
& =\int_{-\infty}^{\infty}\left(x-\frac{13}{5}\right)^{2} f(x) d x \\
& =\frac{1}{4} \int_{1}^{3}\left(x-\frac{13}{5}\right)^{2}(x-1)^{3} \mathrm{dx} \\
& =\frac{1}{4}\left\{\left[\left(x-\frac{13}{5}\right)^{2} \frac{(x-1)^{4}}{4}\right]_{1}^{3}-\frac{1}{2} \int_{1}^{3}\left(x-\frac{13}{5}\right)(x-1)^{4} d x\right\} \\
& =\frac{1}{4}\left\{4 \times \frac{4}{25}-\frac{1}{2}\left[\left(x-\frac{13}{5}\right) \frac{(x-1)^{5}}{5}\right]_{1}^{3}+\frac{1}{2} \times \frac{1}{5}\left[\frac{(x-1)^{6}}{6}\right]_{1}^{3}\right\} \\
& =\frac{4}{25}-\frac{4}{5} \times \frac{2}{5}+\frac{4}{15} \\
& =\frac{4}{25}-\frac{8}{25}+\frac{4}{15}=-\frac{4}{25}+\frac{4}{15}=\frac{8}{75}
\end{aligned}
$$

Theorem: - If $\mathrm{X}$ is a r.v., then $\mathrm{V}(\mathrm{aX}+\mathrm{b})=\mathrm{a}^{2} \mathrm{~V}(\mathrm{X})$, where $\mathrm{a}$ and $\mathrm{b}$ are constants.

Proof: - Let $Y=a X+b$; then
$$
\mathrm{E}(\mathrm{Y})=\mathrm{aE}(\mathrm{X})+\mathrm{b}
$$
$$
\text { Then } \begin{aligned} V(a X+b) & =V(Y)=E[Y-E(Y)]^{2} \\
& =E[a X+b-a E(X)-b]^{2}=a^{2} E[X-E(X)]^{2} \\
& =a^{2} E\left(X^{2}\right)+a^{2}[E(X)]^{2}-2 a^{2}[E(X)]^{2} \\
& =a^{2} E\left(X^{2}\right)-a^{2}[E(X)]^{2} \\
& =a^{2}\left[E\left(X^{2}\right)-[E(X)]^{2}\right]=a^{2} V(X)
\end{aligned}
$$
Corollaries: (i) If $b=0$, then $V(a X)=a^{2} V(X)$. (ii) If $\mathrm{a}=0$, then $\mathrm{V}(\mathrm{b})=0$. (iii) If $\mathrm{a}=1$, then $\mathrm{V}(\mathrm{X}+\mathrm{b})=\mathrm{V}(\mathrm{X})$.
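The theorem can be sanity-checked on any list of numbers, since the (population) variance of a sample obeys the same identity. A minimal sketch, with illustrative data not taken from the text:

```python
# Check V(aX + b) = a^2 V(X) on an arbitrary list of values.
def variance(values):
    """Population variance: E[X - E(X)]^2 over the sample."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

xs = [2.0, 3.5, 7.0, 1.5, 9.0]     # illustrative data
a, b = 3.0, -5.0                   # illustrative constants

vx = variance(xs)
vy = variance([a * x + b for x in xs])

# Shifting by b has no effect; scaling by a multiplies the variance by a^2.
assert abs(vy - a * a * vx) < 1e-9
```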
Theorem: - Prove that $\mathrm{V}(\mathrm{X} \pm \mathrm{Y})=\mathrm{V}(\mathrm{X})+\mathrm{V}(\mathrm{Y}) \pm 2 \operatorname{Cov}(\mathrm{X}, \mathrm{Y})$, and that $\mathrm{V}(\mathrm{X} \pm \mathrm{Y})=\mathrm{V}(\mathrm{X})+\mathrm{V}(\mathrm{Y})$ provided $\mathrm{X}$ and $\mathrm{Y}$ are independent r.v.
$$
\text { Proof: } \begin{aligned} \mathrm{V}(\mathrm{X}+\mathrm{Y}) & =\mathrm{E}[(\mathrm{X}+\mathrm{Y})-\mathrm{E}(\mathrm{X}+\mathrm{Y})]^{2} \\ & =\mathrm{E}[(\mathrm{X}+\mathrm{Y})-\mathrm{E}(\mathrm{X})-\mathrm{E}(\mathrm{Y})]^{2} \\ & =\mathrm{E}[\{\mathrm{X}-\mathrm{E}(\mathrm{X})\}+\{\mathrm{Y}-\mathrm{E}(\mathrm{Y})\}]^{2} \\ & =\mathrm{E}[\mathrm{X}-\mathrm{E}(\mathrm{X})]^{2}+\mathrm{E}[\mathrm{Y}-\mathrm{E}(\mathrm{Y})]^{2}+2 \mathrm{E}[\{\mathrm{X}-\mathrm{E}(\mathrm{X})\}\{\mathrm{Y}-\mathrm{E}(\mathrm{Y})\}] \\ & =\mathrm{V}(\mathrm{X})+\mathrm{V}(\mathrm{Y})+2 \operatorname{cov}(\mathrm{X}, \mathrm{Y}) \end{aligned}
$$
Similarly $\mathrm{V}(\mathrm{X}-\mathrm{Y})=\mathrm{V}(\mathrm{X})+\mathrm{V}(\mathrm{Y})-2 \operatorname{cov}(\mathrm{X}, \mathrm{Y})$, so
$$
\mathrm{V}(\mathrm{X} \pm \mathrm{Y})=\mathrm{V}(\mathrm{X})+\mathrm{V}(\mathrm{Y}) \pm 2 \operatorname{cov}(\mathrm{X}, \mathrm{Y})
$$
If $\mathrm{X}$ and $\mathrm{Y}$ are independent, then $\operatorname{cov}(\mathrm{X}, \mathrm{Y})=0$
$$
\begin{aligned} \therefore \quad \mathrm{V}(\mathrm{X} \pm \mathrm{Y}) & =\mathrm{V}(\mathrm{X})+\mathrm{V}(\mathrm{Y}) \pm 2 \cdot 0 \\ & =\mathrm{V}(\mathrm{X})+\mathrm{V}(\mathrm{Y}) .
\end{aligned}
$$
Covariance: - If $\mathrm{X}$ and $\mathrm{Y}$ are two r.v., then the covariance between them is defined as
$$
\begin{aligned}
\operatorname{Cov}(\mathrm{X}, \mathrm{Y}) & =\mathrm{E}[\{\mathrm{X}-\mathrm{E}(\mathrm{X})\}\{\mathrm{Y}-\mathrm{E}(\mathrm{Y})\}] \\
& =\mathrm{E}[\mathrm{XY}-\mathrm{XE}(\mathrm{Y})-\mathrm{YE}(\mathrm{X})+\mathrm{E}(\mathrm{X}) \mathrm{E}(\mathrm{Y})] \\
& =\mathrm{E}(\mathrm{XY})-\mathrm{E}(\mathrm{X}) \mathrm{E}(\mathrm{Y})-\mathrm{E}(\mathrm{Y}) \mathrm{E}(\mathrm{X})+\mathrm{E}(\mathrm{X}) \mathrm{E}(\mathrm{Y}) \\
& =\mathrm{E}(\mathrm{XY})-\mathrm{E}(\mathrm{X}) \mathrm{E}(\mathrm{Y})
\end{aligned}
$$
We can also express it as
$$
\begin{aligned}
\operatorname{Cov}(X, Y) & =E[\{X-E(X)\}\{Y-E(Y)\}] \\
& =\sum_{i} \sum_{j}\left(x_{i}-\bar{X}\right)\left(y_{j}-\bar{Y}\right) f\left(x_{i}, y_{j}\right) \quad \text {(discrete case)}
\end{aligned}
$$
$$
\begin{aligned}
\operatorname{Cov}(X, Y) & =E[X Y-\bar{X} Y-X \bar{Y}+\bar{X} \bar{Y}] \\
& =E(X Y)-\bar{X} E(Y)-\bar{Y} E(X)+\bar{X} \bar{Y} \\
& =E(X Y)-\bar{X} \bar{Y}-\bar{Y} \bar{X}+\bar{X} \bar{Y} \\
& =E(X Y)-\bar{X} \bar{Y}
\end{aligned}
$$
If $\mathrm{X}$ and $\mathrm{Y}$ are independent r.v., then
$$
\mathrm{E}(\mathrm{XY})=\mathrm{E}(\mathrm{X}) \mathrm{E}(\mathrm{Y}), \quad \text{so} \quad \operatorname{Cov}(\mathrm{X}, \mathrm{Y})=\mathrm{E}(\mathrm{X}) \mathrm{E}(\mathrm{Y})-\overline{\mathrm{X}} \overline{\mathrm{Y}}=\overline{\mathrm{X}} \overline{\mathrm{Y}}-\overline{\mathrm{X}} \overline{\mathrm{Y}}=0
$$
$\Rightarrow \quad \operatorname{Cov}(\mathrm{X}, \mathrm{Y})=0$ if $\mathrm{X}$ and $\mathrm{Y}$ are independent.
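A short numerical check of $\operatorname{Cov}(X, Y)=E(XY)-E(X)E(Y)$ and of the independence case, using a small joint pmf constructed (for illustration only) as a product of marginals:

```python
import itertools

# Joint pmf of independent X and Y: f(x, y) = P(X=x) * P(Y=y).
xvals, yvals = [0, 1], [0, 2]
px = {0: 0.4, 1: 0.6}          # illustrative marginal of X
py = {0: 0.3, 2: 0.7}          # illustrative marginal of Y
f = {(x, y): px[x] * py[y] for x, y in itertools.product(xvals, yvals)}

EX  = sum(x * sum(f[(x, y)] for y in yvals) for x in xvals)
EY  = sum(y * sum(f[(x, y)] for x in xvals) for y in yvals)
EXY = sum(x * y * f[(x, y)] for x, y in itertools.product(xvals, yvals))

cov = EXY - EX * EY
assert abs(cov) < 1e-12        # independence => Cov(X, Y) = 0
```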
## Remark

i) $\operatorname{Cov}(\mathrm{aX}, \mathrm{bY})=\mathrm{E}[(\mathrm{aX}-\mathrm{E}(\mathrm{aX}))(\mathrm{bY}-\mathrm{E}(\mathrm{bY}))]$
$$
\begin{aligned}
& =\mathrm{E}[(\mathrm{aX}-\mathrm{aE}(\mathrm{X}))(\mathrm{bY}-\mathrm{bE}(\mathrm{Y}))] \\
& =\mathrm{E}[\mathrm{a}\{\mathrm{X}-\mathrm{E}(\mathrm{X})\} \cdot \mathrm{b}\{\mathrm{Y}-\mathrm{E}(\mathrm{Y})\}] \\
& =\mathrm{ab} \, \mathrm{E}[(\mathrm{X}-\mathrm{E}(\mathrm{X}))(\mathrm{Y}-\mathrm{E}(\mathrm{Y}))] \\
& =\mathrm{ab} \operatorname{Cov}(\mathrm{X}, \mathrm{Y})
\end{aligned}
$$
$\Rightarrow$ Covariance is not independent of a change of scale.

ii) $\operatorname{Cov}(\mathrm{X}+\mathrm{a}, \mathrm{Y}+\mathrm{b})=\mathrm{E}[\{(\mathrm{X}+\mathrm{a})-\mathrm{E}(\mathrm{X}+\mathrm{a})\}\{(\mathrm{Y}+\mathrm{b})-\mathrm{E}(\mathrm{Y}+\mathrm{b})\}]$
$$
\begin{aligned}
& =\mathrm{E}[\{\mathrm{X}+\mathrm{a}-\mathrm{E}(\mathrm{X})-\mathrm{a}\}\{\mathrm{Y}+\mathrm{b}-\mathrm{E}(\mathrm{Y})-\mathrm{b}\}] \\
& =\mathrm{E}[\{\mathrm{X}-\mathrm{E}(\mathrm{X})\}\{\mathrm{Y}-\mathrm{E}(\mathrm{Y})\}] \\
& =\operatorname{Cov}(\mathrm{X}, \mathrm{Y})
\end{aligned}
$$
Thus $\operatorname{Cov}(\mathrm{X}, \mathrm{Y})$ is independent of a change of origin but not of a change of scale.

## MEAN DEVIATION FOR CONTINUOUS CASE

$$
E[|X-a|]=\int_{-\infty}^{\infty}|x-a| f(x) d x
$$
Variance $=\sigma_{x}^{2}=E[X-E(X)]^{2}=\int_{-\infty}^{\infty}(x-\bar{X})^{2} f(x) d x$

## Absolute moment

Let $\mathrm{X}$ be a r.v. with p.d.f. $\mathrm{f}(\mathrm{x})$; then its $\mathrm{r}^{\text {th }}$ absolute moment about any point $a$ is given by
$$
E\left[|X-a|^{r}\right]=\int_{-\infty}^{\infty}|x-a|^{r} f(x) d x
$$
For the variance, i.e.
$\mu_{2}=\mathrm{E}[\mathrm{X}-\mathrm{E}(\mathrm{X})]^{2}$,
$$
\begin{aligned}
& =E\left[X^{2}-2 X E(X)+E^{2}(X)\right] \\
& =E\left(X^{2}\right)-2 E(X) E(X)+E^{2}(X) \quad[\because E(X) \text { is a constant }] \\
& =E\left(X^{2}\right)-2[E(X)]^{2}+E^{2}(X) \\
& =E\left(X^{2}\right)-[E(X)]^{2} \\
\mu_{2} & =\mu_{2}^{\prime}-\mu_{1}^{\prime 2}
\end{aligned}
$$

Effect of change of origin and scale on moments: Let
$$
\mathrm{U}=\frac{\mathrm{X}-\mathrm{a}}{\mathrm{h}}
$$
Now $E[U]=\bar{U}=E\left(\frac{X-a}{h}\right)=\frac{1}{h}[E(X)-E(a)]$
$$
\overline{\mathrm{U}}=\frac{1}{\mathrm{~h}}(\overline{\mathrm{X}}-\mathrm{a}) \Rightarrow \overline{\mathrm{X}}=\mathrm{a}+\mathrm{h} \overline{\mathrm{U}}
$$
Then $\mu_{\mathrm{r}}=\mathrm{E}[\mathrm{X}-\overline{\mathrm{X}}]^{\mathrm{r}}=\mathrm{E}[\mathrm{a}+\mathrm{hU}-\mathrm{a}-\mathrm{h} \overline{\mathrm{U}}]^{\mathrm{r}}$
$$
=E\left[h^{r}(U-\bar{U})^{r}\right]=h^{r} E[U-\bar{U}]^{r}
$$
where $E[U-\bar{U}]^{r}$ is the $r^{\text {th }}$ moment of $U$ about its mean.

## MOMENT GENERATING FUNCTION (M.G.F)

The M.G.F of a r.v. $\mathrm{X}$ with p.d.f. $\mathrm{f}(\mathrm{x})$ is given by
$$
\mathrm{M}_{\mathrm{X}}(\mathrm{t})=\mathrm{E}\left[\mathrm{e}^{\mathrm{tX}}\right]
$$
where $\mathrm{t}$ is a real number. Then $\mathrm{M}_{\mathrm{X}}(\mathrm{t})$ is known as the moment generating function about the origin.
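Since $M_X(t)=E[e^{tX}]$, differentiating it at $t=0$ recovers the raw moments. The sketch below checks this by numerical differentiation for a fair die (an illustrative choice, not from the text), for which $E(X)=3.5$ and $E(X^2)=91/6$.

```python
import math

# pmf of a fair die: P(X = x) = 1/6 for x = 1..6 (illustrative example).
xs = range(1, 7)

def M(t):
    """M_X(t) = E[e^{tX}] = sum over x of e^{tx} f(x)."""
    return sum(math.exp(t * x) / 6 for x in xs)

h = 1e-4
mu1 = (M(h) - M(-h)) / (2 * h)            # central difference ~ M'(0)  = E(X)
mu2 = (M(h) - 2 * M(0) + M(-h)) / h**2    # second difference  ~ M''(0) = E(X^2)

assert abs(mu1 - 3.5) < 1e-6       # E(X)   = 3.5
assert abs(mu2 - 91 / 6) < 1e-4    # E(X^2) = 91/6
```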
Now, the M.G.F about the mean is given by
$$
\mathrm{M}_{\mathrm{X}-\mu}(\mathrm{t})=\mathrm{E}\left[\mathrm{e}^{\mathrm{t}(\mathrm{X}-\mu)}\right]
$$
If $\mathrm{X}$ is a discrete r.v., the M.G.F about the origin is given by
$$
M_{X}(t)=E\left[e^{t X}\right]=\sum_{x=0}^{\infty} e^{t x} f(x)
$$
or
$$
\mathrm{M}_{\mathrm{X}}(\mathrm{t})=\mathrm{E}\left[1+\frac{\mathrm{tX}}{1 !}+\frac{\mathrm{t}^{2} \mathrm{X}^{2}}{2 !}+\ldots+\frac{\mathrm{t}^{\mathrm{r}} \mathrm{X}^{\mathrm{r}}}{\mathrm{r} !}+\ldots\right]
=1+\frac{t}{1 !} E(X)+\frac{t^{2}}{2 !} E\left(X^{2}\right)+\ldots+\frac{t^{r}}{r !} E\left(X^{r}\right)+\ldots
$$
$$
\mathrm{M}_{\mathrm{X}}(\mathrm{t})=1+\mathrm{t} \mu_{1}^{\prime}+\frac{\mathrm{t}^{2}}{2 !} \mu_{2}^{\prime}+\ldots+\frac{\mathrm{t}^{\mathrm{r}}}{\mathrm{r} !} \mu_{\mathrm{r}}^{\prime}+\ldots
$$
where $\mu_{r}^{\prime}$, the coefficient of $\frac{t^{r}}{r !}$ in $M_{X}(t)$, is the $r^{\text {th }}$ moment about the origin. The moments can also be obtained from the M.G.F through the following relations:
$$
\begin{aligned}
& \mu_{r}^{\prime}=\left.\frac{d^{r}}{d t^{r}} M_{X}(t)\right|_{t=0} \\
& \mu_{1}^{\prime}=\left.\frac{d}{d t} M_{X}(t)\right|_{t=0} \\
& \mu_{2}^{\prime}=\left.\frac{d^{2}}{d t^{2}} M_{X}(t)\right|_{t=0} \quad \text { etc. }
\end{aligned}
$$
When $\mathrm{X}$ is a continuous r.v., the MGF about the origin is given by
$$
M_{X}(t)=\int_{-\infty}^{\infty} e^{t x} f(x) d x
$$
and the MGF about the mean is given by
$$
M_{X-\mu}(t)=\int_{-\infty}^{\infty} e^{t(x-\mu)} f(x) d x
$$
and the MGF about any point ' $a$ ' is
$$
M_{X-a}(t)=\int_{-\infty}^{\infty} e^{t(x-a)} f(x) d x
$$
As $M_{X}(t)=\sum_{x} e^{t x} f(x)=\sum_{x} f(x)\left[1+\frac{t x}{1 !}+\frac{t^{2} x^{2}}{2 !}+\ldots+\frac{t^{r} x^{r}}{r !}+\ldots\right]$,
$$
\frac{\mathrm{d}}{\mathrm{dt}} \mathrm{M}_{\mathrm{X}}(\mathrm{t})=\sum_{\mathrm{x}=0}^{\infty} \mathrm{f}(\mathrm{x})\left[\mathrm{x}+\frac{2 \mathrm{t} \mathrm{x}^{2}}{2 !}+\frac{3 \mathrm{t}^{2} \mathrm{x}^{3}}{3 !}+\ldots+\frac{\mathrm{r} \mathrm{t}^{\mathrm{r}-1} \mathrm{x}^{\mathrm{r}}}{\mathrm{r} !}+\ldots\right]
$$
$$
\therefore \quad\left.\frac{\mathrm{d}}{\mathrm{dt}} \mathrm{M}_{\mathrm{X}}(\mathrm{t})\right|_{\mathrm{t}=0}=\sum_{\mathrm{x}=0}^{\infty} \mathrm{xf}(\mathrm{x})=\mu_{1}^{\prime}=1^{\text {st }} \text { moment about the origin. }
$$
Example: - Let $\mathrm{X}$ be a r.v. with p.d.f.
$$
f(x)=\left\{\begin{array}{lll}
x & , & 0 \leq x \leq 1 \\
2-x & , & 1 \leq x \leq 2 \\
0 & , & \text { otherwise }
\end{array}\right.
$$
Find the MGF for this distribution. Also determine the mean and variance.

Solution: - First we check whether the given distribution is a probability distribution, i.e.
whether $\int_{-\infty}^{\infty} f(x) d x=1$ or not. Now
$$
\int_{-\infty}^{\infty} f(x) d x=\left(\int_{-\infty}^{0}+\int_{0}^{1}+\int_{1}^{2}+\int_{2}^{\infty}\right) f(x) d x
$$
$$
\begin{aligned}
&=0+\int_{0}^{1} \mathrm{x} \, \mathrm{dx}+\int_{1}^{2}(2-\mathrm{x}) \mathrm{dx}+0 \\
&=\left[\frac{\mathrm{x}^{2}}{2}\right]_{0}^{1}+\left[2 \mathrm{x}-\frac{\mathrm{x}^{2}}{2}\right]_{1}^{2} \\
&=\frac{1}{2}-0+4-2-2+\frac{1}{2} \\
\Rightarrow \quad & \int_{-\infty}^{\infty} \mathrm{f}(\mathrm{x}) \mathrm{dx}=\frac{1}{2}+\frac{1}{2}=1
\end{aligned}
$$
$\Rightarrow \quad$ The given distribution is a probability distribution.

Mean $=E(X)=\int_{0}^{1} x f(x) d x+\int_{1}^{2} x f(x) d x$
$$
\begin{aligned}
& =\int_{0}^{1} x \cdot x d x+\int_{1}^{2} x(2-x) d x \\
& =\left[\frac{x^{3}}{3}\right]_{0}^{1}+\left[x^{2}-\frac{x^{3}}{3}\right]_{1}^{2} \\
& =\frac{1}{3}+4-\frac{8}{3}-1+\frac{1}{3}=\frac{2}{3}+3-\frac{8}{3}=1
\end{aligned}
$$
$$
\begin{aligned}
\mathrm{E}\left(X^{2}\right) & =\int_{0}^{1} x^{2} f(x) d x+\int_{1}^{2} x^{2} f(x) d x \\
& =\int_{0}^{1} x^{2} \cdot x d x+\int_{1}^{2} x^{2}(2-x) d x \\
& =\left[\frac{x^{4}}{4}\right]_{0}^{1}+\left[\frac{2 x^{3}}{3}-\frac{x^{4}}{4}\right]_{1}^{2} \\
& =\frac{1}{4}+\frac{2}{3}(8)-4-\frac{2}{3}+\frac{1}{4} \\
& =\frac{1}{2}-4+\frac{14}{3}=\frac{3-24+28}{6}=\frac{7}{6}
\end{aligned}
$$
$\therefore \quad$ Variance $=\frac{7}{6}-1^{2}=\frac{1}{6}$

Now $M_{X}(t)=E\left[e^{t x}\right]=\int_{-\infty}^{\infty} e^{t x} f(x) d x$
$$
\begin{aligned}
& =\int_{0}^{1} e^{t x} x d x+\int_{1}^{2}(2-x) e^{t x} d x \\
& =\left[x \frac{e^{t x}}{t}\right]_{0}^{1}-\int_{0}^{1}(1) \frac{e^{t x}}{t} d x+\left[(2-x) \frac{e^{t x}}{t}\right]_{1}^{2}-\int_{1}^{2}(-1) \frac{e^{t x}}{t} d x \\
& =(1) \frac{\mathrm{e}^{\mathrm{t}}}{\mathrm{t}}-\frac{1}{\mathrm{t}^{2}}\left[\mathrm{e}^{\mathrm{tx}}\right]_{0}^{1}+\left[0-1 \cdot
\frac{\mathrm{e}^{\mathrm{t}}}{\mathrm{t}}\right]+\frac{1}{\mathrm{t}^{2}}\left[\mathrm{e}^{\mathrm{tx}}\right]_{1}^{2} \\ & =\frac{\mathrm{e}^{\mathrm{t}}}{\mathrm{t}}-\frac{1}{\mathrm{t}^{2}}\left[\mathrm{e}^{\mathrm{t}}-1\right]-\frac{\mathrm{e}^{\mathrm{t}}}{\mathrm{t}}+\frac{1}{\mathrm{t}^{2}}\left[\mathrm{e}^{2 \mathrm{t}}-\mathrm{e}^{\mathrm{t}}\right] \\ & =-\frac{1}{t^{2}} e^{t}+\frac{1}{t^{2}}+\frac{e^{2 t}}{t^{2}}-\frac{e^{t}}{t^{2}} \\ & =\frac{1}{\mathrm{t}^{2}}+\frac{\mathrm{e}^{2 \mathrm{t}}}{\mathrm{t}^{2}}-\frac{2 \mathrm{e}^{\mathrm{t}}}{\mathrm{t}^{2}}=\frac{1}{\mathrm{t}^{2}}\left[\mathrm{e}^{2 \mathrm{t}}-2 \mathrm{e}^{\mathrm{t}}+1\right] \\ & M_{X}(t)=\left(\frac{e^{t}-1}{t}\right)^{2} \\ & \text { Now } \quad \mu_{1}^{\prime}=\frac{d}{d t} M_{X}(t) / t=0 \\ & {\left[\Theta \mu_{r}^{\prime}=\frac{d^{r}}{d t^{r}} M_{X}(t) / t\right.} \\ & \mu_{2}^{\prime}=\frac{d}{d t} M_{X}(t) / t=0 \end{aligned} $$ then variance, $\mu_{2}=\mu_{2}^{\prime}-\mu_{1}^{\prime 2}$ also $\mathrm{M}_{\mathrm{X}}(\mathrm{t})=\frac{1}{\mathrm{t}^{2}}\left[\left(1+2 \mathrm{t}+\frac{(2 \mathrm{t})^{2}}{2 !}+\frac{(2 \mathrm{t})^{3}}{3 !}+\frac{(2 \mathrm{t})^{4}}{4 !}+\ldots ..\right)-2\left(1+\mathrm{t}+\frac{\mathrm{t}^{2}}{2 !}+\frac{\mathrm{t}^{3}}{3 !}+\ldots.\right)+1\right]$ $$ \begin{aligned} & =\frac{1}{\mathrm{t}^{2}}\left[(4-2) \frac{\mathrm{t}^{2}}{2 !}+(8-2) \frac{\mathrm{t}^{3}}{3 !}+(16-2) \frac{\mathrm{t}^{4}}{4 !}+\ldots \ldots .\right] \\ & =\frac{1}{\mathrm{t}^{2}}\left[\frac{2 \mathrm{t}^{2}}{2}+\frac{6 \mathrm{t}^{3}}{6}+\frac{7 \mathrm{t}^{4}}{12}+\ldots \ldots .\right] \end{aligned} $$ $\mathrm{M}_{\mathrm{X}}(\mathrm{t})=\left[1+\frac{\mathrm{t}}{1 !}+\left(\frac{7}{6}\right) \frac{\mathrm{t}^{2}}{2 !}+\ldots \ldots\right]$ $\mu_{1}^{\prime}=$ coefficient of $\frac{t}{1 !} \rightarrow$ Mean $\mu_{2}^{\prime}=$ coefficient of $\frac{\mathrm{t}^{2}}{2 !}=\frac{7}{6}$ $\therefore \quad$ Variance $=\mu_{2}^{\prime}-\mu_{1}^{\prime 
2}=\frac{7}{6}-1^{2}=\frac{1}{6}$ Property: - (1) $\mathrm{M}_{\mathrm{CX}}(\mathrm{t})=\mathrm{M}_{\mathrm{X}}(\mathrm{Ct})$ Proof: - By definition, $$ \begin{aligned} & \mathrm{M}_{\mathrm{cX}}(\mathrm{t})=\mathrm{E}\left[\mathrm{e}^{\mathrm{tcX}}\right]=\mathrm{E}\left[\mathrm{e}^{(\mathrm{ct}) \mathrm{X}}\right] \\ & \mathrm{M}_{\mathrm{cX}}(\mathrm{t})=\mathrm{M}_{\mathrm{X}}(\mathrm{ct}) \end{aligned} $$ (2) MGF of the sum of a number of independent r. v. is equal to the product of heir respective MGF's i.e. $$ M_{X_{1}+X_{2}+\ldots+x_{n}}(t)=M_{X_{1}}(t) \cdot M_{X_{2}}(t) \ldots \ldots \cdot M_{X_{n}}(t) $$ Proof: - By definition, $$ \begin{aligned} \mathrm{M}_{\mathrm{X}_{1}+\mathrm{X}_{2}+\ldots .+\mathrm{x}_{\mathrm{n}}}(\mathrm{t}) & =\mathrm{E}\left[\mathrm{e}^{\mathrm{t}\left(\mathrm{X}_{1}+\mathrm{X}_{2}+\ldots .+\mathrm{X}_{\mathrm{n}}\right)}\right] \\ & =\mathrm{E}\left[\mathrm{e}^{\mathrm{t} \mathrm{X}_{1}} \cdot \mathrm{e}^{\mathrm{t} \mathrm{X}_{2}} \ldots \ldots . . \mathrm{e}^{\mathrm{t} \mathrm{X}_{\mathrm{n}}}\right] \end{aligned} $$ If $\mathrm{X}_{1}, \mathrm{X}_{2}, \ldots ., \mathrm{X}_{\mathrm{n}}$ are independent r.v., then function $\mathrm{e}^{\mathrm{tx}}, \mathrm{e}^{\mathrm{tx} 2}, \ldots . \mathrm{e}^{\mathrm{tx}}$ are also independent. $$ \begin{aligned} \therefore \quad \mathrm{M}_{\mathrm{X}_{1}+\mathrm{X}_{2}+\ldots . 
\mathrm{x}_{\mathrm{n}}}(\mathrm{t}) & =E\left(e^{t X_{1}}\right) \cdot E\left(e^{t X_{2}}\right) \ldots E\left(e^{t X_{n}}\right) \\
& =M_{X_{1}}(t) \cdot M_{X_{2}}(t) \ldots M_{X_{n}}(t)
\end{aligned}
$$

(3) Effect of change of origin and scale on M.G.F.

Let us transform $X$ to a new variable $U$ by changing both the origin and the scale in $X$ as follows:

$$
U=\frac{X-a}{h} \Rightarrow X=a+h U \quad \text { where } a \& h \text { are constants }
$$

Proof: - If $X$ is a r.v., then

$$
\begin{aligned}
M_{X}(t)=E\left[e^{t X}\right] & =E\left[e^{t(a+h U)}\right] \\
& =E\left[e^{a t} \cdot e^{t h U}\right] \\
& =e^{a t} E\left[e^{(t h) U}\right]
\end{aligned}
$$

where $M_{X}(t)$ is the m.g.f. of $X$ about the origin.
$\Rightarrow \quad M_{X}(t)=e^{a t} M_{U}(t h)$

$\Rightarrow \quad M_{U}(t h)=e^{-a t} M_{X}(t)$

Put $\quad t h=t^{\prime} \quad \Rightarrow \quad t=\frac{t^{\prime}}{h}$

then $M_{U}\left(t^{\prime}\right)=e^{\frac{-a t^{\prime}}{h}} M_{X}\left(\frac{t^{\prime}}{h}\right)=$ M.G.F. of $U$ (about the origin)

Standard normal variate

If $X$ is a r.v., then the variable $Z$ defined by

$$
Z=\frac{X-\mu}{\sigma} \text { is called the standard normal variate }
$$

where $\mu, \sigma$ are the mean and standard deviation, respectively.

Also $E(Z)=0$ and Variance $(Z)=1$:

Now $\quad E(Z)=E\left(\frac{X-\mu}{\sigma}\right)=\frac{1}{\sigma} E(X-\mu)$

$$
E(Z)=\frac{1}{\sigma}[E(X)-\mu]=\frac{1}{\sigma}[\mu-\mu]=0
$$

and $\quad V(Z)=E\left(\frac{X-\mu}{\sigma}\right)^{2}=\frac{1}{\sigma^{2}}\left[E(X-\mu)^{2}\right]$

$\Rightarrow \quad V(Z)=\frac{1}{\sigma^{2}} \cdot \sigma^{2}=1 \quad\left[\because \sigma^{2}=E(X-\mu)^{2}\right]$

If $X \sim N\left(\mu, \sigma^{2}\right)$, then $Z \sim N(0,1)$.

The m.g.f. of the standard variate $Z$ is $M_{Z}(t)=e^{-\mu t / \sigma} M_{X}\left(\frac{t}{\sigma}\right)$

Example: - Let $X$ be a discrete r.v. with

$$
P(X=r)=p q^{r-1}, \quad r=1,2, \ldots
$$

where $p$ is the probability of success in one trial and $q=1-p$ is the probability of failure in one trial (the geometric distribution). Then find the MGF, Mean \& Variance.
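Before working these out analytically, the targets can be sanity-checked by simulation. The sketch below is only an illustration (it assumes NumPy is available; the seed, sample size, and the choice $p=0.3$ are arbitrary and not part of these notes). It draws from $P(X=r)=p q^{r-1}$ and compares the empirical mean, variance, and MGF against the closed forms obtained in the solution that follows.

```python
import numpy as np

# Numerical sanity check for P(X = r) = p * q**(r-1), r = 1, 2, ...
# NumPy's geometric sampler uses exactly this "number of trials until
# the first success" convention, so its support also starts at 1.
rng = np.random.default_rng(0)
p = 0.3
q = 1 - p
samples = rng.geometric(p, size=200_000)

# Empirical mean and variance vs. the closed forms derived in the solution.
print(samples.mean(), "vs", 1 / p)        # Mean = 1/p
print(samples.var(), "vs", q / p**2)      # Variance = q/p^2

# Empirical MGF E[e^{tX}] vs. M_X(t) = p e^t / (1 - q e^t), valid while q e^t < 1.
t = 0.1
print(np.exp(t * samples).mean(), "vs", p * np.exp(t) / (1 - q * np.exp(t)))
```

With a sample this large, each empirical quantity agrees with its closed form to within ordinary sampling error.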
Solution: - $\quad M_{X}(t)=E\left[e^{t X}\right]=\sum_{r=1}^{\infty} e^{t r} p q^{r-1}$

$\Rightarrow \quad M_{X}(t)=\frac{p}{q} \sum_{r=1}^{\infty}\left(q e^{t}\right)^{r}=p e^{t} \sum_{r=1}^{\infty}\left(q e^{t}\right)^{r-1}$

$$
=p e^{t}\left[1+q e^{t}+\left(q e^{t}\right)^{2}+\ldots\right] \quad\left(\text { for } q e^{t}<1\right)
$$

$\Rightarrow \quad M_{X}(t)=\frac{p e^{t}}{1-q e^{t}}$

Now

$$
\begin{aligned}
\mu_{1}^{\prime}=\frac{d}{d t} M_{X}(t) / t=0 & =\frac{\left(1-q e^{t}\right) p e^{t}-p e^{t}\left(-q e^{t}\right)}{\left(1-q e^{t}\right)^{2}} / t=0 \\
& =\frac{p e^{t}-p q e^{2 t}+p q e^{2 t}}{\left(1-q e^{t}\right)^{2}} / t=0=\frac{p e^{t}}{\left(1-q e^{t}\right)^{2}} / t=0 \\
& =\frac{p}{(1-q)^{2}}
\end{aligned}
$$

$\Rightarrow \quad \mu_{1}^{\prime}=\frac{p}{(1-q)^{2}}=\frac{p}{p^{2}}$

$\Rightarrow \quad \mu_{1}^{\prime}=\frac{1}{p}=$ Mean

and $\quad \mu_{2}^{\prime}=\frac{d^{2}}{d t^{2}} M_{X}(t) / t=0=\frac{d}{d t}\left(\frac{p e^{t}}{\left(1-q e^{t}\right)^{2}}\right) / t=0$

$$
\begin{aligned}
& =\frac{\left(1-q e^{t}\right)^{2} p e^{t}-p e^{t} \cdot 2\left(1-q e^{t}\right)\left(-q e^{t}\right)}{\left(1-q e^{t}\right)^{4}} / t=0 \\
& =\frac{\left(1-q e^{t}\right) p e^{t}+2 p q e^{2 t}}{\left(1-q e^{t}\right)^{3}} / t=0 \\
& =\frac{p e^{t}+p q e^{2 t}}{\left(1-q e^{t}\right)^{3}} / t=0 \\
& =\frac{p+p q}{(1-q)^{3}}=\frac{p(1+q)}{p^{3}}=\frac{1+q}{p^{2}}
\end{aligned}
$$
$\therefore \quad$ Variance, $\mu_{2}=\mu_{2}^{\prime}-\mu_{1}^{\prime 2}$

$\Rightarrow \quad \mu_{2}=\frac{1+q}{p^{2}}-\frac{1}{p^{2}}=\frac{q}{p^{2}}$

## CUMULANT GENERATING FUNCTION AND CUMULANTS

The logarithm of the m.g.f. about the origin of a r.v. $X$ is called the second m.g.f. or cumulant generating function (c.g.f.),

i.e. $\quad K_{X}(t)=\log M_{X}(t)=\sum_{r} K_{r} \frac{t^{r}}{r !}$

and $\quad K_{r}=$ coefficient of $\frac{t^{r}}{r !}$ in $K_{X}(t)$

This $K_{r}$ is known as the $r^{\text {th }}$ cumulant.

Now, using $\log (1+x)=x-\frac{x^{2}}{2}+\frac{x^{3}}{3}-\frac{x^{4}}{4}+\ldots$,

$$
\begin{aligned}
K_{X}(t) & =\log M_{X}(t)=\log \left[E\left(e^{t X}\right)\right] \\
& =\log \left[E\left(1+t X+\frac{t^{2} X^{2}}{2 !}+\ldots+\frac{t^{r} X^{r}}{r !}+\ldots\right)\right] \\
& =\log \left[1+\left(\mu_{1}^{\prime} t+\mu_{2}^{\prime} \frac{t^{2}}{2 !}+\ldots+\mu_{r}^{\prime} \frac{t^{r}}{r !}+\ldots\right)\right] \\
& =\left(\mu_{1}^{\prime} t+\mu_{2}^{\prime} \frac{t^{2}}{2 !}+\ldots+\mu_{r}^{\prime} \frac{t^{r}}{r !}+\ldots\right)-\frac{1}{2}\left(\mu_{1}^{\prime} t+\mu_{2}^{\prime} \frac{t^{2}}{2 !}+\ldots+\mu_{r}^{\prime} \frac{t^{r}}{r !}+\ldots\right)^{2} \\
& \quad+\frac{1}{3}\left(\mu_{1}^{\prime} t+\mu_{2}^{\prime} \frac{t^{2}}{2 !}+\ldots+\mu_{r}^{\prime} \frac{t^{r}}{r !}+\ldots\right)^{3}-\frac{1}{4}\left(\mu_{1}^{\prime} t+\mu_{2}^{\prime} \frac{t^{2}}{2 !}+\ldots+\mu_{r}^{\prime} \frac{t^{r}}{r !}\right)^{4}+\ldots
\end{aligned}
$$

Also $K_{X}(t)=K_{1} t+K_{2} \frac{t^{2}}{2 !}+K_{3} \frac{t^{3}}{3 !}+K_{4} \frac{t^{4}}{4 !}+\ldots$

Equating coefficients of like powers of $t$ in the two expansions, we get

$$
\begin{aligned}
K_{1} & =\text { coefficient of } \frac{t}{1 !}=\mu_{1}^{\prime}=\text { Mean } \\
K_{2} & =\mu_{2}^{\prime}-\mu_{1}^{\prime 2}=\mu_{2}=\text { Variance } \\
K_{3} & =\mu_{3}^{\prime}-3 \mu_{2}^{\prime} \mu_{1}^{\prime}+2 \mu_{1}^{\prime 3}=\mu_{3} \\
\frac{K_{4}}{4 !} & =\frac{\mu_{4}^{\prime}}{4 !}-\frac{1}{2}\left(\frac{\mu_{2}^{\prime 2}}{4}+\frac{2 \mu_{1}^{\prime} \mu_{3}^{\prime}}{3 !}\right)+\frac{1}{3} \cdot \frac{3 \mu_{1}^{\prime 2} \mu_{2}^{\prime}}{2}-\frac{\mu_{1}^{\prime 4}}{4} \\
\Rightarrow \quad K_{4} & =\mu_{4}^{\prime}-3 \mu_{2}^{\prime 2}-4 \mu_{1}^{\prime} \mu_{3}^{\prime}+12 \mu_{1}^{\prime 2} \mu_{2}^{\prime}-6 \mu_{1}^{\prime 4} \\
& =\left(\mu_{4}^{\prime}-4 \mu_{3}^{\prime} \mu_{1}^{\prime}+6 \mu_{2}^{\prime} \mu_{1}^{\prime 2}-3 \mu_{1}^{\prime 4}\right)-3\left(\mu_{2}^{\prime 2}-2 \mu_{2}^{\prime} \mu_{1}^{\prime 2}+\mu_{1}^{\prime 4}\right) \\
\Rightarrow \quad K_{4} & =\mu_{4}-3 K_{2}^{2}=\mu_{4}-3 \mu_{2}^{2}
\end{aligned}
$$

Also $\quad K_{r}=\frac{d^{r}}{d t^{r}} K_{X}(t) / t=0$

## Characteristic function

The characteristic function of a r.v. $X$ is defined as $\phi_{X}(t)=E\left[e^{i t X}\right]$

If $X$ is a continuous r.v., then

$$
\phi_{X}(t)=\int e^{i t x} f(x) d x
$$

If $X$ is a discrete r.v., then

$$
\phi_{X}(t)=\sum e^{i t x} f(x)
$$

## LESSON 6

## BINOMIAL DISTRIBUTION

Let $n$ represent the number of trials in an event.
$x \rightarrow$ number of successes, $(n-x) \rightarrow$ number of failures; then the probability of $x$ consecutive successes followed by $(n-x)$ failures is

$$
\underbrace{p p p \ldots p}_{x} \underbrace{q q \ldots q}_{n-x}=p^{x} q^{n-x}
$$

But $x$ successes in $n$ trials can occur in ${ }^{n} C_{x}$ ways.

$\therefore \quad$ Probability of $x$ successes in any order in $n$ trials $={ }^{n} C_{x} p^{x} q^{n-x}$; here $n$ and $p$ are called the parameters.

Therefore, the probability function of the Binomial distribution is

$$
P(X=x)=B(n, p ; x)=f(x)={ }^{n} C_{x} p^{x} q^{n-x}, \quad x=0,1,2, \ldots, n
$$

Also $p+q=1 \quad$ and $\quad \sum f(x)=1$

Moments about origin of the Binomial distribution

The $r^{\text {th }}$ moment of the Binomial distribution about the origin is

$$
\mu_{r}^{\prime}=E\left[X^{r}\right]=\sum_{x=0}^{n} x^{r} f(x)=\sum_{x=0}^{n} x^{r}{ }^{n} C_{x} p^{x} q^{n-x}
$$

When $r=1$, the $1^{\text {st }}$ moment is

$$
\begin{aligned}
\mu_{1}^{\prime} & =\sum_{x=0}^{n} x{ }^{n} C_{x} p^{x} q^{n-x}=\sum_{x=0}^{n} \frac{x \cdot n !}{x !(n-x) !} p^{x} q^{n-x} \\
& =n p \sum_{x=1}^{n} \frac{(n-1) !}{(x-1) !(n-x) !} p^{x-1} q^{n-x} \\
& =n p \sum_{x=1}^{n}{ }^{n-1} C_{x-1} p^{x-1} q^{(n-1)-(x-1)} \\
& =n p(q+p)^{n-1} \quad\left[\because \sum f(x)=(q+p)^{n}=1\right]
\end{aligned}
$$

$\Rightarrow \quad
\mu_{1}^{\prime}=n p=$ Mean

Now the second moment about the origin is $\mu_{2}^{\prime}=E\left(X^{2}\right)=\sum x^{2} f(x)$

$$
\begin{aligned}
\mu_{2}^{\prime} & =\sum_{x=0}^{n}[x(x-1)+x]{ }^{n} C_{x} p^{x} q^{n-x} \\
& =\sum_{x=0}^{n} x(x-1){ }^{n} C_{x} p^{x} q^{n-x}+\sum_{x=0}^{n} x{ }^{n} C_{x} p^{x} q^{n-x} \\
& =\sum_{x=2}^{n} x(x-1) \frac{n !}{x !(n-x) !} p^{x} q^{n-x}+n p \\
& =\sum_{x=2}^{n} \frac{n(n-1)(n-2) !}{(x-2) !(n-x) !} p^{x} q^{n-x}+n p \\
& =n(n-1) p^{2} \sum_{x=2}^{n} \frac{(n-2) !}{(x-2) !((n-2)-(x-2)) !} p^{x-2} q^{(n-2)-(x-2)}+n p \\
& =n(n-1) p^{2} \sum_{x=2}^{n}{ }^{n-2} C_{x-2} p^{x-2} q^{(n-2)-(x-2)}+n p \\
& =n(n-1) p^{2} \cdot(1)+n p
\end{aligned}
$$

$\Rightarrow \quad \mu_{2}^{\prime}=n(n-1) p^{2}+n p$

Also variance, $\mu_{2}=\mu_{2}^{\prime}-\mu_{1}^{\prime 2}$

$$
\begin{aligned}
\Rightarrow \quad \mu_{2} & =n(n-1) p^{2}+n p-n^{2} p^{2} \\
& =n^{2} p^{2}-n p^{2}+n p-n^{2} p^{2} \\
\Rightarrow \quad \mu_{2} & =n p(1-p)=n p q
\end{aligned}
$$

So, $\quad$ variance $=n p q$

Also variance $\leq$ Mean

$$
\text { i.e.
} n p q \leq n p \quad \text { as } q \leq 1
$$

and $\quad$ variance $=$ Mean if $q=1$

Now $\quad \mu_{3}^{\prime}=E\left(X^{3}\right)=\sum_{x=0}^{n} x^{3} f(x)$

As $\quad x^{3}=x(x-1)(x-2)+3 x(x-1)+x$

So

$$
\begin{aligned}
\mu_{3}^{\prime} & =\sum_{x=0}^{n}[x(x-1)(x-2)+3 x(x-1)+x] f(x) \\
& =\sum_{x=3}^{n} x(x-1)(x-2){ }^{n} C_{x} p^{x} q^{n-x}+3 \sum_{x=2}^{n} x(x-1){ }^{n} C_{x} p^{x} q^{n-x}+\sum_{x=1}^{n} x{ }^{n} C_{x} p^{x} q^{n-x} \\
& =\sum_{x=3}^{n} x(x-1)(x-2) \frac{n !}{x !(n-x) !} p^{x} q^{n-x}+3 n(n-1) p^{2}+n p \\
& =n(n-1)(n-2) p^{3} \sum_{x=3}^{n}{ }^{n-3} C_{x-3} p^{x-3} q^{(n-3)-(x-3)}+3 n(n-1) p^{2}+n p
\end{aligned}
$$

$\Rightarrow \quad \mu_{3}^{\prime}=n(n-1)(n-2) p^{3}+3 n(n-1) p^{2}+n p$

Also the $3^{\text {rd }}$ moment about the mean is

$$
\begin{aligned}
\mu_{3} & =\mu_{3}^{\prime}-3 \mu_{2}^{\prime} \mu_{1}^{\prime}+2 \mu_{1}^{\prime 3} \\
& =n(n-1)(n-2)
p^{3}+3 n(n-1) p^{2}+n p-3\left[n(n-1) p^{2}+n p\right] n p+2 n^{3} p^{3} \\
& =\left(n^{3}-3 n^{2}+2 n\right) p^{3}+3 n^{2} p^{2}-3 n p^{2}+n p-3 n^{3} p^{3}+3 n^{2} p^{3}-3 n^{2} p^{2}+2 n^{3} p^{3} \\
& =n p+2 n p^{3}-3 n p^{2} \\
& =n p\left(1-3 p+2 p^{2}\right)=n p(1-p)(1-2 p)
\end{aligned}
$$

$$
\begin{aligned}
& \mu_{3}=n p q(q-p) \quad[\because 1-p=q, 1-2 p=q-p] \\
& \text { Now } \quad \mu_{4}^{\prime}=\sum_{x=0}^{n} x^{4}{ }^{n} C_{x} p^{x} q^{n-x} \\
& \text { and } \quad x^{4}=x(x-1)(x-2)(x-3)+6 x(x-1)(x-2)+7 x(x-1)+x \\
& \text { So } \quad \mu_{4}^{\prime}=n(n-1)(n-2)(n-3) p^{4}+6 n(n-1)(n-2) p^{3}+7 n(n-1) p^{2}+n p \\
& \text { and } \quad \mu_{4}=\mu_{4}^{\prime}-4 \mu_{3}^{\prime} \mu_{1}^{\prime}+6 \mu_{2}^{\prime} \mu_{1}^{\prime 2}-3 \mu_{1}^{\prime 4} \\
& \Rightarrow \quad \mu_{4}=n p q[1+3(n-2) p q]
\end{aligned}
$$

M.G.F. of the Binomial distribution:

$$
\begin{aligned}
M_{X}(t) & =E\left[e^{t X}\right]=\sum e^{t x} f(x) \\
& =\sum_{x=0}^{n} e^{t x}{ }^{n} C_{x} p^{x} q^{n-x} \\
& =\sum_{x=0}^{n}{ }^{n} C_{x} q^{n-x}\left(p e^{t}\right)^{x} \quad\left[\because e^{t x}=\left(e^{t}\right)^{x}\right] \\
& \Rightarrow \quad M_{X}(t)=\left(q+p e^{t}\right)^{n}
\end{aligned}
$$

Now $\mu_{r}^{\prime}=\frac{d^{r}}{d t^{r}} M_{X}(t) / t=0$, so

$$
\begin{aligned}
& \begin{aligned} \mu_{1}^{\prime} & =\frac{d}{d t}\left(q+p e^{t}\right)^{n} /
t=0 \\
& =n\left(q+p e^{t}\right)^{n-1} p e^{t} / t=0 \\
& =n p(q+p)^{n-1}=n p
\end{aligned} \\
\mu_{2}^{\prime} & =\frac{d}{d t}\left[n\left(q+p e^{t}\right)^{n-1} p e^{t}\right] / t=0
\end{aligned}
$$

$$
\begin{aligned}
& =n p \frac{d}{d t}\left[e^{t}\left(q+p e^{t}\right)^{n-1}\right] / t=0 \\
& =n p\left[e^{t}\left(q+p e^{t}\right)^{n-1}+e^{t}(n-1)\left(q+p e^{t}\right)^{n-2} p e^{t}\right] / t=0 \\
& =n p\left[(q+p)^{n-1}+(n-1)(q+p)^{n-2} p\right] \\
& =n p[1+(n-1) p]=n p+n(n-1) p^{2}
\end{aligned}
$$

Pearson's Coefficients

$$
\begin{array}{ll}
\beta_{1}=\frac{\mu_{3}^{2}}{\mu_{2}^{3}}, \quad \beta_{2}=\frac{\mu_{4}}{\mu_{2}^{2}}, & \alpha_{1}=\sqrt{\beta_{1}}, \quad \alpha_{2}=\beta_{2}-3 \\
\beta_{1}=\frac{(1-2 p)^{2}}{n p q}, & \beta_{2}=3+\frac{1-6 p q}{n p q} \\
\alpha_{1}=\frac{1-2 p}{\sqrt{n p q}}, & \alpha_{2}=\frac{1-6 p q}{n p q}
\end{array}
$$

## Characteristic function

$$
\phi_{X}(t)=E\left[e^{i t X}\right]=\left(q+p e^{i t}\right)^{n}
$$

## Cumulants

The cumulant generating function (c.g.f.) of the Binomial distribution is

$$
\begin{aligned}
K_{X}(t) & =\log M_{X}(t)=\log \left(q+p e^{t}\right)^{n}=n \log \left(q+p e^{t}\right) \\
& =n \log \left[q+p\left(1+t+\frac{t^{2}}{2 !}+\frac{t^{3}}{3 !}+\ldots\right)\right] \\
& =n \log \left[(q+p)+p\left(t+\frac{t^{2}}{2 !}+\frac{t^{3}}{3 !}+\ldots\right)\right] \\
& =n \log \left[1+p\left(t+\frac{t^{2}}{2 !}+\frac{t^{3}}{3 !}+\ldots\right)\right]
\end{aligned}
$$

using $\log (1+x)=x-\frac{x^{2}}{2}+\frac{x^{3}}{3}-\ldots$
, we get

$$
K_{X}(t)=n\left[p\left(t+\frac{t^{2}}{2 !}+\frac{t^{3}}{3 !}+\ldots\right)-\frac{p^{2}}{2}\left(t+\frac{t^{2}}{2 !}+\frac{t^{3}}{3 !}+\ldots\right)^{2}+\frac{p^{3}}{3}\left(t+\frac{t^{2}}{2 !}+\ldots\right)^{3}-\ldots\right]
$$

The $r^{\text {th }}$ cumulant is

$$
K_{r}=\text { coefficient of } \frac{t^{r}}{r !} \text { in } K_{X}(t)=\frac{d^{r}}{d t^{r}} K_{X}(t) / t=0
$$

So $\quad K_{1}=n p=\mu_{1}^{\prime}$

$$
\begin{aligned}
& K_{2}=\text { coefficient of } \frac{t^{2}}{2 !}=n p-n p^{2} \\
& \Rightarrow \quad K_{2}=n p(1-p)=n p q=\mu_{2} \quad[\because p+q=1] \\
& \text { and } \quad \frac{K_{3}}{3 !}=\frac{n p}{3 !}-\frac{n p^{2}}{2} \cdot \frac{2}{2 !}+\frac{n p^{3}}{3} \\
& \Rightarrow \quad K_{3}=n p-3 n p^{2}+2 n p^{3}=n p\left(1-3 p+2 p^{2}\right) \\
& =n p(1-p)(1-2 p) \\
& =n p q(q-p)=\mu_{3}
\end{aligned}
$$

Example: - $p=$ probability of getting a head $=1 / 2$

$q=$ probability of not getting a head $=1 / 2$

The probability of getting $x$ heads in a random throw of 10 coins is

$$
p(x)={ }^{10} C_{x}\left(\frac{1}{2}\right)^{x}\left(\frac{1}{2}\right)^{10-x}={ }^{10} C_{x}\left(\frac{1}{2}\right)^{10} ; \quad x=0,1,2, \ldots, 10
$$

$\therefore \quad$ Probability of getting at least 7 heads is

$$
\begin{aligned}
P(X \geq 7) & =p(7)+p(8)+p(9)+p(10) \\
& =\left(\frac{1}{2}\right)^{10}\left\{{ }^{10} C_{7}+{ }^{10} C_{8}+{ }^{10} C_{9}+{ }^{10} C_{10}\right\} \\
& =\frac{120+45+10+1}{1024}=\frac{176}{1024}=\frac{11}{64}
\end{aligned}
$$

## Recurrence Relation

$$
\mu_{r+1}=p q\left[n r
\mu_{r-1}+\frac{d}{d p} \mu_{r}\right]
$$

where $\mu_{r}$ is the $r^{\text {th }}$ moment about the mean.

Now, $\mu_{r}=E\left[(X-n p)^{r}\right]=\sum_{x=0}^{n}(x-n p)^{r}{ }^{n} C_{x} p^{x} q^{n-x}$

Differentiating w.r.t. $p$, we get

$$
\begin{aligned}
\frac{d \mu_{r}}{d p} & =\sum_{x=0}^{n}\left[\left({ }^{n} C_{x} p^{x} q^{n-x}\right) r(x-n p)^{r-1}(-n)+(x-n p)^{r}{ }^{n} C_{x} \frac{d}{d p}\left(p^{x} q^{n-x}\right)\right] \\
& =-n r \sum_{x=0}^{n}(x-n p)^{r-1}{ }^{n} C_{x} p^{x} q^{n-x}+\sum_{x=0}^{n}(x-n p)^{r}{ }^{n} C_{x}\left[x p^{x-1} q^{n-x}+p^{x}(n-x) q^{n-x-1}(-1)\right] \\
& \quad\left[\because q=1-p, \therefore \frac{d q}{d p}=-1\right] \\
& =-n r \sum_{x=0}^{n}(x-n p)^{r-1} f(x)+\sum_{x=0}^{n}(x-n p)^{r}{ }^{n} C_{x} p^{x} q^{n-x}\left[\frac{x}{p}-\frac{n-x}{q}\right] \\
& =-n r \mu_{r-1}+\sum_{x=0}^{n}(x-n p)^{r}{ }^{n} C_{x} p^{x} q^{n-x} \frac{(x-n p)}{p q} \\
& \quad\left[\because \frac{x}{p}-\frac{n-x}{q}=\frac{x q-n p+x p}{p q}=\frac{x(p+q)-n p}{p q}=\frac{x-n p}{p q}\right]
\end{aligned}
$$

$$
\begin{aligned}
\Rightarrow \quad \frac{d \mu_{r}}{d p} & =-n r \mu_{r-1}+\sum_{x=0}^{n} \frac{(x-n p)^{r+1}}{p q}{ }^{n} C_{x} p^{x} q^{n-x} \\
& =-n r \mu_{r-1}+\frac{1}{p q} \mu_{r+1}
\end{aligned}
$$

## M.G.F about its Mean

$$
M_{X-n p}(t)=\left[q e^{-p t}+p e^{q t}\right]^{n}
$$

Now by definition $M_{X-n p}(t)=\sum_{x=0}^{n} e^{(x-n p) t} f(x)$

$$
\begin{aligned}
& =\sum_{x=0}^{n} e^{x t} e^{-n p t}{ }^{n} C_{x} p^{x} q^{n-x} \\
& =e^{-n p t} \sum_{x=0}^{n}{ }^{n} C_{x} q^{n-x}\left(p e^{t}\right)^{x} \\
& =e^{-n p t}\left(q+p e^{t}\right)^{n} \\
& =\left[e^{-p t}\left(q+p e^{t}\right)\right]^{n} \\
& =\left[q e^{-p t}+p e^{t(1-p)}\right]^{n} \\
& \Rightarrow \quad M_{X-n p}(t)=\left[q e^{-p t}+p e^{q t}\right]^{n} \quad[\because p+q=1]
\end{aligned}
$$

Expanding the exponentials,

$$
\begin{aligned}
M_{X-n p}(t) & =\left[q\left(1-p t+\frac{p^{2} t^{2}}{2 !}-\frac{p^{3} t^{3}}{3 !}+\ldots\right)+p\left(1+q t+\frac{q^{2} t^{2}}{2 !}+\frac{q^{3} t^{3}}{3 !}+\ldots\right)\right]^{n} \\
& =\left[(p+q)+p q(p+q) \frac{t^{2}}{2 !}+p q\left(q^{2}-p^{2}\right) \frac{t^{3}}{3 !}+\ldots\right]^{n}
\end{aligned}
$$

As $p+q=1$, we have

$$
\begin{aligned}
M_{X-n p}(t) & =\left[1+\left(p q \frac{t^{2}}{2 !}+p q(q-p) \frac{t^{3}}{3 !}+\ldots\right)\right]^{n} \\
& =1+n\left[p q \frac{t^{2}}{2 !}+p q(q-p) \frac{t^{3}}{3 !}+\ldots\right]+\frac{n(n-1)}{2 !}\left[p q \frac{t^{2}}{2 !}+p q(q-p) \frac{t^{3}}{3 !}+\ldots\right]^{2}+\ldots \\
& =1+n p q \frac{t^{2}}{2 !}+n p q(q-p) \frac{t^{3}}{3 !}+\ldots
\\
& \therefore \quad \mu_{1}=0
\end{aligned}
$$

$$
\begin{aligned}
& \mu_{2}=\text { coefficient of } \frac{t^{2}}{2 !}=n p q \\
& \mu_{3}=\text { coefficient of } \frac{t^{3}}{3 !}=n p q(q-p)
\end{aligned}
$$

Example: - In 8 throws of a die, a 5 or 6 is considered a success. Find the mean number of successes and the S.D.

Solution: - Here $n=8, p=\frac{2}{6}=\frac{1}{3}, q=\frac{2}{3}$

$\therefore \quad$ Mean $=n p=8 \times \frac{1}{3}=\frac{8}{3}$

and $\quad$ S.D. $=\sqrt{n p q}=\sqrt{\frac{8}{3} \times \frac{2}{3}}=\frac{4}{3}$

Example: - 6 dice are thrown 729 times. How many times do you expect at least 3 dice to show a five or six?

Solution: - $n=6, N=729$

$$
p=\frac{2}{6}=\frac{1}{3}, \quad q=\frac{2}{3}, \quad x=3
$$

$\therefore \quad P(X \geq 3)=f(3)+f(4)+f(5)+f(6)$

Now $f(x)={ }^{n} C_{x} p^{x} q^{n-x}$

$$
\begin{aligned}
\therefore \quad P(X \geq 3) & =\sum_{x=3}^{6}{ }^{n} C_{x} p^{x} q^{n-x} \\
& ={ }^{6} C_{3}\left(\frac{1}{3}\right)^{3}\left(\frac{2}{3}\right)^{3}+{ }^{6} C_{4}\left(\frac{1}{3}\right)^{4}\left(\frac{2}{3}\right)^{2}+6\left(\frac{1}{3}\right)^{5}\left(\frac{2}{3}\right)+\left(\frac{1}{3}\right)^{6}=\frac{233}{729}
\end{aligned}
$$

No.
of times at least 3 dice will show a 5 or 6 is $N \cdot P(X \geq 3)=729 \times \frac{233}{729}=233$.

## Fitting of Binomial distribution

By fitting of a Binomial distribution we mean finding the theoretical or expected frequencies for all values of $X=0,1,2, \ldots, n$.

The expected frequency $F(x)$ for $X=x$ is

$$
F(x)=N f(x)=N{ }^{n} C_{x} p^{x} q^{n-x}
$$

For $x=0$,

$$
F(0)=N q^{n}
$$

Also

$$
\frac{f(x)}{f(x-1)}=\frac{{ }^{n} C_{x} p^{x} q^{n-x}}{{ }^{n} C_{x-1} p^{x-1} q^{n-x+1}}=\frac{n-x+1}{x} \cdot \frac{p}{q}
$$

$\Rightarrow \quad \frac{F(x)}{F(x-1)}=\left(\frac{n-x+1}{x}\right) \frac{p}{q}$

$\Rightarrow \quad F(x)=\left(\frac{n-x+1}{x}\right) \cdot \frac{p}{q} F(x-1), \quad x=1,2, \ldots, n$

## POISSON DISTRIBUTION

The Poisson distribution can be obtained from the Binomial distribution under three conditions:

(i) the no. of trials is large, i.e. $n \rightarrow \infty$

(ii) the probability of success is very small, i.e. $p \rightarrow 0$

(iii) the mean no. of successes is finite, i.e. $n p=m$ (say)

In the case of the Binomial distribution,

$$
f(x)={ }^{n} C_{x} p^{x} q^{n-x}, \quad x=0,1,2, \ldots, n
$$

Taking the limiting case of $f(x)$ in the Binomial distribution, we get $f(x)$ for the Poisson distribution,

$$
f(x)=\frac{m^{x} e^{-m}}{x !}, \quad x=0,1,2,3, \ldots
$$

Now $n \rightarrow \infty, p \rightarrow 0, n p=m$, i.e.
$\quad p=\frac{m}{n}, q=1-\frac{m}{n}$ as $n \rightarrow \infty$

The probability of $x$ successes in the Binomial distribution is

$$
f(x)={ }^{n} C_{x} p^{x} q^{n-x}
$$

The probability of $x$ successes in the Poisson distribution is

$$
\begin{aligned}
f(x) & =\lim _{n \rightarrow \infty}{ }^{n} C_{x} p^{x} q^{n-x} \\
& =\lim _{n \rightarrow \infty}{ }^{n} C_{x}\left(\frac{m}{n}\right)^{x}\left(1-\frac{m}{n}\right)^{n-x} \\
& =\lim _{n \rightarrow \infty} \frac{n !}{x !(n-x) !}\left(\frac{m}{n}\right)^{x}\left(1-\frac{m}{n}\right)^{n-x} \\
& =\lim _{n \rightarrow \infty} \frac{m^{x}}{x !} \cdot \frac{n(n-1)(n-2) \ldots(n-x+1)}{n^{x}}\left(1-\frac{m}{n}\right)^{n}\left(1-\frac{m}{n}\right)^{-x} \\
& =\lim _{n \rightarrow \infty} \frac{m^{x}}{x !}\left(\frac{n}{n}\right)\left(\frac{n-1}{n}\right)\left(\frac{n-2}{n}\right) \ldots\left(\frac{n-x+1}{n}\right)\left(1-\frac{m}{n}\right)^{n}\left(1-\frac{m}{n}\right)^{-x} \\
& =\lim _{n \rightarrow \infty} \frac{m^{x}}{x !} \cdot 1 \cdot\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right) \ldots\left(1-\frac{x-1}{n}\right)\left(1-\frac{m}{n}\right)^{n}\left(1-\frac{m}{n}\right)^{-x} \\
& =\lim _{n \rightarrow \infty} \frac{m^{x}}{x !}\left(1-\frac{m}{n}\right)^{n} \quad[\because \text { the rest of the terms tend to unity as } n \rightarrow \infty]
\end{aligned}
$$

Now

$$
\begin{aligned}
\lim _{n \rightarrow \infty}\left(1-\frac{m}{n}\right)^{n} & =\lim _{n \rightarrow \infty}\left(1-n \cdot \frac{m}{n}+\frac{n(n-1)}{2 !} \frac{m^{2}}{n^{2}}-\frac{n(n-1)(n-2)}{3 !} \frac{m^{3}}{n^{3}}+\ldots\right) \\
& =1-m+\frac{m^{2}}{2 !}-\frac{m^{3}}{3 !}+\ldots=e^{-m}
\end{aligned}
$$

$\therefore \quad$ in the limit,

$$
f(x)=\frac{m^{x}}{x !} e^{-m}, \quad x=0,1,2, \ldots
$$

## Moments of Poisson distribution

$$
\mu_{r}^{\prime}=E\left[X^{r}\right]=r^{\text {th }} \text { moment about the origin }
$$

$\therefore \quad \mu_{1}^{\prime}=E[X]=\sum_{x=0}^{\infty} x f(x)$

$$
\begin{aligned}
& =\sum_{x=0}^{\infty} x e^{-m} \frac{m^{x}}{x !}=\sum_{x=1}^{\infty} \frac{e^{-m} m^{x}}{(x-1) !} \\
& =m e^{-m} \sum_{x=1}^{\infty} \frac{m^{x-1}}{(x-1) !} \\
& =m e^{-m}\left[1+\frac{m}{1 !}+\frac{m^{2}}{2 !}+\ldots\right]
\end{aligned}
$$

$\Rightarrow \quad \mu_{1}^{\prime}=m e^{-m} e^{m}=m$

$\Rightarrow \quad \mu_{1}^{\prime}=m=$ Mean

Now $\mu_{2}^{\prime}=E\left(X^{2}\right)=\sum_{x=0}^{\infty} x^{2} e^{-m} \frac{m^{x}}{x !}$

$$
\begin{aligned}
& =\sum_{x=0}^{\infty}\left[\frac{x(x-1)+x}{x !}\right] e^{-m} m^{x} \\
& =e^{-m}\left[\sum_{x=0}^{\infty} \frac{x(x-1)}{x !} m^{x}+\sum_{x=0}^{\infty} \frac{x m^{x}}{x !}\right] \\
& =e^{-m} \sum_{x=2}^{\infty} \frac{m^{x}}{(x-2) !}+e^{-m} \sum_{x=1}^{\infty} \frac{m^{x}}{(x-1) !} \\
& =m^{2} e^{-m} \sum_{x=2}^{\infty} \frac{m^{x-2}}{(x-2) !}+m \\
& =m^{2} e^{-m} \cdot e^{m}+m \\
&
\Rightarrow \quad \mu_{2}^{\prime}=m^{2}+m
\end{aligned}
$$

So variance $=\mu_{2}^{\prime}-\mu_{1}^{\prime 2}=m^{2}+m-m^{2}=m$, i.e. Mean $=$ Variance $=m$.

Moment generating function about origin

The M.G.F. about the origin is, by definition,

$$
\begin{aligned}
M_{X}(t) & =E\left[e^{t X}\right]=\sum_{x=0}^{\infty} e^{t x} f(x) \\
& =\sum_{x=0}^{\infty} e^{t x} \frac{e^{-m} m^{x}}{x !}=e^{-m} \sum_{x=0}^{\infty} \frac{\left(m e^{t}\right)^{x}}{x !} \\
& =e^{-m}\left[1+m e^{t}+\frac{\left(m e^{t}\right)^{2}}{2 !}+\ldots\right] \\
& =e^{-m} \cdot e^{m e^{t}}
\end{aligned}
$$

$\Rightarrow \quad M_{X}(t)=e^{m\left(e^{t}-1\right)}$

Moment generating function about mean

$$
\begin{aligned}
M_{X-m}(t) & =\sum_{x=0}^{\infty} e^{t(x-m)} f(x)=\sum_{x=0}^{\infty} e^{t(x-m)} \frac{e^{-m} m^{x}}{x !} \\
& =\sum_{x=0}^{\infty} e^{t x} e^{-m t} e^{-m} \frac{m^{x}}{x !} \\
& =e^{-(1+t) m} \sum_{x=0}^{\infty} \frac{\left(m e^{t}\right)^{x}}{x !} \\
& =e^{-m(1+t)} e^{m e^{t}}=e^{m\left(e^{t}-t-1\right)} \\
& =e^{m\left(\left(1+t+\frac{t^{2}}{2 !}+\frac{t^{3}}{3 !}+\ldots\right)-t-1\right)}=e^{m\left(\frac{t^{2}}{2 !}+\frac{t^{3}}{3 !}+\ldots\right)}
\end{aligned}
$$

Now $\quad \mu_{r}^{\prime}=\frac{d^{r}}{d t^{r}} M_{X}(t) / t=0$

$\therefore \quad \mu_{1}^{\prime}=\frac{d}{d t} M_{X}(t) / t=0=\frac{d}{d t} e^{m\left(e^{t}-1\right)} / t=0=e^{m\left(e^{t}-1\right)} m e^{t} / t=0$

$\Rightarrow \quad \mu_{1}^{\prime}=m$

and $\quad \mu_{2}^{\prime}=\frac{d^{2}}{d t^{2}} M_{X}(t) / t=0$

$$
\begin{aligned}
& =\frac{d}{d t}\left[m e^{t} e^{m\left(e^{t}-1\right)}\right] / t=0 \\
& =m\left[e^{t} e^{m\left(e^{t}-1\right)}+e^{t} \cdot e^{m\left(e^{t}-1\right)} m e^{t}\right] / t=0 \\
& =m[1 \cdot 1+1 \cdot 1 \cdot m \cdot 1] \\
& =m(1+m)=m^{2}+m
\end{aligned}
$$

## Characteristic function

$$
\phi_{X}(t)=E\left[e^{i t X}\right]=e^{m\left(e^{i t}-1\right)}
$$

## Cumulant generating function about origin

$$
\begin{aligned}
K_{X}(t) & =\log M_{X}(t)=\log e^{m\left(e^{t}-1\right)}=m\left(e^{t}-1\right) \\
\Rightarrow \quad K_{X}(t) & =m\left[t+\frac{t^{2}}{2 !}+\frac{t^{3}}{3 !}+\ldots\right]
\end{aligned}
$$

The $r^{\text {th }}$ cumulant is

$$
\begin{array}{ll}
& K_{r}=\text { coefficient of } \frac{t^{r}}{r !} \text { in } K_{X}(t) \\
\therefore & K_{1}=m, K_{2}=m, K_{3}=m, \ldots \\
\text { i.e. } & K_{r}=m, \text { for } r=1,2,3, \ldots
\end{array} $$ All cumulants are equal for Poisson distribution ## Recurrence Relation for Moments $$ \mu_{\mathrm{r}+1}=\mathrm{m}\left[\mathrm{r} \mu_{\mathrm{r}-1}+\frac{\mathrm{d} \mu_{\mathrm{r}}}{\mathrm{dm}}\right] $$ Now $r^{\text {th }}$ central moment by definition is $$ \begin{aligned} & \mu_{\mathrm{r}}=\mathrm{E}\left[(\mathrm{X}-\mathrm{m})^{\mathrm{r}}\right] \\ & =\sum_{x=0}^{\infty}(x-m)^{r} f(x)=\sum_{x=0}^{\infty}(x-m)^{r} \frac{e^{-m} m^{x}}{\underline{x}} \\ & \therefore \quad \frac{\mathrm{d} \mu_{\mathrm{r}}}{\mathrm{dm}}=\sum_{\mathrm{x}=0}^{\infty}\left[\mathrm{r}(\mathrm{x}=\mathrm{m})^{\mathrm{r}-1}(-1) \mathrm{e}^{-\mathrm{m}} \frac{\mathrm{m}^{\mathrm{x}}}{\underline{\mathrm{x}}}\right]+\sum_{\mathrm{x}=0}^{\infty}\left[(\mathrm{x}-\mathrm{m})^{\mathrm{r}} \mathrm{e}^{-\mathrm{m}}(-1) \frac{\mathrm{m}^{\mathrm{x}}}{\lfloor\mathrm{x}}\right] \\ & +\sum_{\mathrm{x}=0}^{\infty}(\mathrm{x}-\mathrm{m})^{\mathrm{r}} \mathrm{e}^{-\mathrm{m}} \mathrm{x} \frac{\mathrm{m}^{\mathrm{x}-1}}{\underline{\mathrm{x}}} \\ & =-r \sum_{x=0}^{\infty}(x-m)^{r-1} \frac{e^{-m} m^{x}}{\lfloor x}+\sum_{x=0}^{\infty}(x-m)^{r} \frac{e^{-m} m^{x}}{\lfloor x}\left[\frac{x}{m}-1\right] \\ & =-r \sum_{x=0}^{\infty}(x-m)^{r-1} \frac{e^{-m} m^{x}}{\lfloor x}+\sum_{x=0}^{\infty}(x-m)^{r} \frac{e^{-m} m^{x}}{\lfloor x}\left(\frac{x-m}{m}\right) \\ & =-r \mu_{r-1}+\frac{1}{m} \sum_{x=0}^{\infty}(x-m)^{r+1} \frac{e^{-m} m^{x}}{\lfloor x} \\ & \Rightarrow \quad \frac{\mathrm{d} \mu_{\mathrm{r}}}{\mathrm{dm}}=-\mathrm{r} \mu_{\mathrm{r}-1}+\frac{1}{\mathrm{~m}} \mu_{\mathrm{r}+1} \\ & \Rightarrow \quad \frac{\mathrm{d} \mu_{\mathrm{r}}}{\mathrm{dm}}+\mathrm{r} \mu_{\mathrm{r}-1}=\frac{1}{\mathrm{~m}} \mu_{\mathrm{r}+1} \\ & \Rightarrow \quad \mu_{\mathrm{r}+1}=\mathrm{m}\left[\mathrm{r} \mu_{\mathrm{r}-1}+\frac{\mathrm{d} \mu_{\mathrm{r}}}{\mathrm{dm}}\right] \end{aligned} $$ Recurrence relation or Fitting of Poisson distribution $$ \begin{aligned} & f(x)=\frac{e^{-m} 
m^{x}}{x} \\ & f(x+1)=\frac{e^{-m} m^{x+1}}{\lfloor x+1} \\ \therefore \quad & \frac{f(x+1)}{f(x)}=\frac{m}{x+1} \end{aligned} $$ $\Rightarrow \quad \mathrm{f}(\mathrm{x}+1)=\frac{\mathrm{mf}(\mathrm{x})}{\mathrm{x}+1}$ Now $\quad \mathrm{Nf}(\mathrm{x})=\mathrm{F}(\mathrm{x})$ $\therefore \quad \mathrm{F}(0)=\mathrm{Nf}(0)=\mathrm{Ne}^{-\mathrm{m}}$ $\therefore(1) \Rightarrow \mathrm{Nf}(\mathrm{x}+1)=\left(\frac{\mathrm{m}}{\mathrm{x}+1}\right), \mathrm{f}(\mathrm{x}) \mathrm{N}$ $\Rightarrow \quad \mathrm{F}(\mathrm{x}+1)=\frac{\mathrm{m}}{\mathrm{x}+1} \mathrm{~F}(\mathrm{x}), \mathrm{x}=0,1,2, \ldots$. Put $x=0$ in (2), w get $$ \begin{aligned} & \mathrm{F}(1)=\mathrm{m} F(0)=\mathrm{m} \mathrm{Ne}^{-\mathrm{m}} \\ & \mathrm{F}(2)=\frac{\mathrm{m}}{2} \mathrm{~F}(1)=\frac{\mathrm{m}^{2}}{2} \mathrm{Ne}^{-\mathrm{m}} \\ & \mathrm{F}(3)=\frac{\mathrm{m}}{3} \mathrm{~F}(2)=\frac{\mathrm{m}^{3}}{6} \mathrm{Ne}^{-\mathrm{m}}, \ldots . . \end{aligned} $$ ## Additive Property of Poisson variate If $\mathrm{X}_{1}, \mathrm{X}_{2}$ are two independent Poisson variates with parameters $\mathrm{m}_{1}, \mathrm{~m}_{2}$. Then their sum $\mathrm{X}_{1}+\mathrm{X}_{2}$ is also a Poisson varite with Parameter $\left(\mathrm{m}_{1}+\mathrm{m}_{2}\right)$ i.e. $\mathrm{X}_{1}+\mathrm{X}_{2} \sim \mathrm{P}\left(\mathrm{m}_{1}+\mathrm{m}_{2} ; \mathrm{x}\right)$ Proof: - If $\mathrm{X}$ is a Poisson variable, then $$ f(x)=\frac{e^{-m} m^{x}}{\lfloor x}, x=0,1,2 \ldots $$ $\& \quad M_{x_{1}}(t)=e^{m_{1}\left(e^{t}-1\right)}$ $$ \mathrm{M}_{\mathrm{X}_{2}}(\mathrm{t})=\mathrm{e}^{\mathrm{m}_{2}\left(\mathrm{e}^{\mathrm{t}}-1\right)} $$ "The MGF of sum of two independent r.v. is equal to the product of their MGF." 
By using this result, we have

$$
\begin{aligned}
M_{X_{1}+X_{2}}(t) &=M_{X_{1}}(t) \cdot M_{X_{2}}(t) \\
&=e^{m_{1}\left(e^{t}-1\right)} \cdot e^{m_{2}\left(e^{t}-1\right)}=e^{\left(m_{1}+m_{2}\right)\left(e^{t}-1\right)}
\end{aligned}
$$

This is the MGF of a Poisson variate with parameter $m_{1}+m_{2}$, so $X_{1}+X_{2} \sim P\left(m_{1}+m_{2}\right)$.

Example: - Show that in a Poisson distribution with unit mean, the mean deviation about the mean is $\frac{2}{e}$ times the standard deviation.

Solution: - Here mean $=1$, so variance $=1$ and S.D. $=1$ [in a Poisson distribution, mean $=$ variance].

$$
\begin{aligned}
\text{M.D.}=E[|X-1|] &=\sum_{x=0}^{\infty} \frac{|x-1| e^{-1}}{x !}=e^{-1} \sum_{x=0}^{\infty} \frac{|x-1|}{x !} \\
&=\frac{1}{e}\left[1+0+\frac{1}{2 !}+\frac{2}{3 !}+\frac{3}{4 !}+\ldots\right]
\end{aligned}
$$

The general term of the series $\frac{1}{2 !}+\frac{2}{3 !}+\frac{3}{4 !}+\ldots$ is

$$\frac{n}{(n+1) !}=\frac{n+1-1}{(n+1) !}=\frac{1}{n !}-\frac{1}{(n+1) !}$$

Putting $n=1,2,3, \ldots$, the series telescopes:

$$
\begin{aligned}
\text{M.D.} &=\frac{1}{e}\left[1+\left(\frac{1}{1 !}-\frac{1}{2 !}\right)+\left(\frac{1}{2 !}-\frac{1}{3 !}\right)+\left(\frac{1}{3 !}-\frac{1}{4 !}\right)+\ldots\right] \\
&=\frac{1}{e}(1+1)=\frac{2}{e}=\frac{2}{e} \times \text{S.D.}
\end{aligned}
$$

## M.G.F about mean

The M.G.F about the mean for the Binomial distribution is

$$
\begin{aligned}
M_{X-n p}(t)=E\left[e^{t(X-n p)}\right] &=\left[q e^{-p t}+p e^{(1-p) t}\right]^{n}=\left[q e^{-p t}+p e^{q t}\right]^{n} \\
&=e^{-n p t}\left[q+p e^{t}\right]^{n}
\end{aligned}
$$

$$\therefore \quad \log M_{X-n p}(t)=-n p t+n \log \left[q+p e^{t}\right]$$

Put $n p=m$, so that $p=\frac{m}{n}$ and $q=1-\frac{m}{n}$:

$$
\begin{aligned}
\log M_{X-n p}(t) &=-m t+n \log \left[1-\frac{m}{n}+\frac{m}{n} e^{t}\right]=-m t+n \log \left[1+\frac{m}{n}\left(e^{t}-1\right)\right] \\
&=-m t+n\left[\frac{m}{n}\left(e^{t}-1\right)-\frac{m^{2}}{2 n^{2}}\left(e^{t}-1\right)^{2}+\ldots\right] \\
&=-m t+m\left(e^{t}-1\right)-\frac{m^{2}}{2 n}\left(e^{t}-1\right)^{2}+O\left(\frac{1}{n^{2}}\right)
\end{aligned}
$$

As $n \rightarrow \infty$, we have

$$
\begin{aligned}
\lim _{n \rightarrow \infty} \log M_{X-n p}(t) &=-m t-m+m e^{t} \\
\Rightarrow \quad \lim _{n \rightarrow \infty} M_{X-n p}(t) &=e^{m\left(e^{t}-t-1\right)}
\end{aligned}
$$

which is the M.G.F about the mean of the Poisson distribution.

Example: - If $X$ is a Poisson variate such that $P(X=1)=P(X=2)$, find $P(X=4)$.
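Before the hand computation, this example can be checked numerically; a minimal sketch in plain Python (the helper name `poisson_pmf` is ours, not part of the notes):

```python
from math import exp, factorial

def poisson_pmf(x, m):
    """P(X = x) = e^{-m} m^x / x! for a Poisson variate with parameter m."""
    return exp(-m) * m**x / factorial(x)

# P(X=1) = P(X=2) forces m e^{-m} = (m^2/2) e^{-m}, i.e. m = 2:
m = 2.0
print(abs(poisson_pmf(1, m) - poisson_pmf(2, m)) < 1e-12)  # → True
print(round(poisson_pmf(4, m), 4))  # → 0.0902
```

The hand solution below arrives at the same value, $\frac{16}{24} e^{-2} \approx 0.09$.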
Solution: - The probability of $x$ successes in a Poisson distribution is

$$P(X=x)=f(x)=\frac{e^{-m} m^{x}}{x !}$$

$$\therefore \quad P(X=1)=\frac{e^{-m} m}{1 !}=m e^{-m} \quad \ldots(1)$$

$$P(X=2)=\frac{e^{-m} m^{2}}{2 !}=\frac{m^{2}}{2} e^{-m} \quad \ldots(2)$$

Since $P(X=1)=P(X=2)$, from (1) and (2) we get

$$m e^{-m}=\frac{m^{2}}{2} e^{-m} \Rightarrow m=2$$

Therefore, for $X=4$,

$$P(X=4)=\frac{e^{-m} m^{4}}{4 !}=\frac{e^{-2}(2)^{4}}{24}=\frac{16}{24} e^{-2}=\frac{2}{3}(0.1353)=0.09$$

[Here $e^{-2}$ is found from logarithms: let $a=e^{-2}$; then $\log _{10} a=-2 \log _{10} e=-2(0.4343)=-0.8686=\bar{1} .1314$, so $a=\operatorname{antilog}(\bar{1} .1314)=0.1353$.]

Example: - Six coins are tossed 6400 times. Using the Poisson distribution, what is the probability of getting six heads $x$ times?

Solution: - When a coin is tossed, the probability of getting a head is $\frac{1}{2}$.

When 6 coins are tossed, the probability of getting 6 heads is $\left(\frac{1}{2}\right)^{6}=\frac{1}{64}=p$ (say).

$$\text{Mean}=m=n p=6400 \times \frac{1}{64}=100$$

The probability of getting six heads $x$ times is

$$\frac{e^{-m} m^{x}}{x !}=\frac{e^{-100}(100)^{x}}{x !}$$

Example: - After correcting 50 pages of the proof of a book, the proof-reader finds that there are, on the average, 2 errors per 5 pages. How many pages would one expect to find with $0,1,2,3$ and $4$ errors in 1000 pages of the first print of the book?
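The expected frequencies asked for here follow from the recurrence $F(x+1)=\frac{m}{x+1} F(x)$ derived earlier; a quick numerical sketch in plain Python (values as stated in the example, $m=0.4$ and $N=1000$ pages):

```python
from math import exp

# Expected page counts via the recurrence F(x+1) = m/(x+1) * F(x),
# starting from F(0) = N e^{-m}, with m = 0.4 and N = 1000.
m, N = 0.4, 1000
F = [N * exp(-m)]
for x in range(4):
    F.append(F[-1] * m / (x + 1))

print([round(v) for v in F])  # → [670, 268, 54, 7, 1]
```

The hand computation below tabulates the same values.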
Solution: - Mean $=m=\frac{2}{5}=0.4$. With $X$ a Poisson variate,

$$f(x)=P(X=x)=\frac{e^{-m} m^{x}}{x !}=\frac{e^{-0.4}(0.4)^{x}}{x !}$$

The expected number of pages (out of 1000) with $x$ errors is

$$1000 f(x)=\frac{1000 e^{-0.4}(0.4)^{x}}{x !}$$

We find $f(0), f(1), f(2), f(3), f(4)$ by using the recurrence relation $f(x+1)=\frac{m}{x+1} f(x)$:

| $X=x$ | $f(x)$ | Expected number of pages $=1000 f(x)$ |
|---|---|---|
| 0 | $f(0)=e^{-0.4}=0.6703$ | $670.3 \approx 670$ |
| 1 | $f(1)=\frac{0.4}{0+1} f(0)=0.26812$ | $268.12 \approx 268$ |
| 2 | $f(2)=\frac{0.4}{1+1} f(1)=0.053624$ | $53.62 \approx 54$ |
| 3 | $f(3)=\frac{0.4}{2+1} f(2)=0.0071499$ | $7.15 \approx 7$ |
| 4 | $f(4)=\frac{0.4}{3+1} f(3)=0.00071499$ | $0.71 \approx 1$ |

Example: - In a certain factory making blades, there is a small chance, $\frac{1}{500}$, of any blade being defective. The blades are supplied in packets of 10. Use the Poisson distribution to calculate the approximate number of packets containing no defective, one defective and two defective blades, respectively, in a consignment of 10000 packets.
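This example can first be checked numerically; a minimal sketch in plain Python (note it uses the exact value of $e^{-0.02}$, while the hand solution below uses the rounded value $0.9802$, so the counts agree to the nearest packet):

```python
from math import exp, factorial

# Expected packet counts F(x) = N e^{-m} m^x / x!  with N = 10000 packets
# and m = np = 10 * (1/500) = 0.02 defective blades per packet on average.
N, m = 10000, 10 * (1 / 500)
F = [N * exp(-m) * m**x / factorial(x) for x in range(3)]
print([round(v, 2) for v in F])  # → [9801.99, 196.04, 1.96]
```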
$$\left(\text{given } e^{-0.02}=0.9802\right)$$

Solution: - Let $p$ be the probability of a blade being defective, $p=\frac{1}{500}$, and $n=10$.

$$\therefore \quad \text{Mean}=m=n p=\frac{10}{500}=\frac{1}{50}=0.02$$

The number of packets (out of $N=10000$) containing $x$ defective blades is

$$F(x)=\frac{N e^{-m} m^{x}}{x !}=\frac{10000 \times e^{-0.02}(0.02)^{x}}{x !}=\frac{9802(0.02)^{x}}{x !}$$

| $x$ | $F(x)$ |
|---|---|
| 0 | $F(0)=\frac{9802(0.02)^{0}}{0 !}=9802$ |
| 1 | $F(1)=\frac{9802(0.02)^{1}}{1 !}=196.04 \approx 196$ |
| 2 | $F(2)=\frac{9802(0.02)^{2}}{2 !}=1.96 \approx 2$ |

## Mode of Poisson distribution

Let the most probable value of the r.v. $X$ be $r$, i.e. $r$ is the mode. Then $P(X=r)$ should be greater than both $P(X=r-1)$ and $P(X=r+1)$, i.e.

$$\frac{e^{-m} m^{r-1}}{(r-1) !}<\frac{e^{-m} m^{r}}{r !} \Rightarrow 1<\frac{m}{r} \Rightarrow r<m$$

$$\frac{e^{-m} m^{r}}{r !}>\frac{e^{-m} m^{r+1}}{(r+1) !} \Rightarrow 1>\frac{m}{r+1} \Rightarrow r>m-1$$

$$\Rightarrow \quad m-1<r<m$$

Thus if $m$ is not an integer, the mode is the integral value between $m-1$ and $m$. If $m$ is an integer, there are two modes, $m-1$ and $m$.

Example: - A Poisson distribution has a double mode at $X=3$ and $X=4$. What is the probability that $X$ will take one or the other of these two values?

Solution: -

$$P(X=x)=\frac{e^{-m} m^{x}}{x !}$$

$$\therefore \quad P(X=3)=\frac{e^{-m} m^{3}}{3 !}, \qquad P(X=4)=\frac{e^{-m} m^{4}}{4 !} \quad \ldots(1)$$

Since the distribution has a double mode at $X=3$ and $X=4$,

$$P(X=3)=P(X=4) \Rightarrow \frac{e^{-m} m^{3}}{3 !}=\frac{e^{-m} m^{4}}{4 !} \Rightarrow m=\frac{4 !}{3 !}=4$$

$$\therefore \text{ from (1),} \quad P(X=3)=\frac{e^{-4} 4^{3}}{3 !}=\frac{32}{3} e^{-4}, \qquad P(X=4)=\frac{e^{-4} 4^{4}}{4 !}=\frac{e^{-4} 4^{3}}{3 !}=\frac{32}{3} e^{-4}$$

Now

$$P[(X=3) \cup(X=4)]=P(X=3)+P(X=4)=\frac{64}{3} e^{-4}=\frac{64}{3}(0.0183)=0.3904$$

Example: - If $X$ is a Poisson variate with mean $m$, show that $\frac{X-m}{\sqrt{m}}$ is a variable with mean zero and variance unity.
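This claim can be checked numerically before proving it; a sketch in plain Python ($m=9$ and the truncation point $x=100$ are arbitrary illustrative choices, the neglected tail being negligible):

```python
from math import exp, factorial, sqrt

# Mean and variance of Z = (X - m)/sqrt(m) computed directly from the
# Poisson pmf, with the sum truncated where the tail is vanishingly small.
m = 9.0
pmf = [exp(-m) * m**x / factorial(x) for x in range(100)]
mean_z = sum(p * (x - m) / sqrt(m) for x, p in enumerate(pmf))
var_z = sum(p * (x - m) ** 2 / m for x, p in enumerate(pmf)) - mean_z**2
print(round(mean_z, 9), round(var_z, 9))
```

Both printed values are zero and one (up to floating-point error), as the proof below establishes exactly.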
Solution: - We know that Mean $=E(X)=m$ and $\mu_{2}=m$.

Let $Z=\frac{X-m}{\sqrt{m}}$. Then

$$E(Z)=E\left(\frac{X-m}{\sqrt{m}}\right)=\frac{1}{\sqrt{m}}[E(X)-m]=\frac{1}{\sqrt{m}}[m-m]=0$$

and

$$\operatorname{Var}(Z)=E\left(\frac{X-m}{\sqrt{m}}\right)^{2}=\frac{1}{m} E(X-m)^{2}=\frac{1}{m} \mu_{2}=\frac{1}{m} \cdot m=1$$

Also find the M.G.F of this variable and show that it approaches $e^{\frac{1}{2} t^{2}}$ as $m \rightarrow \infty$.

Let $Y=\frac{X-m}{\sqrt{m}}$. Then

$$
\begin{aligned}
M_{Y}(t)=E\left[e^{t Y}\right] &=E\left[e^{\frac{t(X-m)}{\sqrt{m}}}\right]=e^{-\sqrt{m} t} E\left(e^{\frac{t X}{\sqrt{m}}}\right) \\
&=e^{-\sqrt{m} t} \sum_{x=0}^{\infty} \frac{e^{-m} m^{x}}{x !} e^{\frac{t x}{\sqrt{m}}}=e^{-\sqrt{m} t} e^{-m} \sum_{x=0}^{\infty} \frac{\left(m e^{t / \sqrt{m}}\right)^{x}}{x !} \\
&=e^{m e^{t / \sqrt{m}}-m-\sqrt{m} t}
\end{aligned}
$$

Expanding $e^{t / \sqrt{m}}=1+\frac{t}{\sqrt{m}}+\frac{t^{2}}{2 m}+\frac{t^{3}}{6 m^{3 / 2}}+\ldots$ in the exponent,

$$M_{Y}(t)=e^{\frac{t^{2}}{2}+\frac{t^{3}}{6 \sqrt{m}}+\ldots}$$

As $m \rightarrow \infty$, $\lim _{m \rightarrow \infty} M_{Y}(t)=e^{t^{2} / 2}$, which is the M.G.F of a standard normal variate (Normal distribution).

## GEOMETRIC DISTRIBUTION

Suppose we have a series of independent trials or repetitions, and on each trial the probability of success ' $p$ ' remains the same.
Then the probability that exactly $x$ failures precede the first success is $p q^{x}$, where $X$ is the number of failures preceding the first success in the series of independent trials, $p=P(\text{success in a single trial})$ and $q=1-p$.

e.g., in tossing a coin, the possible outcomes of the experiment are

$$S, \; F S, \; F F S, \; \ldots, \; \underbrace{F F \ldots F}_{x} S, \; \ldots$$

corresponding to $X=0,1,2, \ldots, x, \ldots$ with probabilities $p, q p, q^{2} p, \ldots, q^{x} p, \ldots$

$\therefore$ the probability distribution of $X$ is $f(x)=p q^{x}$, $x=0,1,2, \ldots$

Also $\sum_{x} f(x)=1$, since

$$\sum_{x=0}^{\infty} f(x)=\sum_{x=0}^{\infty} p q^{x}=p\left(1+q+q^{2}+\ldots\right)=p \cdot \frac{1}{1-q}=p \cdot \frac{1}{p}=1$$

## Moments of Geometric distribution

$$
\begin{aligned}
\mu_{1}^{\prime}=E(X) &=\sum_{x=0}^{\infty} x f(x)=\sum_{x=1}^{\infty} x p q^{x}=p q \sum_{x=1}^{\infty} x q^{x-1} \\
&=p q\left(1+2 q+3 q^{2}+\ldots\right)=p q(1-q)^{-2}=p q \cdot \frac{1}{p^{2}}=\frac{q}{p} \\
\Rightarrow \quad \mu_{1}^{\prime} &=\text{Mean}=\frac{q}{p}
\end{aligned}
$$

$$
\begin{aligned}
\mu_{2}^{\prime} &=\sum_{x=0}^{\infty} x^{2} f(x)=\sum_{x=0}^{\infty}[x(x-1)+x] p q^{x} \\
&=\sum_{x=2}^{\infty} x(x-1) p q^{x}+\sum_{x=1}^{\infty} x p q^{x} \\
&=2 p q^{2} \sum_{x=2}^{\infty} \frac{x(x-1)}{2 \cdot 1} q^{x-2}+\frac{q}{p} \\
&=2 p q^{2}(1-q)^{-3}+\frac{q}{p}=\frac{2 q^{2}}{p^{2}}+\frac{q}{p}
\end{aligned}
$$

which is the $2^{\text{nd}}$ moment about the origin.
$\therefore \quad$ Variance, $\mu_{2}=\mu_{2}^{\prime}-\mu_{1}^{\prime 2}$

$$=\frac{2 q^{2}}{p^{2}}+\frac{q}{p}-\frac{q^{2}}{p^{2}}=\frac{q^{2}}{p^{2}}+\frac{q}{p}=\frac{q}{p^{2}}(q+p)=\frac{q}{p^{2}}$$

## MGF about origin

$$
\begin{aligned}
M_{X}(t)=E\left[e^{t X}\right] &=\sum_{x=0}^{\infty} e^{t x} p q^{x}=p \sum_{x=0}^{\infty}\left(q e^{t}\right)^{x} \\
&=p\left[1-q e^{t}\right]^{-1}=\frac{p}{1-q e^{t}}, \quad q e^{t}<1
\end{aligned}
$$

## Moments from MGF

$$\mu_{1}^{\prime}=\left.\frac{d}{d t} M_{X}(t)\right|_{t=0}=\left.\frac{p q e^{t}}{\left(1-q e^{t}\right)^{2}}\right|_{t=0}=\frac{p q}{(1-q)^{2}}=\frac{p q}{p^{2}}=\frac{q}{p}$$

$$
\begin{aligned}
\mu_{2}^{\prime}=\left.\frac{d^{2}}{d t^{2}} M_{X}(t)\right|_{t=0} &=\left.\frac{d}{d t}\left[\frac{p q e^{t}}{\left(1-q e^{t}\right)^{2}}\right]\right|_{t=0} \\
&=\left.p q\left[\frac{\left(1-q e^{t}\right) e^{t}+2 q e^{2 t}}{\left(1-q e^{t}\right)^{3}}\right]\right|_{t=0} \\
&=p q\left[\frac{(1-q)+2 q}{(1-q)^{3}}\right]=p q\left(\frac{p+2 q}{p^{3}}\right)=\frac{q}{p}+\frac{2 q^{2}}{p^{2}}
\end{aligned}
$$

$$
\begin{aligned}
\mu_{3}^{\prime}=\left.\frac{d^{3}}{d t^{3}} M_{X}(t)\right|_{t=0} &=\left.p q \frac{d}{d t}\left[e^{t}\left(1-q e^{t}\right)^{-2}+2 q e^{2 t}\left(1-q e^{t}\right)^{-3}\right]\right|_{t=0} \\
&=p q\left[(1-q)^{-2}+2 q(1-q)^{-3}+4 q(1-q)^{-3}+6 q^{2}(1-q)^{-4}\right] \\
&=p q\left[\frac{1}{p^{2}}+\frac{6 q}{p^{3}}+\frac{6 q^{2}}{p^{4}}\right]=\frac{q}{p}+\frac{6 q^{2}}{p^{2}}+\frac{6 q^{3}}{p^{3}} \\
&=\frac{q}{p}+\frac{6 q^{2}}{p^{3}}(p+q)=\frac{q}{p}+\frac{6 q^{2}}{p^{3}}
\end{aligned}
$$

$$
\begin{aligned}
\mu_{4}^{\prime}=\left.\frac{d^{4}}{d t^{4}} M_{X}(t)\right|_{t=0} &=\left.p q \frac{d}{d t}\left[e^{t}\left(1-q e^{t}\right)^{-2}+6 q e^{2 t}\left(1-q e^{t}\right)^{-3}+6 q^{2} e^{3 t}\left(1-q e^{t}\right)^{-4}\right]\right|_{t=0} \\
&=p q\left[(1-q)^{-2}+14 q(1-q)^{-3}+36 q^{2}(1-q)^{-4}+24 q^{3}(1-q)^{-5}\right] \\
&=p q\left[\frac{1}{p^{2}}+\frac{14 q}{p^{3}}+\frac{36 q^{2}}{p^{4}}+\frac{24 q^{3}}{p^{5}}\right] \\
&=\frac{q}{p}+\frac{14 q^{2}}{p^{2}}+\frac{36 q^{3}}{p^{3}}+\frac{24 q^{4}}{p^{4}}
\end{aligned}
$$

## LESSON 7 THEORETICAL CONTINUOUS DISTRIBUTIONS

## Uniform or rectangular distribution

A continuous r.v. $X$ is said to have a continuous uniform (or rectangular) distribution over an interval $[a, b]$ if its p.d.f. is given by

$$f(x)=\begin{cases}K, & a \leq x \leq b \\ 0, & \text{otherwise}\end{cases}$$

Also $\int_{a}^{b} f(x) d x=1$

$$\Rightarrow \int_{a}^{b} K d x=1 \Rightarrow K(b-a)=1 \Rightarrow K=\frac{1}{b-a}$$

$$\therefore \quad f(x)=\begin{cases}\frac{1}{b-a}, & a \leq x \leq b \\ 0, & \text{otherwise}\end{cases}$$

## Moments

$$\mu_{r}^{\prime}=E\left[X^{r}\right]=\int_{a}^{b} x^{r} f(x) d x=\frac{1}{b-a}\left[\frac{x^{r+1}}{r+1}\right]_{a}^{b}=\frac{b^{r+1}-a^{r+1}}{(b-a)(r+1)}$$

$$\therefore \quad \mu_{1}^{\prime}=\frac{b^{2}-a^{2}}{2(b-a)}=\frac{b+a}{2}=\text{Mean}$$

$$\mu_{2}^{\prime}=\frac{b^{3}-a^{3}}{3(b-a)}=\frac{1}{3}\left(b^{2}+a b+a^{2}\right)$$

Variance, $\mu_{2}=\mu_{2}^{\prime}-\mu_{1}^{\prime 2}$

$$=\frac{1}{3}\left(b^{2}+a b+a^{2}\right)-\frac{1}{4}\left(b^{2}+2 a b+a^{2}\right)=\frac{b^{2}-2 a b+a^{2}}{12}=\frac{(b-a)^{2}}{12}$$

## MOMENT GENERATING FUNCTION

$$M_{X}(t)=\int_{a}^{b} e^{t x} \cdot \frac{1}{b-a} d x=\frac{1}{b-a}\left[\frac{e^{t x}}{t}\right]_{a}^{b}=\frac{e^{b t}-e^{a t}}{t(b-a)}$$

## Characteristic function

$$\phi_{X}(t)=\int_{a}^{b} e^{i t x} f(x) d x=\frac{e^{i b t}-e^{i a t}}{i t(b-a)}$$

## Mean deviation

$$E[|X-\text{Mean}|]=\int_{a}^{b}\left|x-\frac{a+b}{2}\right| \cdot \frac{1}{b-a} d x \quad \ldots(1)$$

Put $t=x-\frac{a+b}{2} \Rightarrow d t=d x$; when $x=a$, $t=-\frac{b-a}{2}$, and when $x=b$, $t=\frac{b-a}{2}$.

$$
\begin{aligned}
\therefore(1) \Rightarrow E[|X-\text{Mean}|] &=\frac{1}{b-a} \int_{-\frac{b-a}{2}}^{\frac{b-a}{2}}|t| d t=\frac{2}{b-a} \int_{0}^{\frac{b-a}{2}} t d t \\
&=\frac{2}{b-a}\left[\frac{t^{2}}{2}\right]_{0}^{\frac{b-a}{2}}=\frac{1}{b-a} \cdot \frac{(b-a)^{2}}{4}=\frac{b-a}{4}
\end{aligned}
$$

Example: - Show that for the rectangular distribution with $f(x)=\frac{1}{2 a}$, $-a<x<a$, the MGF about the origin is $\frac{\sinh a t}{a t}$.
Also show that the moments of even order are given by $\mu_{2 n}=\frac{a^{2 n}}{2 n+1}$.

Solution: - By definition,

$$M_{X}(t)=\int_{-a}^{a} e^{t x} \cdot \frac{1}{2 a} d x=\frac{1}{2 a t}\left[e^{t x}\right]_{-a}^{a}=\frac{1}{2 a t}\left(e^{a t}-e^{-a t}\right)=\frac{\sinh a t}{a t}$$

$$\Rightarrow \quad M_{X}(t)=\frac{1}{a t}\left[a t+\frac{(a t)^{3}}{3 !}+\frac{(a t)^{5}}{5 !}+\ldots\right]=1+\frac{(a t)^{2}}{3 !}+\frac{(a t)^{4}}{5 !}+\ldots$$

Since there are no terms with odd powers of $t$ in $M_{X}(t)$, all moments of odd order about the origin are zero, i.e. $\mu_{2 n+1}^{\prime}$ (about origin) $=0$.

In particular, $\mu_{1}^{\prime}=0$, i.e. Mean $=0$, so the moments about the origin coincide with the moments about the mean:

$$\mu_{r}^{\prime}=E\left[X^{r}\right], \quad \mu_{r}=E\left[(X-\mu)^{r}\right]; \quad \text{if } \mu=\text{Mean}=0, \text{ then } \mu_{r}=E\left[X^{r}\right]=\mu_{r}^{\prime}$$

Therefore $\mu_{2 n+1}=0$, and the moments of even order are given by

$$\mu_{2 n}^{\prime}=\mu_{2 n}=\text{coefficient of } \frac{t^{2 n}}{(2 n) !} \text{ in } M_{X}(t)=\frac{(2 n) !}{(2 n+1) !} a^{2 n}$$

$$\Rightarrow \quad \mu_{2 n}=\frac{a^{2 n}}{2 n+1}$$

## THE EXPONENTIAL DISTRIBUTION

A continuous r.v. $X$ assuming non-negative values is said to have an exponential distribution with parameter $\theta>0$ if its p.d.f. is given by

$$f(x)=\begin{cases}\theta e^{-\theta x}, & x \geq 0 \\ 0, & \text{otherwise}\end{cases}$$

The cumulative distribution function $F(x)$ is given by

$$F(x)=\int_{0}^{x} f(u) d u=\theta \int_{0}^{x} e^{-\theta u} d u=\begin{cases}1-e^{-\theta x}, & x \geq 0 \\ 0, & \text{otherwise}\end{cases}$$

## MOMENT GENERATING FUNCTION

$$
\begin{aligned}
M_{X}(t)=E\left[e^{t X}\right] &=\theta \int_{0}^{\infty} e^{t x} e^{-\theta x} d x=\theta \int_{0}^{\infty} e^{-(\theta-t) x} d x \\
&=\theta\left[\frac{e^{-(\theta-t) x}}{-(\theta-t)}\right]_{0}^{\infty}=\frac{\theta}{\theta-t}, \quad t<\theta
\end{aligned}
$$

$$\Rightarrow \quad M_{X}(t)=\frac{1}{1-\frac{t}{\theta}}=\left(1-\frac{t}{\theta}\right)^{-1}=1+\frac{t}{\theta}+\frac{t^{2}}{\theta^{2}}+\frac{t^{3}}{\theta^{3}}+\ldots=\sum_{r=0}^{\infty}\left(\frac{t}{\theta}\right)^{r} \quad \ldots(1)$$

$$(1) \Rightarrow M_{X}(t)=1+\frac{1 !}{\theta} \cdot \frac{t}{1 !}+\frac{2 !}{\theta^{2}} \cdot \frac{t^{2}}{2 !}+\frac{3 !}{\theta^{3}} \cdot \frac{t^{3}}{3 !}+\ldots$$
$\therefore \quad \mu_{r}^{\prime}=E\left[X^{r}\right]=$ coefficient of $\frac{t^{r}}{\lfloor r}$ in $M_{X}(t)$ $$ =\frac{\lfloor\mathrm{r}}{\theta^{2}}, \quad \mathrm{r}=1,2,3, \ldots \ldots \ldots $$ $\therefore \quad$ Mean $=\mu_{1}^{\prime}=\frac{1}{\theta}$ $$ \mu_{2}^{\prime}=\frac{2}{\theta^{2}} $$ Variance $=\mu_{2}=\mu_{2}^{\prime}-\mu_{1}^{\prime 2}=\frac{2}{\theta^{2}}-\frac{1}{\theta^{2}}=\frac{1}{\theta^{2}}$ Also by definition, Mean $=\int_{0}^{\infty} x f(x) d x=\int_{0}^{\infty} x \theta e^{-\theta x} d x$ $$ \begin{aligned} & =\theta\left[\left(\mathrm{x} \frac{\mathrm{e}^{-\theta x}}{-\theta}\right)_{0}^{\infty}-\int_{0}^{\infty}(1) \frac{\mathrm{e}^{-\theta x}}{-\theta} \mathrm{dx}\right] \\ & =\theta\left[0+(-1) \frac{\mathrm{e}^{-\theta \mathrm{x}}}{\theta^{2}}\right]_{0}^{\infty} \\ & =-\frac{1}{\theta}\left[\mathrm{e}^{-\theta \mathrm{x}}\right]_{0}^{\infty}=\frac{-1}{\theta}[0-1]=\frac{1}{\theta} \end{aligned} $$ $E\left(X^{2}\right)=\mu_{2}^{\prime}=\int_{0}^{\infty} x^{2} f(x) d x=\int_{0}^{\infty} x^{2} \cdot \theta e^{-\theta x} d x$ $$ \begin{aligned} & =\theta\left[\left(\mathrm{x}^{2} \cdot \frac{\mathrm{e}^{-\theta x}}{-\theta}\right)_{0}^{\infty}-\int_{0}^{\infty} 2 \mathrm{x} \cdot \frac{\mathrm{e}^{-\theta \mathrm{x}}}{-\theta} \mathrm{dx}\right] \\ & =\theta\left[0-0+\frac{2}{\theta} \int_{0}^{\infty} \mathrm{x} \mathrm{e}^{-\theta \mathrm{x}} \mathrm{dx}\right] \\ & =\frac{2}{\theta}\left[\theta \int_{0}^{\infty} \mathrm{xe}^{-\theta \mathrm{x}} \mathrm{dx}\right]=\frac{2}{\theta} \cdot \frac{1}{\theta}=\frac{2}{\theta^{2}} \end{aligned} $$ Variance $=\mathrm{E}[\mathrm{X}-\mathrm{E}(\mathrm{x})]^{2}$ $$ \begin{aligned} & =E\left[X-\frac{1}{\theta}\right]^{2}=E\left(X^{2}\right)-\frac{1}{\theta^{2}} \\ & =\frac{2}{\theta^{2}}-\frac{1}{\theta^{2}}=\frac{1}{\theta^{2}} \end{aligned} $$ ## Mean Deviation $$ \begin{aligned} \mathrm{E}|\mathrm{X}-\mathrm{E}(\mathrm{X})| & 
=E\left[\left|X-\frac{1}{\theta}\right|\right] \\
& =\int_{0}^{\infty}\left|x-\frac{1}{\theta}\right| \theta e^{-\theta x} d x \\
& =\frac{\theta}{\theta} \int_{0}^{\infty}|\theta x-1| e^{-\theta x} d x=\int_{0}^{\infty}|\theta x-1| e^{-\theta x} d x
\end{aligned}
$$
Put $\theta x=y \Rightarrow \theta d x=d y$
when $x=0 \Rightarrow y=0$
$$
x=\infty \Rightarrow y=\infty
$$
$$
\begin{aligned}
\therefore \quad E[|X-E(X)|] & =\int_{0}^{\infty}|y-1| e^{-y} \frac{d y}{\theta} \\
& =\frac{1}{\theta} \int_{0}^{\infty}|y-1| e^{-y} d y \quad \ldots(1)
\end{aligned}
$$
Now $|y-1|=-(y-1)$ for $y-1<0$
$\Rightarrow \quad|y-1|=1-y, \quad y<1$
and $\quad|y-1|=y-1$ for $y-1>0$
$$
=y-1 \quad \text { for } y>1
$$
$\therefore \quad(1) \Rightarrow E[|X-E(X)|]=\frac{1}{\theta}\left[\int_{0}^{1}(1-y) e^{-y} d y+\int_{1}^{\infty}(y-1) e^{-y} d y\right]$
$$
=\frac{1}{\theta}\left[\left((1-y) \frac{e^{-y}}{-1}\right)_{0}^{1}-\int_{0}^{1}(-1) \frac{e^{-y}}{-1} d y+\left((y-1) \frac{e^{-y}}{-1}\right)_{1}^{\infty}\right.
$$
$$
\begin{aligned}
& \left.\quad-\int_{1}^{\infty} 1 \cdot \frac{e^{-y}}{-1} d y\right] \\
= & \frac{1}{\theta}\left[0+1-\left(\frac{e^{-y}}{-1}\right)_{0}^{1}+0-0+\left(\frac{e^{-y}}{-1}\right)_{1}^{\infty}\right] \\
= & \frac{1}{\theta}\left[1+\left(e^{-1}-1\right)-\left(0-e^{-1}\right)\right] \\
= & \frac{1}{\theta}\left[e^{-1}+e^{-1}\right]=\frac{2}{\theta} e^{-1}
\end{aligned}
$$

## Cumulants

The c.g.f. is
$$
\begin{aligned}
K_{X}(t) & =\log M_{X}(t)=\log \left(1-\frac{t}{\theta}\right)^{-1}=-\log \left(1-\frac{t}{\theta}\right) \\
& =\frac{t}{\theta}+\frac{1}{2}\left(\frac{t}{\theta}\right)^{2}+\frac{1}{3}\left(\frac{t}{\theta}\right)^{3}+\ldots \\
& =\sum_{r=1}^{\infty}\left(\frac{t}{\theta}\right)^{r} \frac{1}{r}=\sum_{r=1}^{\infty} \frac{t^{r}}{r!}\left(\frac{(r-1)!}{\theta^{r}}\right) \\
\Rightarrow \quad K_{r} & =\text { coefficient of } \frac{t^{r}}{r!} \text { in } K_{X}(t) \\
& =\frac{(r-1)!}{\theta^{r}}, \quad r=1,2,3, \ldots \\
\therefore \quad K_{1} & =\frac{1}{\theta}, \quad K_{2}=\frac{1}{\theta^{2}} \\
K_{3} & =\frac{2}{\theta^{3}}, \quad K_{4}=\frac{6}{\theta^{4}}
\end{aligned}
$$

Example: Let $X$ have the probability density function $f(x)=\theta e^{-\theta x}$ for $x>0$ and zero otherwise. Prove that if the positive part of the $x$-axis is divided into intervals of equal length $h$ starting at the origin, then the probabilities that $X$ will lie in successive intervals form a G.P.
with common ratio $e^{-\theta h}$.

Solution:
$P(0 \leq X \leq h)=\int_{0}^{h} f(x) d x=\int_{0}^{h} \theta e^{-\theta x} d x$
$$
\begin{aligned}
& =\theta\left[\frac{e^{-\theta x}}{-\theta}\right]_{0}^{h}=-1\left[e^{-\theta h}-1\right] \\
& =1-e^{-\theta h} \\
P(h \leq X \leq 2 h) & =\int_{h}^{2 h} \theta e^{-\theta x} d x \\
& =-\frac{\theta}{\theta}\left[e^{-\theta x}\right]_{h}^{2 h}=-\left[e^{-2 \theta h}-e^{-\theta h}\right] \\
& =e^{-\theta h}\left(1-e^{-\theta h}\right) \\
P(2 h \leq X \leq 3 h) & =-\left[e^{-\theta x}\right]_{2 h}^{3 h} \\
& =e^{-2 \theta h}-e^{-3 \theta h}=e^{-2 \theta h}\left(1-e^{-\theta h}\right)
\end{aligned}
$$
and so on. Thus the probabilities that $X$ will lie in successive intervals form a G.P. with common ratio $e^{-\theta h}$.

Example: The r.v. $X$ has the exponential distribution with density function given by $f(x)=2 e^{-2 x}, x>0$.
(i) What is the probability that $X$ is not smaller than two?
(ii) Show that $\frac{\text { Mean }}{\text { S.D. }}=1$.
(iii) Find its M.G.F.

Solution: (i) Probability that $X$ is not smaller than two:
$$
\begin{aligned}
P(X \geq 2) & =\int_{2}^{\infty} f(x) d x=2 \int_{2}^{\infty} e^{-2 x} d x \\
& =2\left[\frac{e^{-2 x}}{-2}\right]_{2}^{\infty}=-1\left[0-e^{-4}\right]=\frac{1}{e^{4}}
\end{aligned}
$$
(ii) Mean $=\mu_{1}^{\prime}=E(X)=\int_{0}^{\infty} x f(x) d x$
$$
\begin{aligned}
& =2 \int_{0}^{\infty} x e^{-2 x} d x \\
& =2\left[\left(x \frac{e^{-2 x}}{-2}\right)_{0}^{\infty}+\int_{0}^{\infty}(1) \frac{e^{-2 x}}{2} d x\right] \\
& =2\left[0-0-\frac{1}{4}\left(e^{-2 x}\right)_{0}^{\infty}\right] \\
& =2\left[-\frac{1}{4}(0-1)\right]=2 \cdot \frac{1}{4}=\frac{1}{2} \\
\mu_{2}^{\prime} & =E\left(X^{2}\right)=\int_{0}^{\infty} x^{2} f(x) d x=2 \int_{0}^{\infty} x^{2} e^{-2 x} d x \\
& =2\left[\left(x^{2} \frac{e^{-2 x}}{-2}\right)_{0}^{\infty}+\int_{0}^{\infty} 2 x \cdot \frac{e^{-2 x}}{2} d x\right]=2\left[0-0+\frac{1}{2} \cdot 2 \int_{0}^{\infty} x e^{-2 x} d x\right]=2\left[\frac{1}{2} \cdot \frac{1}{2}\right]=\frac{1}{2}
\end{aligned}
$$
Variance, $\mu_{2}=\mu_{2}^{\prime}-\mu_{1}^{\prime 2}=\frac{1}{2}-\frac{1}{4}=\frac{1}{4}$
$\therefore \quad$ S.D.
$=\sqrt{\mu_{2}}=\frac{1}{2}$
$\therefore \quad \frac{\text { Mean }}{\text { S.D. }}=\frac{1 / 2}{1 / 2}=1$
(iii) M.G.F., $M_{X}(t)=E\left[e^{t X}\right]=\int_{0}^{\infty} e^{t x} f(x) d x$
$$
\begin{aligned}
& =\int_{0}^{\infty} e^{t x}(2) e^{-2 x} d x=2 \int_{0}^{\infty} e^{(t-2) x} d x=2\left[\frac{e^{(t-2) x}}{(t-2)}\right]_{0}^{\infty} \\
& =\frac{2}{(t-2)}\left[e^{-(2-t) x}\right]_{0}^{\infty}=\frac{2}{t-2}[0-1]=\frac{-2}{t-2}=\frac{2}{2-t}, \quad 2>t
\end{aligned}
$$

## NORMAL DISTRIBUTION

A r.v. $X$ is said to have the Normal distribution with parameters $\mu$ and $\sigma^{2}$, called the Mean \& Variance respectively, if its density function is given by
$$
f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}, \quad-\infty<x<\infty ; \sigma>0 ;-\infty<\mu<\infty
$$
If $Z=\frac{X-\mu}{\sigma}$, then here
Mean $=E(Z)=0$
and Variance $=V(Z)=1=$ S.D.,
and $f(z)=\frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^{2}}, \quad-\infty<z<\infty$
Then $f(z)$ is known as the Standard Normal distribution.
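The statements $E(Z)=0$ and $V(Z)=1$ for the standard normal density can be confirmed with a short numerical sketch (not part of the text); trapezoid integration over $[-8,8]$ suffices because the tails beyond that range are negligible.

```python
import math

# Standard normal density f(z) = exp(-z^2/2)/sqrt(2*pi); check that it
# integrates to 1 and has mean 0 and variance 1.

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def integrate(g, a=-8.0, b=8.0, n=200_000):
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        s += g(a + i * h)
    return s * h

total = integrate(phi)                       # normalization
mean = integrate(lambda z: z * phi(z))       # first moment
var = integrate(lambda z: z * z * phi(z))    # second moment = variance
print(abs(total - 1.0) < 1e-9, abs(mean) < 1e-9, abs(var - 1.0) < 1e-9)
```

The mean integral vanishes because the integrand $z\,\phi(z)$ is odd, mirroring the argument used in the derivation of $E(X)=\mu$ below.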
Now $\int_{-\infty}^{\infty} f(x) d x=\frac{1}{\sigma \sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x$
Put $z=\frac{x-\mu}{\sigma} \Rightarrow \sigma d z=d x$
$\therefore \quad \int_{-\infty}^{\infty} f(x) d x=\frac{1}{\sigma \sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} z^{2}} \cdot \sigma d z$
$$
=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} z^{2}} d z \quad \ldots(1)
$$
$\therefore \quad \int_{-\infty}^{\infty} f(x) d x=\frac{2}{\sqrt{2 \pi}} \int_{0}^{\infty} e^{-\frac{1}{2} z^{2}} d z$
Put $\frac{1}{2} z^{2}=t$
$\Rightarrow \quad z d z=d t \Rightarrow d z=\frac{d t}{z}=\frac{d t}{\sqrt{2 t}}$
$\therefore \quad \int_{-\infty}^{\infty} f(x) d x=\sqrt{\frac{2}{\pi}} \int_{0}^{\infty} e^{-t} \frac{d t}{\sqrt{2 t}}=\frac{1}{\sqrt{\pi}} \int_{0}^{\infty} e^{-t} t^{-1 / 2} d t$
$$
=\frac{1}{\sqrt{\pi}} \cdot \Gamma\left(\frac{1}{2}\right)=\frac{\sqrt{\pi}}{\sqrt{\pi}}=1
$$
$\Rightarrow \quad$ it is a probability density function.
This is called the Normal distribution or Normal probability curve or Gaussian distribution. The standard curve is symmetrical w.r.t. the $y$-axis. The area under the curve between two ordinates gives the corresponding probability, e.g. $P(0 \leq Z \leq z)$ for a fixed $z$.
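The "area under the curve gives the probability" statement can be made concrete with the error function: for a standard normal variate, $P(-z<Z<z)=\operatorname{erf}(z / \sqrt{2})$. The following sketch checks this against the familiar $1\sigma$, $2\sigma$ and $3\sigma$ area figures (the quoted four-decimal values are truncated, hence the loose tolerance).

```python
import math

# Area property of the normal curve: P(-z < Z < z) = erf(z / sqrt(2)).
probs = {z: math.erf(z / math.sqrt(2.0)) for z in (1, 2, 3)}
for z, area in [(1, 0.6826), (2, 0.9544), (3, 0.9973)]:
    print(z, probs[z])
    assert abs(probs[z] - area) < 2e-4
```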
Mean, $\mu_{1}^{\prime}=E(X)=\int_{-\infty}^{\infty} x f(x) d x$
$$
=\int_{-\infty}^{\infty} x \cdot \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{1}{2 \sigma^{2}}(x-\mu)^{2}} d x
$$
Put $z=\frac{x-\mu}{\sigma}$
$\Rightarrow \quad x=\mu+\sigma z \Rightarrow d x=\sigma d z$
$\therefore \quad \mu_{1}^{\prime}=\int_{-\infty}^{\infty}(\mu+\sigma z) \cdot \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{1}{2} z^{2}} \sigma d z$
$$
=\frac{\mu}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} z^{2}} d z+\frac{\sigma}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} z e^{-\frac{1}{2} z^{2}} d z
$$
$\Rightarrow \quad \mu_{1}^{\prime}=\mu(1)+0 \quad[$ from (1)$]\left[\because\right.$ the integrand $z e^{-\frac{1}{2} z^{2}}$ is an odd function of $\left.z\right]$
$\Rightarrow \quad \mu_{1}^{\prime}=\mu \quad$ (Mean)
Variance, $\mu_{2}=E\left[(X-\mu)^{2}\right]$
$$
\begin{aligned}
& =\int_{-\infty}^{\infty}(x-\mu)^{2} \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x \\
& =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \sigma^{2} z^{2} \frac{e^{-\frac{1}{2} z^{2}}}{\sigma} \cdot \sigma d z \\
& =\frac{2 \sigma^{2}}{\sqrt{2 \pi}} \int_{0}^{\infty} z^{2} e^{-\frac{1}{2} z^{2}} d z
\end{aligned}
$$
Put $\frac{z^{2}}{2}=t \Rightarrow d z=\frac{d t}{\sqrt{2 t}}$
$$
\begin{aligned}
\mu_{2} & =\frac{2 \sigma^{2}}{\sqrt{2 \pi}} \int_{0}^{\infty} 2 t e^{-t} \frac{d t}{\sqrt{2 t}} \\
& =\frac{2}{\sqrt{\pi}} \sigma^{2} \int_{0}^{\infty} e^{-t} t^{1 / 2} d t \\
& =\frac{2}{\sqrt{\pi}} \sigma^{2} \Gamma\left(\frac{3}{2}\right)=\frac{2}{\sqrt{\pi}} \sigma^{2} \cdot \frac{1}{2} \Gamma\left(\frac{1}{2}\right) \\
\Rightarrow \quad \mu_{2} & =\frac{\sigma^{2}}{\sqrt{\pi}} \cdot \sqrt{\pi}=\sigma^{2} \quad \Rightarrow \quad
\mu_{2}=\sigma^{2}
\end{aligned}
$$

## Normal distribution as a limiting case of Binomial distribution

It can be derived under these conditions:
(i) $n$, the number of trials, is infinitely large, i.e. $n \rightarrow \infty$;
(ii) neither $p$ nor $q$ is very small.
The p.d.f. of the Binomial distribution is
$$
\begin{aligned}
f(x) & ={ }^{n} C_{x} p^{x} q^{n-x} \\
& =\frac{n!}{x!(n-x)!} p^{x} q^{n-x}, \quad x=0,1,2, \ldots, n \quad \ldots(1)
\end{aligned}
$$
Define the standard variate
$$
Z=\frac{X-E(X)}{\sqrt{\operatorname{var}(X)}}, \quad \text { i.e. } \quad z=\frac{x-n p}{\sqrt{n p q}} \quad \ldots(2)
$$
where $x=0,1,2, \ldots, n$.
When $x=0 \Rightarrow z=-\sqrt{\frac{n p}{q}}$; as $n \rightarrow \infty, z \rightarrow-\infty$.
When $x=n \Rightarrow z=\frac{n-n p}{\sqrt{n p q}}=\frac{n q}{\sqrt{n p q}}=\sqrt{\frac{n q}{p}}$; as $n \rightarrow \infty, z \rightarrow \infty$.
Thus in the limiting case as $n \rightarrow \infty$, $z$ takes values from $-\infty$ to $\infty$.
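Before carrying out the derivation, the limit can be previewed numerically (a sketch with arbitrarily chosen $n$, $p$ and $z$): for large $n$, $\sqrt{n p q}\, P(X=x)$ evaluated at $x=n p+z \sqrt{n p q}$ should be close to the standard normal density at $z$.

```python
import math

def binom_pmf(n, p, x):
    # log-space evaluation to avoid floating-point underflow at large n
    logc = math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
    return math.exp(logc + x * math.log(p) + (n - x) * math.log(1.0 - p))

n, p, z = 10_000, 0.4, 1.0
q = 1.0 - p
x = round(n * p + z * math.sqrt(n * p * q))
scaled = math.sqrt(n * p * q) * binom_pmf(n, p, x)
normal = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
print(abs(scaled - normal) < 2e-3)
```

The discrepancy shrinks like $1 / \sqrt{n}$, consistent with the correction terms collected later in the derivation.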
Using Stirling's approximation formula,
$$
r! \simeq \sqrt{2 \pi}\, e^{-r} r^{r+\frac{1}{2}}
$$
From (1), as $n \rightarrow \infty$ we get
$$
\begin{aligned}
\lim _{n \rightarrow \infty} f(x) & =\lim _{n \rightarrow \infty} \frac{\sqrt{2 \pi} e^{-n} n^{n+\frac{1}{2}} p^{x} q^{n-x}}{\sqrt{2 \pi} e^{-x} x^{x+\frac{1}{2}} \sqrt{2 \pi} e^{-(n-x)}(n-x)^{n-x+\frac{1}{2}}} \\
& =\frac{1}{\sqrt{2 \pi}} \lim _{n \rightarrow \infty} \frac{n^{n+\frac{1}{2}} p^{x} q^{n-x}}{x^{x+\frac{1}{2}}(n-x)^{n-x+\frac{1}{2}}} \cdot \frac{\sqrt{n p q}}{\sqrt{n p q}} \\
& =\frac{1}{\sqrt{2 \pi}} \lim _{n \rightarrow \infty} \frac{n^{n+1} p^{x+\frac{1}{2}} q^{n-x+\frac{1}{2}}}{\sqrt{n p q}\; x^{x+\frac{1}{2}}(n-x)^{n-x+\frac{1}{2}}} \\
& =\frac{1}{\sqrt{2 \pi}} \lim _{n \rightarrow \infty} \frac{1}{\sqrt{n p q}}\left(\frac{n p}{x}\right)^{x+\frac{1}{2}}\left(\frac{n q}{n-x}\right)^{n-x+\frac{1}{2}}
\end{aligned}
$$
From (2), $x=n p+z \sqrt{n p q}$
$$
\therefore \quad \frac{x}{n p}=1+z \frac{\sqrt{n p q}}{n p}=1+z \sqrt{\frac{q}{n p}} \quad \ldots(4)
$$
and $n-x=n-n p-z \sqrt{n p q}$
$$
\begin{aligned}
& =n(1-p)-z \sqrt{n p q}=n q-z \sqrt{n p q} \\
\therefore \quad \frac{n-x}{n q} & =1-z \sqrt{\frac{p}{n q}} \quad \ldots(5)
\end{aligned}
$$
Hence the probability differential of the distribution is
$$
\lim _{n \rightarrow \infty} f(x) d x=\lim _{n \rightarrow \infty} \frac{1}{\sqrt{2 \pi} \sqrt{n p q}} \cdot \frac{1}{N} d x \quad \ldots(6)
$$
where
$N=\left(\frac{x}{n p}\right)^{x+\frac{1}{2}}\left(\frac{n-x}{n q}\right)^{n-x+\frac{1}{2}}$
Using (4) \& (5), we get
$$
\begin{aligned}
N= & \left(1+z \sqrt{\frac{q}{n p}}\right)^{\left(n p+z \sqrt{n p q}+\frac{1}{2}\right)}\left(1-z \sqrt{\frac{p}{n q}}\right)^{\left(n q-z \sqrt{n p q}+\frac{1}{2}\right)} \\
\therefore \log N= & \left(n p+z \sqrt{n p q}+\frac{1}{2}\right) \log \left(1+z \sqrt{\frac{q}{n p}}\right)+\left(n q-z \sqrt{n p q}+\frac{1}{2}\right) \log \left(1-z \sqrt{\frac{p}{n q}}\right) \\
= & \left(n p+z \sqrt{n p q}+\frac{1}{2}\right)\left[z \sqrt{\frac{q}{n p}}-\frac{1}{2} z^{2} \frac{q}{n p}+\frac{1}{3} z^{3} \frac{q^{3 / 2}}{(n p)^{3 / 2}}-\ldots\right] \\
& +\left(n q-z \sqrt{n p q}+\frac{1}{2}\right)\left[-z \sqrt{\frac{p}{n q}}-\frac{z^{2}}{2} \frac{p}{n q}-\frac{z^{3}}{3} \frac{p^{3 / 2}}{(n q)^{3 / 2}}-\ldots\right] \\
= & z \sqrt{n p q}-\frac{z^{2}}{2} q+\frac{z^{3}}{3} \frac{q^{3 / 2}}{\sqrt{n p}}+\ldots+z^{2} q-\frac{z^{3}}{2} \frac{q^{3 / 2}}{\sqrt{n p}}+\frac{z}{2} \sqrt{\frac{q}{n p}}-\frac{z^{2}}{4} \frac{q}{n p} \\
& +\frac{z^{3}}{6} \frac{q^{3 / 2}}{(n p)^{3 / 2}}+\left[-z \sqrt{n p q}-\frac{z^{2}}{2} p-\frac{z^{3}}{3} \frac{p^{3 / 2}}{\sqrt{n q}}+z^{2} p+\frac{z^{3}}{2} \frac{p^{3 / 2}}{\sqrt{n q}}-\frac{z}{2} \sqrt{\frac{p}{n q}}\right] \\
& -\frac{z^{2}}{4} \frac{p}{n q}-\frac{z^{3}}{6} \frac{p^{3 / 2}}{(n q)^{3 / 2}}+\ldots
\end{aligned}
$$
Collecting the terms in decreasing powers of $n$, we get
$$
\begin{aligned}
\log N & =\frac{z^{2}}{2}(p+q)+\frac{1}{\sqrt{n}}\left[\frac{z^{3}}{3!}\left(\frac{-q^{3 / 2}}{\sqrt{p}}+\frac{p^{3 / 2}}{\sqrt{q}}\right)+\frac{z}{2}\left(\sqrt{\frac{q}{p}}-\sqrt{\frac{p}{q}}\right)\right]+\ldots \\
& =\frac{z^{2}}{2}+\frac{1}{\sqrt{n}}\left[\frac{z^{3}}{3!} \frac{p^{2}-q^{2}}{\sqrt{p q}}+\frac{z}{2}\left(\frac{q-p}{\sqrt{p q}}\right)\right]+\left[\text { terms of higher powers of } \frac{1}{\sqrt{n}}\right]
\end{aligned}
$$
Taking the limit as $n \rightarrow \infty$,
$$
\lim _{n \rightarrow \infty} \log N=\frac{z^{2}}{2} \quad \Rightarrow \quad \lim _{n \rightarrow \infty} \frac{1}{N}=e^{-z^{2} / 2}
$$
Putting this value in (6), we get the probability differential of the distribution of $Z$:
$$
\begin{aligned}
d P & =\lim _{n \rightarrow \infty} f(x) d x=\lim _{n \rightarrow \infty} \frac{1}{\sqrt{2 \pi} \sqrt{n p q}} \cdot e^{-z^{2} / 2} \sqrt{n p q}\, d z \quad\left[\because d z=\frac{d x}{\sqrt{n p q}}\right] \\
& =\frac{1}{\sqrt{2 \pi}} e^{-z^{2} / 2} d z
\end{aligned}
$$
Hence the probability function of $Z$ is
$$
f(z)=\frac{1}{\sqrt{2 \pi}} e^{-z^{2} / 2}, \quad-\infty<z<\infty
$$
The probability function of the normal distribution with variable $X$ is
$$
f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}
$$

## Some characteristics of Normal distribution

$$
f(x)=\frac{1}{\sigma \sqrt{2 \pi}}
e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}
$$
where $-\infty<x<\infty$, and $f(z)=\frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^{2}}$ where $z=\frac{x-\mu}{\sigma}$.
(1) The curve is bell shaped \& symmetrical about the line $x=\mu$.
(2) Mean, Median \& Mode coincide.
(3) The maximum of the density occurs at the point $x=\mu$ and is given by
$$
\left.f(x)\right|_{\max }=\frac{1}{\sigma \sqrt{2 \pi}}
$$
(4) $\beta_{1}=0, \beta_{2}=3$.
(5) All odd order central moments are zero, i.e. $\mu_{2 n+1}=0$.
(6) A linear combination of independent normal variates is also a normal variate.
(7) Points of inflexion of the curve are given by $x=\mu \pm \sigma$.
(8) Mean Deviation is $\frac{4}{5} \sigma$ (approx.)
$$
=\sqrt{\frac{2}{\pi}} \sigma
$$
(9) Area property
$$
\begin{aligned}
& P(\mu-\sigma<X<\mu+\sigma)=0.6826 \\
& P(\mu-2 \sigma<X<\mu+2 \sigma)=0.9544 \\
& P(\mu-3 \sigma<X<\mu+3 \sigma)=0.9973
\end{aligned}
$$
When $x=\mu-\sigma, z=-1$, and when $x=\mu+\sigma, z=+1$; similarly for $\pm 2 \sigma$ and $\pm 3 \sigma$, so that
$$
\begin{aligned}
& P(-1<Z<+1)=0.6826 \\
& P(-2<Z<+2)=0.9544 \\
& P(-3<Z<+3)=0.9973
\end{aligned}
$$
Also $P(-z<Z<0)=P(0<Z<z)$ due to symmetry.

Theorem: Prove that Mean, Median and Mode coincide.

## Proof: Mode

To calculate the Mode, we require
$$
f^{\prime}(x)=0, \quad f^{\prime \prime}(x)<0
$$
Now $f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}$
$$
\therefore \quad \log f(x)=-\log \sigma \sqrt{2 \pi}-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}
$$
Differentiating w.r.t.
$x$, we get
$$
\begin{aligned}
& \frac{1}{f(x)} f^{\prime}(x)=0-\frac{2}{2 \sigma^{2}}(x-\mu)=\frac{-1}{\sigma^{2}}(x-\mu) \\
\Rightarrow \quad & f^{\prime}(x)=-\frac{f(x)}{\sigma^{2}}(x-\mu)
\end{aligned}
$$
Now $f^{\prime}(x)=0 \Rightarrow x-\mu=0 \Rightarrow x=\mu$
Also $f^{\prime \prime}(x)=\frac{-1}{\sigma^{2}}\left[1 \cdot f(x)+(x-\mu) f^{\prime}(x)\right]$
$$
\begin{aligned}
& =\frac{-1}{\sigma^{2}}\left[f(x)-(x-\mu) \frac{f(x)(x-\mu)}{\sigma^{2}}\right] \\
& =-\frac{f(x)}{\sigma^{2}}\left[1-\frac{(x-\mu)^{2}}{\sigma^{2}}\right]
\end{aligned}
$$
Putting $x=\mu$, we get
$$
f^{\prime \prime}(x)=-\frac{1}{\sigma^{2}}[f(x)]_{x=\mu}=\frac{-1}{\sigma^{2}} \cdot \frac{1}{\sigma \sqrt{2 \pi}}<0
$$
Hence $x=\mu$ is the mode of the Normal distribution.

## Median

Median is the middle value of the variate. Let $M$ be the Median of the distribution; then
$$
\int_{-\infty}^{M} f(x) d x+\int_{M}^{\infty} f(x) d x=1
$$
Also, by the definition of the Median, $\int_{-\infty}^{M} f(x) d x=\int_{M}^{\infty} f(x) d x$.
So we have $\int_{-\infty}^{M} f(x) d x=\frac{1}{2}$
$\Rightarrow \quad \int_{-\infty}^{M} \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x=\frac{1}{2}$
$\Rightarrow \quad \frac{1}{\sigma \sqrt{2 \pi}} \int_{-\infty}^{\mu} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x+\frac{1}{\sigma \sqrt{2 \pi}} \int_{\mu}^{M} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x=\frac{1}{2}$
or $\quad I_{1}+I_{2}=\frac{1}{2}$
Now $\quad I_{1}=\frac{1}{\sigma \sqrt{2 \pi}} \int_{-\infty}^{\mu} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x$
Put $\quad z=\frac{x-\mu}{\sigma}$
$\therefore \quad I_{1}=\frac{1}{\sigma \sqrt{2 \pi}} \int_{-\infty}^{0} e^{-z^{2} / 2} \sigma d z$
$$
=\frac{1}{\sqrt{2 \pi}}
\int_{-\infty}^{0} e^{-z^{2} / 2} d z
$$
$\Rightarrow \quad I_{1}=\frac{1}{\sqrt{2 \pi}} \int_{0}^{\infty} e^{-\frac{1}{2} z^{2}} d z$
Put $\quad z^{2} / 2=t \Rightarrow z d z=d t \Rightarrow d z=\frac{d t}{\sqrt{2 t}}$
$$
\begin{aligned}
I_{1} & =\frac{1}{\sqrt{2 \pi}} \int_{0}^{\infty} e^{-t} \frac{1}{\sqrt{2}} t^{-1 / 2} d t \\
& =\frac{1}{2 \sqrt{\pi}} \Gamma\left(\frac{1}{2}\right)=\frac{1}{2 \sqrt{\pi}} \cdot \sqrt{\pi}=\frac{1}{2} \\
\Rightarrow \quad I_{2} & =0 \Rightarrow \frac{1}{\sigma \sqrt{2 \pi}} \int_{\mu}^{M} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x=0 \\
\Rightarrow \quad M & =\mu
\end{aligned}
$$
Hence Mean $=$ Median $=$ Mode.

## Moment generating function about origin

$$
\begin{aligned}
M_{X}(t) & =E\left[e^{t X}\right]=\int_{-\infty}^{\infty} e^{t x} f(x) d x \\
& =\frac{1}{\sigma \sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{t x} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x
\end{aligned}
$$
Put $z=\frac{x-\mu}{\sigma}$
$$
\begin{aligned}
\Rightarrow \quad x & =\mu+\sigma z \\
d x & =\sigma d z
\end{aligned}
$$
$$
\begin{aligned}
M_{X}(t) & =\frac{1}{\sigma \sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{t(\mu+\sigma z)} e^{-\frac{1}{2} z^{2}} \cdot \sigma d z \\
& =\frac{e^{\mu t}}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2}\left(z^{2}-2 t \sigma z\right)} d z \\
& =\frac{e^{\mu t}}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2}\left(z^{2}+t^{2} \sigma^{2}-2 t \sigma z\right)+\frac{1}{2} t^{2} \sigma^{2}} d z \\
& =\frac{e^{\mu t}}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2}(z-t \sigma)^{2}} e^{\frac{1}{2} t^{2} \sigma^{2}} d z \\
& =\frac{e^{\mu t+\frac{1}{2} t^{2} \sigma^{2}}}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2}(z-t \sigma)^{2}} d z
\end{aligned}
$$
Put $z-t \sigma=u \Rightarrow d z=d u$
$$
\begin{aligned}
\therefore \quad M_{X}(t) & =\frac{e^{\mu t+\frac{1}{2} t^{2} \sigma^{2}}}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} u^{2}} d u \\
& =e^{\mu t+\frac{1}{2} t^{2} \sigma^{2}}(1) \\
& =e^{\mu t+\frac{1}{2} t^{2} \sigma^{2}}
\end{aligned}
$$

## Moment generating function about mean

$$
\begin{aligned}
M_{X-\mu}(t) & =E\left[e^{t(X-\mu)}\right]=E\left[e^{t X} \cdot e^{-\mu t}\right] \\
& =e^{-\mu t} E\left[e^{t X}\right]=e^{-\mu t} M_{X}(t) \\
& =e^{-\mu t} e^{\mu t+\frac{1}{2} \sigma^{2} t^{2}}=e^{\frac{1}{2} \sigma^{2} t^{2}}
\end{aligned}
$$
Also by definition,
$$
\begin{aligned}
M_{X-\mu}(t) & =\int_{-\infty}^{\infty} \frac{1}{\sigma \sqrt{2 \pi}} e^{t(x-\mu)} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x \\
& =e^{-\mu t} \int_{-\infty}^{\infty} \frac{1}{\sigma \sqrt{2 \pi}} e^{t x} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x \\
& =e^{-\mu t} \cdot e^{\mu t+\frac{1}{2} \sigma^{2} t^{2}}=e^{\frac{1}{2} \sigma^{2} t^{2}}
\end{aligned}
$$

## Moment generating function about any point ' $a$ '

$$
M_{X-a}(t)=e^{t(\mu-a)+\frac{1}{2} \sigma^{2} t^{2}}
$$
By definition,
$$
\begin{aligned}
M_{X-a}(t) & =E\left[e^{t(X-a)}\right] \\
M_{X-a}(t) & =\frac{1}{\sigma
\sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{t(x-a)} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x \\
& =e^{-a t} \cdot \frac{1}{\sigma \sqrt{2 \pi}} \int_{-\infty}^{\infty} e^{t x} \cdot e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x \\
& =e^{-a t} \cdot e^{\mu t+\frac{1}{2} \sigma^{2} t^{2}}=e^{t(\mu-a)+\frac{1}{2} \sigma^{2} t^{2}}
\end{aligned}
$$

## Cumulants

$$
\begin{aligned}
K_{X}(t) & =\log M_{X}(t)=\log \left(e^{\mu t+\frac{1}{2} \sigma^{2} t^{2}}\right) \\
& =\mu t+\frac{1}{2} \sigma^{2} t^{2}
\end{aligned}
$$
The $1^{\text {st }}$ cumulant, $K_{1}=$ coefficient of $t=\mu_{1}^{\prime}$ (Mean) $=\mu$
The $2^{\text {nd }}$ cumulant, $K_{2}=\sigma^{2}=$ variance $=\mu_{2}$
$$
K_{r}=0 \quad \text { for } r=3,4, \ldots
$$
Thus $\mu_{3}=K_{3}=0$ and $\mu_{4}=K_{4}+3 K_{2}^{2}=3 \sigma^{4} \quad\left[\because K_{4}=\mu_{4}-3 K_{2}^{2}\right]$
Now $\beta_{2}=\frac{\mu_{4}}{\mu_{2}^{2}}=\frac{3 \sigma^{4}}{\sigma^{4}}=3$ and $\beta_{1}=\frac{\mu_{3}^{2}}{\mu_{2}^{3}}=0$
$$
\left(\beta_{1}=0, \beta_{2}=3\right)
$$

## Moments from Moment generating function

M.G.F. about mean $=e^{\frac{1}{2} \sigma^{2} t^{2}}$
$$
=1+\frac{1}{2} t^{2} \sigma^{2}+\frac{\left(\frac{t^{2} \sigma^{2}}{2}\right)^{2}}{2!}+\frac{\left(\frac{1}{2} t^{2} \sigma^{2}\right)^{3}}{3!}+\ldots+\frac{\left(\frac{t^{2} \sigma^{2}}{2}\right)^{n}}{n!}+\ldots \quad \ldots(1)
$$
From here,
$\mu_{r}=$ coefficient of $\frac{t^{r}}{r!}$ in $(1)=r$ th moment about mean
Since there is no term with odd powers of $t$ in (1), all moments of odd order about the Mean vanish, i.e.
$\quad \mu_{2 n+1}=0, \quad n=1,2, \ldots$
and $\mu_{2 n}=$ coefficient of $\frac{t^{2 n}}{(2 n)!}$ in $(1)=\frac{\sigma^{2 n}(2 n)!}{2^{n} n!}$
$$
\left[\because \frac{t^{2 n} \sigma^{2 n}}{2^{n} n!}=\left(\frac{\sigma^{2 n}(2 n)!}{2^{n} n!}\right) \frac{t^{2 n}}{(2 n)!}\right]
$$
$\therefore \quad \mu_{2 n}=\frac{\sigma^{2 n}}{2^{n} n!}[2 n(2 n-1)(2 n-2) \ldots 4 \cdot 3 \cdot 2 \cdot 1]$
$=\frac{\sigma^{2 n}}{2^{n} n!}[1 \cdot 3 \cdot 5 \ldots(2 n-1)][2 \cdot 4 \cdot 6 \ldots(2 n-2) \cdot 2 n]$
$=\frac{\sigma^{2 n}}{2^{n} n!}[1 \cdot 3 \cdot 5 \ldots(2 n-1)]\, 2^{n}[1 \cdot 2 \ldots n]$
$=\frac{\sigma^{2 n}}{2^{n} n!}[1 \cdot 3 \cdot 5 \ldots(2 n-1)]\, 2^{n} n!$
$$
\begin{array}{ll}
\Rightarrow & \mu_{2 n}=\sigma^{2 n}[1 \cdot 3 \cdot 5 \ldots(2 n-1)] \\
\therefore & \mu_{2}=1 \cdot \sigma^{2}=\sigma^{2} \\
& \mu_{3}=0, \quad \mu_{4}=1 \cdot 3\, \sigma^{4} \\
& \mu_{5}=0, \quad \mu_{6}=1 \cdot 3 \cdot 5\, \sigma^{6} \\
& \beta_{1}=0, \quad \beta_{2}=3
\end{array}
$$
Also $\mu_{r}^{\prime}=r$ th moment about origin
$$
=E\left[X^{r}\right]=\int_{-\infty}^{\infty} x^{r} f(x) d x
$$
So $\mu_{2 n+1}=$ odd order moments about mean
$$
=\int_{-\infty}^{\infty}(x-\mu)^{2 n+1} f(x) d x
$$
and $\mu_{2 n}=\int_{-\infty}^{\infty}(x-\mu)^{2 n} f(x) d x=$ even order moments about mean

## Characteristic Function

$$
\phi_{X}(t)=E\left[e^{i t X}\right]=e^{i \mu t-\frac{1}{2} t^{2} \sigma^{2}}
$$

## Mean deviation

$$
\begin{aligned}
M.
D & =\int_{-\infty}^{\infty}|x-\mu| f(x) d x \\
& =\int_{-\infty}^{\infty}|x-\mu| \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x
\end{aligned}
$$
Put $\quad z=\frac{x-\mu}{\sigma} \Rightarrow d z=\frac{1}{\sigma} d x$
$$
\begin{aligned}
& =\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty}|\sigma z| \frac{e^{-\frac{1}{2} z^{2}}}{\sigma} \cdot \sigma d z \\
\therefore \quad \text { M.D. } & =\frac{2}{\sqrt{2 \pi}} \int_{0}^{\infty} \sigma z e^{-\frac{1}{2} z^{2}} d z \quad[\because \text { the integrand is an even function of } z]
\end{aligned}
$$
Put $\quad \frac{1}{2} z^{2}=t \Rightarrow z d z=d t$
$\therefore \quad$ M.D. $=\frac{2}{\sqrt{2 \pi}} \sigma \int_{0}^{\infty} e^{-t} d t$
$$
\begin{aligned}
& =-\sqrt{\frac{2}{\pi}} \sigma\left[e^{-t}\right]_{0}^{\infty}=-\sqrt{\frac{2}{\pi}} \sigma[0-1]=\sqrt{\frac{2}{\pi}} \sigma \\
& \approx \frac{4}{5} \sigma=0.8 \sigma \quad \Rightarrow \quad \text { M.D. }=\sqrt{\frac{2}{\pi}} \sigma
\end{aligned}
$$

## Points of Inflexion of a Normal Curve

At the points of inflexion,
$$
f^{\prime \prime}(x)=0, \quad f^{\prime \prime \prime}(x) \neq 0
$$
Now $f^{\prime \prime}(x)=-\frac{f(x)}{\sigma^{2}}\left[1-\frac{(x-\mu)^{2}}{\sigma^{2}}\right]$
Putting $f^{\prime \prime}(x)=0$ gives $1-\frac{(x-\mu)^{2}}{\sigma^{2}}=0$
$\Rightarrow \quad(x-\mu)^{2}=\sigma^{2} \Rightarrow x-\mu= \pm \sigma$
$\Rightarrow \quad x=\mu \pm \sigma$
Now $f^{\prime \prime \prime}(x)=-\frac{f^{\prime}(x)}{\sigma^{2}}\left[1-\frac{(x-\mu)^{2}}{\sigma^{2}}\right]-\frac{f(x)}{\sigma^{2}}\left[\frac{-2(x-\mu)}{\sigma^{2}}\right]$
$$
=-\frac{f^{\prime}(x)}{\sigma^{2}}\left[1-\frac{(x-\mu)^{2}}{\sigma^{2}}\right]+\frac{2
f(x)(x-\mu)}{\sigma^{4}}
$$
At $x=\mu+\sigma$,
$$
\begin{aligned}
f^{\prime \prime \prime}(x) & =\frac{-f^{\prime}(x)}{\sigma^{2}}\left[1-\frac{(\mu+\sigma-\mu)^{2}}{\sigma^{2}}\right]+\frac{2 f(x)}{\sigma^{4}}(\mu+\sigma-\mu) \\
\Rightarrow \quad f^{\prime \prime \prime}(x) & =\frac{-f^{\prime}(x)}{\sigma^{2}}[1-1]+\frac{2 f(x)}{\sigma^{4}} \cdot \sigma \\
& =0+\frac{2 f(x)}{\sigma^{3}}=\frac{2 f(\mu+\sigma)}{\sigma^{3}} \neq 0
\end{aligned}
$$
At $x=\mu-\sigma$,
$$
\begin{aligned}
f^{\prime \prime \prime}(x) & =-\frac{f^{\prime}(x)}{\sigma^{2}}\left[1-\frac{(-\sigma)^{2}}{\sigma^{2}}\right]+\frac{2 f(x)}{\sigma^{4}}(-\sigma) \\
& =-\frac{f^{\prime}(x)}{\sigma^{2}}(1-1)-\frac{2 f(x)}{\sigma^{3}}=0-\frac{2 f(x)}{\sigma^{3}} \\
\Rightarrow \quad f^{\prime \prime \prime}(x) & =\frac{-2 f(x)}{\sigma^{3}}=\frac{-2 f(\mu-\sigma)}{\sigma^{3}} \neq 0
\end{aligned}
$$
In either case,
$$
f^{\prime \prime \prime}(\mu \pm \sigma)=\frac{ \pm 2 f(\mu \pm \sigma)}{\sigma^{3}} \neq 0
$$
Thus the S.D. is the distance of a point of inflexion from the axis of symmetry.

Example: For a normal distribution, $\mu_{1}^{\prime}$ (about 10) $=40$ and $\mu_{4}^{\prime}$ (about 50) $=48$.
Find the Mean and S.D.
Solution: By definition, the first moment about 10 is

$$ \begin{aligned} & \mu_{1}^{\prime}=\int_{-\infty}^{\infty}(x-10) f(x)\, d x=40 \\ \Rightarrow \quad & \int_{-\infty}^{\infty} x f(x)\, d x-10 \int_{-\infty}^{\infty} f(x)\, d x=40 \\ \Rightarrow \quad & \int_{-\infty}^{\infty} x f(x)\, d x=40+10=50 \quad\left[\because \int_{-\infty}^{\infty} f(x)\, d x=1\right] \\ \therefore \quad & \text{Mean}=\int_{-\infty}^{\infty} x f(x)\, d x=50 \end{aligned} $$

As the Mean is 50, $\mu_{4}^{\prime}$ about 50 is the fourth moment about the mean, and for a normal distribution $\mu_{4}=3 \sigma^{4}$:

$$ \begin{aligned} & \mu_{4}=48 \\ \Rightarrow \quad & 3 \sigma^{4}=48 \quad \Rightarrow \sigma^{4}=16=2^{4} \\ \Rightarrow \quad & \sigma=\text{S.D.}=2 \end{aligned} $$

Theorem: If $X_{1}$ and $X_{2}$ are independent normal variates, then $X_{1}+X_{2}$ is also a normal variate.

Proof: Let $X_{1} \sim N\left(\mu_{1}, \sigma_{1}^{2}\right)$ and $X_{2} \sim N\left(\mu_{2}, \sigma_{2}^{2}\right)$. Then

$$ \begin{gathered} M_{X_{1}}(t)=E\left[e^{t X_{1}}\right]=e^{\mu_{1} t+\frac{1}{2} \sigma_{1}^{2} t^{2}} \\ M_{X_{2}}(t)=E\left[e^{t X_{2}}\right]=e^{\mu_{2} t+\frac{1}{2} \sigma_{2}^{2} t^{2}} \end{gathered} $$

As $X_{1}$ and $X_{2}$ are independent,

$$ \begin{aligned} M_{X_{1}+X_{2}}(t) & =M_{X_{1}}(t) \cdot M_{X_{2}}(t) \\ & =e^{\left(\mu_{1}+\mu_{2}\right) t+\frac{1}{2} t^{2}\left(\sigma_{1}^{2}+\sigma_{2}^{2}\right)} \end{aligned} $$

which is the M.G.F. of a normal distribution with mean $\mu_{1}+\mu_{2}$ and variance $\sigma_{1}^{2}+\sigma_{2}^{2}$, i.e. $X_{1}+X_{2} \sim N\left(\mu_{1}+\mu_{2}, \sigma_{1}^{2}+\sigma_{2}^{2}\right)$.

Theorem: A linear combination of independent normal variates is also a normal variate, i.e. if $Y=a_{1} X_{1}+a_{2} X_{2}+\ldots+a_{n} X_{n}$, then $Y$ is also a normal variate.
Proof: The $X_{i}$ are independent normal variates, so

$$ M_{X_{i}}(t)=e^{\mu_{i} t+\frac{1}{2} t^{2} \sigma_{i}^{2}} \quad \ldots(1) $$

and

$$ M_{X_{i}}\left(a_{i} t\right)=e^{\mu_{i} a_{i} t+\frac{1}{2} t^{2} \sigma_{i}^{2} a_{i}^{2}} $$

Now the M.G.F. of $Y=\sum_{i=1}^{n} a_{i} X_{i}$ is

$$ \begin{aligned} M_{\sum a_{i} X_{i}}(t) & =M_{a_{1} X_{1}+a_{2} X_{2}+\ldots+a_{n} X_{n}}(t) \\ & =M_{a_{1} X_{1}}(t)\, M_{a_{2} X_{2}}(t) \ldots M_{a_{n} X_{n}}(t) \quad\left[\because X_{i}\text{'s are independent}\right] \\ & =M_{X_{1}}\left(a_{1} t\right) M_{X_{2}}\left(a_{2} t\right) \ldots M_{X_{n}}\left(a_{n} t\right) \quad\left[\because M_{c X}(t)=M_{X}(c t)\right] \\ & =e^{\mu_{1} a_{1} t+\frac{1}{2} \sigma_{1}^{2} a_{1}^{2} t^{2}} \cdot e^{\mu_{2} a_{2} t+\frac{1}{2} \sigma_{2}^{2} a_{2}^{2} t^{2}} \cdots \quad[\text{from }(1)] \\ & =e^{t\left(\sum a_{i} \mu_{i}\right)+\frac{1}{2} t^{2}\left(\sum a_{i}^{2} \sigma_{i}^{2}\right)} \end{aligned} $$

which is the M.G.F. of a normal distribution with Mean $=\sum a_{i} \mu_{i}$ and Variance $=\sum a_{i}^{2} \sigma_{i}^{2}$, i.e. $Y \sim N\left(\sum a_{i} \mu_{i}, \sum a_{i}^{2} \sigma_{i}^{2}\right)$.

## Remarks

(1) If we take $a_{1}=a_{2}=1,\ a_{3}=a_{4}=\ldots=0$, then $X_{1}+X_{2} \sim N\left(\mu_{1}+\mu_{2}, \sigma_{1}^{2}+\sigma_{2}^{2}\right)$; if we take $a_{1}=1,\ a_{2}=-1,\ a_{3}=a_{4}=\ldots=0$, then $X_{1}-X_{2} \sim N\left(\mu_{1}-\mu_{2}, \sigma_{1}^{2}+\sigma_{2}^{2}\right)$. Thus the sum as well as the difference of two independent normal variates is also a normal variate.

Note: If $X_{i} \sim N\left(\mu, \sigma^{2}\right)$ are identically distributed independent normal variates, then their mean

$$ \bar{X} \sim N\left(\mu, \frac{\sigma^{2}}{n}\right) $$

Taking $a_{1}=a_{2}=\ldots=a_{n}=\frac{1}{n}$ in the result

$$ Y=\sum a_{i} X_{i} \sim N\left(\sum a_{i} \mu_{i}, \sum a_{i}^{2} \sigma_{i}^{2}\right) $$

gives

$$ \frac{1}{n}\left(X_{1}+X_{2}+\ldots+X_{n}\right) \sim N\left\{\frac{1}{n} \sum_{i=1}^{n} \mu_{i}, \frac{1}{n^{2}} \sum_{i=1}^{n} \sigma_{i}^{2}\right\} $$

$$ \Rightarrow \quad \bar{X} \sim N\left\{\frac{1}{n} \cdot n \mu, \frac{1}{n^{2}} \cdot n \sigma^{2}\right\}=N\left(\mu, \frac{\sigma^{2}}{n}\right) \quad\left[\because \text{each } X_{i} \text{ has mean } \mu \text{ and variance } \sigma^{2}\right] $$

Theorem: If $X \sim B(n, p)$, then the M.G.F. of $\frac{X-n p}{\sqrt{n p q}}$ tends to $e^{t^{2} / 2}$ as $n \rightarrow \infty$.
Proof: Let $U=\frac{X-n p}{\sqrt{n p q}}$, where

$$ f(x)={ }^{n} C_{x}\, p^{x} q^{n-x} $$

The M.G.F. of the binomial distribution is

$$ M_{X}(t)=\left(q+p e^{t}\right)^{n} $$

For $U=\frac{X-a}{h}$ we have $M_{U}(t)=e^{-a t / h} M_{X}\left(\frac{t}{h}\right)$, so

$$ M_{U}(t)=e^{-\frac{n p t}{\sqrt{n p q}}}\left[q+p e^{\frac{t}{\sqrt{n p q}}}\right]^{n} $$

$$ \begin{aligned} \therefore \quad \log M_{U}(t) & =\frac{-n p t}{\sqrt{n p q}}+n \log \left[q+p e^{t / \sqrt{n p q}}\right] \\ & =\frac{-n p t}{\sqrt{n p q}}+n \log \left[1+p\left\{\frac{t}{\sqrt{n p q}}+\frac{t^{2}}{2!\, n p q}+\frac{t^{3}}{3!\,(n p q)^{3 / 2}}+\ldots\right\}\right] \\ & =\frac{t^{2}}{2 q}(1-p)+O\left(n^{-1 / 2}\right) \\ & =\frac{t^{2}}{2}+O\left(\frac{1}{\sqrt{n}}\right) \quad[\because 1-p=q] \end{aligned} $$

As $n \rightarrow \infty$, $\log M_{U}(t) \rightarrow t^{2} / 2$,

$$ \Rightarrow \quad M_{U}(t) \rightarrow e^{t^{2} / 2} $$

## Area property

If $X \sim N\left(\mu, \sigma^{2}\right)$, the probability that the r.v. $X$ will lie between $X=\mu$ and $X=x_{1}$ is given by

$$ \begin{aligned} P\left(\mu<X<x_{1}\right) & =\int_{\mu}^{x_{1}} f(x)\, d x \\ & =\frac{1}{\sigma \sqrt{2 \pi}} \int_{\mu}^{x_{1}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} d x \end{aligned} $$

Put $Z=\frac{X-\mu}{\sigma}$. When $X=\mu$, $Z=0$, and when $X=x_{1}$, $Z=z_{1}$ (say), where $z_{1}=\frac{x_{1}-\mu}{\sigma}$.

$$ \therefore \quad P\left(\mu<X<x_{1}\right)=P\left(0<Z<z_{1}\right)=\frac{1}{\sqrt{2 \pi}} \int_{0}^{z_{1}} e^{-\frac{1}{2} z^{2}}\, d z=\int_{0}^{z_{1}} \varphi(z)\, d z $$

where $\varphi(z)=\frac{1}{\sqrt{2 \pi}} e^{-z^{2} / 2}$ is the standard normal probability density function. The definite integral $\int_{0}^{z_{1}} \varphi(z)\, d z$ is known as the normal probability integral and gives the area under the normal curve between the ordinates at $Z=0$ and $Z=z_{1}$.
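The normal probability integral has no closed form, but it can be evaluated numerically through the error function in Python's standard library; a short sketch (the helper name `normal_area` is ours, not from the text):

```python
from math import erf, sqrt

def normal_area(z1):
    """Area under the standard normal curve between Z = 0 and Z = z1,
    i.e. the normal probability integral of phi(z) from 0 to z1."""
    # P(0 < Z < z1) = Phi(z1) - 1/2, where Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return erf(z1 / sqrt(2)) / 2

# Reproduce the standard table entries used in the examples that follow
for z1 in (1, 2, 3):
    print(z1, round(normal_area(z1), 4))
```

This reproduces the table values 0.3413, 0.4772 and 0.4987 quoted below, so a few lines of code can stand in for the printed table at any $z_{1}$.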
These areas are tabulated for different values of $z_{1}$ at intervals of 0.01.

$$ P(\mu-\sigma<X<\mu+\sigma)=P(-1<Z<1)=2 \int_{0}^{1} \varphi(z)\, d z=2(0.3413)=0.6826 $$

and

$$ P(\mu-2 \sigma<X<\mu+2 \sigma)=P(-2<Z<2)=\int_{-2}^{2} \varphi(z)\, d z=2 \int_{0}^{2} \varphi(z)\, d z=2(0.4772)=0.9544 $$

and

$$ P(\mu-3 \sigma<X<\mu+3 \sigma)=P(-3<Z<3)=2(0.4987)=0.9974 $$

and outside the range $\mu \pm 3 \sigma$,

$$ P(|X-\mu|>3 \sigma)=P(|Z|>3)=1-P(-3<Z<3)=1-0.9974=0.0026 $$

Example: If $X \sim N(12,16)$, find (i) $P(X \geq 20)$ (ii) $P(X \leq 20)$ (iii) $P(0 \leq X \leq 12)$ (iv) $x^{\prime}$ such that $P\left(X>x^{\prime}\right)=0.24$.

Solution: (i) Here $\mu=12$, $\sigma=4$, so $Z=\frac{X-\mu}{\sigma}=\frac{X-12}{4}$.

$$ \therefore \quad P(X \geq 20)=P(Z \geq 2)=0.5-P(0 \leq Z<2)=0.5-0.4772=0.0228 $$

(ii) $P(X \leq 20)=1-P(X \geq 20)=1-0.0228=0.9772$

(iii) $P(0 \leq X \leq 12)=P(-3 \leq Z \leq 0)=P(0 \leq Z \leq 3)=0.4987$

(iv) When $X=x^{\prime}$, $Z=\frac{x^{\prime}-12}{4}=z_{1}$ (say) $\quad \ldots(1)$

$$ \therefore \quad P\left(Z>z_{1}\right)=0.24 \Rightarrow P\left(0<Z<z_{1}\right)=0.5-0.24=0.26 $$

From the table, $z_{1}=0.71$, so (1) gives

$$ \frac{x^{\prime}-12}{4}=0.71 \Rightarrow x^{\prime}=14.84 $$

Example: If $X \sim N\left(30,5^{2}\right)$, find (i) $P(26 \leq X \leq 40)$ (ii) $P(X \geq 45)$ (iii) $P(|X-30|>5)$.

Solution: Here $\mu=30$, $\sigma=5$.

(i) When $X=26$, $Z=\frac{X-\mu}{\sigma}=\frac{26-30}{5}=-0.8$, and when $X=40$, $Z=\frac{40-30}{5}=2.0$.

$$ \begin{aligned} \therefore \quad P(26 \leq X \leq 40) & =P(-0.8 \leq Z \leq 2) \\ & =P(-0.8 \leq Z \leq 0)+P(0 \leq Z \leq 2) \\ & =P(0 \leq Z \leq 0.8)+P(0 \leq Z \leq 2) \\ & =0.2881+0.4772=0.7653 \end{aligned} $$

(ii) $P(X \geq 45)=P(Z \geq 3)=0.5-P(0 \leq Z \leq 3)=0.5-0.4987=0.0013$

(iii) First,

$$ P(|X-30| \leq 5)=P(25 \leq X \leq 35)=P(-1 \leq Z \leq 1)=2 P(0 \leq Z \leq 1)=0.6826 $$

Then

$$ P(|X-30|>5)=1-P(|X-30| \leq 5)=1-0.6826=0.3174 $$

Example: If $X \sim N(12,16)$, find $x_{0}^{\prime}$ and $x_{1}^{\prime}$ when $P\left(x_{0}^{\prime} \leq X \leq x_{1}^{\prime}\right)=0.50$ and $P\left(X>x_{1}^{\prime}\right)=0.25$.

Solution: When $X=x_{1}^{\prime}$, $Z=\frac{x_{1}^{\prime}-12}{4}=z_{1}$ (say), and when $X=x_{0}^{\prime}$, $Z=\frac{x_{0}^{\prime}-12}{4}=-z_{1}$ (by symmetry, since the two conditions place $x_{0}^{\prime}$ and $x_{1}^{\prime}$ symmetrically about the mean).

$$ P\left(X>x_{1}^{\prime}\right)=0.25 \Rightarrow P\left(Z>z_{1}\right)=0.25 \Rightarrow P\left(0<Z<z_{1}\right)=0.5-0.25=0.25 $$

From the table, $z_{1}=0.67$. Hence

$$ \frac{x_{1}^{\prime}-12}{4}=0.67 \Rightarrow x_{1}^{\prime}=0.67 \times 4+12=14.68 $$

and

$$ \frac{x_{0}^{\prime}-12}{4}=-0.67 \Rightarrow x_{0}^{\prime}=9.32 $$

## LESSON 8 MULTIPLE AND PARTIAL CORRELATION

INTRODUCTION. When the value of one variable is associated with or influenced by another variable, e.g., the ages of husband and wife, the heights of father and son, or the supply and demand of a commodity, Karl Pearson's coefficient of correlation can be used as a measure of the linear relationship between them. But sometimes there is correlation among many variables and the value of one variable may be influenced by many others; e.g., the yield of crop per acre, say $X_{1}$, depends upon the quality of seed $\left(X_{2}\right)$, fertility of soil $\left(X_{3}\right)$, fertilizer used $\left(X_{4}\right)$, irrigation facilities $\left(X_{5}\right)$, etc. Whenever we are interested in studying the joint effect of a group of variables upon a variable not included in that group, our study is that of multiple correlation and multiple regression. The correlation and regression between only two variates, after eliminating the linear effect of the other variates on them, is called partial correlation and partial regression.

## Yule's Notation:

Let us consider a distribution involving three random variables $X_{1}, X_{2}$ and $X_{3}$.
Then the equation of the plane of regression of $X_{1}$ on $X_{2}$ and $X_{3}$ is

$$ X_{1}=a+b_{12.3} X_{2}+b_{13.2} X_{3} \quad \ldots(1) $$

Without loss of generality, we can assume the variables $X_{1}, X_{2}$ and $X_{3}$ have been measured from their respective means, so that

$$ E\left(X_{1}\right)=E\left(X_{2}\right)=E\left(X_{3}\right)=0 $$

Hence, taking expectations on both sides of (1), we get $a=0$. Thus the plane of regression of $X_{1}$ on $X_{2}$ and $X_{3}$ becomes

$$ X_{1}=b_{12.3} X_{2}+b_{13.2} X_{3} \quad \ldots(2) $$

The coefficients $b_{12.3}$ and $b_{13.2}$ are known as the partial regression coefficients of $X_{1}$ on $X_{2}$ and of $X_{1}$ on $X_{3}$, respectively. The quantity

$$ e_{1.23}=b_{12.3} X_{2}+b_{13.2} X_{3} $$

is called the estimate of $X_{1}$ as given by the plane of regression (2), and the quantity

$$ X_{1.23}=X_{1}-b_{12.3} X_{2}-b_{13.2} X_{3} $$

is called the error of estimate or residual. In the general case of $n$ variables $X_{1}, X_{2}, \ldots, X_{n}$, the equation of the plane of regression of $X_{1}$ on $X_{2}, X_{3}, \ldots, X_{n}$ becomes

$$ X_{1}=b_{12.34 \ldots n} X_{2}+b_{13.24 \ldots n} X_{3}+\ldots+b_{1 n .23 \ldots(n-1)} X_{n} $$

The error of estimate or residual is given by

$$ X_{1.23 \ldots n}=X_{1}-b_{12.34 \ldots n} X_{2}-b_{13.24 \ldots n} X_{3}-\ldots-b_{1 n .23 \ldots(n-1)} X_{n} $$
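The plane of regression, its estimate $e_{1.23}$ and the residual $X_{1.23}$ can be computed directly from data; a minimal sketch on a hypothetical simulated trivariate sample (the data, seed and all helper names here are ours, purely for illustration):

```python
import random

random.seed(0)

# Hypothetical trivariate sample (illustrative only)
n = 200
X2 = [random.gauss(0, 1) for _ in range(n)]
X3 = [0.5 * a + random.gauss(0, 1) for a in X2]
X1 = [1.5 * a - 0.8 * b + random.gauss(0, 1) for a, b in zip(X2, X3)]

def center(v):
    """Measure a variable from its mean, as the text assumes."""
    m = sum(v) / len(v)
    return [x - m for x in v]

X1, X2, X3 = center(X1), center(X2), center(X3)

def S(u, v):
    """Sum of products, e.g. S(X2, X1) stands for sum(X2 * X1)."""
    return sum(a * b for a, b in zip(u, v))

# Normal equations for the plane X1 = b12_3*X2 + b13_2*X3, solved by Cramer's rule:
#   S(X2,X1) = b12_3*S(X2,X2) + b13_2*S(X2,X3)
#   S(X3,X1) = b12_3*S(X2,X3) + b13_2*S(X3,X3)
det = S(X2, X2) * S(X3, X3) - S(X2, X3) ** 2
b12_3 = (S(X2, X1) * S(X3, X3) - S(X2, X3) * S(X3, X1)) / det
b13_2 = (S(X2, X2) * S(X3, X1) - S(X2, X3) * S(X2, X1)) / det

# Residual X1.23; by construction it is orthogonal to X2 and X3
x1_23 = [a - b12_3 * b - b13_2 * c for a, b, c in zip(X1, X2, X3)]
print(abs(S(X2, x1_23)) < 1e-9, abs(S(X3, x1_23)) < 1e-9)  # True True
```

The final line checks the normal equations numerically: the sums $\sum X_{2} X_{1.23}$ and $\sum X_{3} X_{1.23}$ vanish, which is exactly Property 1 of residuals proved later in the lesson.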
The notation used here is due to Yule. The subscripts before the dot (.) are known as primary subscripts and those after the dot are called secondary subscripts. The order of a regression coefficient is determined by the number of secondary subscripts, e.g.,

$$ b_{12.3},\ b_{12.34},\ \ldots,\ b_{12.34 \ldots n} $$

are regression coefficients of order $1, 2, \ldots,(n-2)$ respectively. Thus, in general, a regression coefficient with $p$ secondary subscripts is called a regression coefficient of order $p$. It may be noted that the order in which the secondary subscripts are written is immaterial, but the order of the primary subscripts is important; e.g., in $b_{12.34 \ldots n}$, $X_{2}$ is the independent and $X_{1}$ the dependent variable, while in $b_{21.34 \ldots n}$, $X_{1}$ is the independent and $X_{2}$ the dependent variable. Thus, of the two primary subscripts, the former refers to the dependent variable and the latter to the independent variable. The order of a residual is likewise determined by the number of its secondary subscripts; e.g., $X_{1.23}, X_{1.234}, \ldots, X_{1.234 \ldots n}$ are residuals of order $2, 3, \ldots,(n-1)$ respectively.

Remarks: In what follows we shall assume that the variables under consideration have been measured from their respective means.

## Plane of regression:

The equation of the plane of regression of $X_{1}$ on $X_{2}$ and $X_{3}$ is

$$ X_{1}=b_{12.3} X_{2}+b_{13.2} X_{3} \quad \ldots(3) $$

The constants b's in (3) are determined by the principle of least squares, i.e., by minimizing the sum of the squares of the residuals, viz.,

$$ S=\sum X_{1.23}^{2}=\sum\left(X_{1}-b_{12.3} X_{2}-b_{13.2} X_{3}\right)^{2} $$

the summation being extended over the given values ($N$ in number) of the variables.
The normal equations for estimating $b_{12.3}$ and $b_{13.2}$ are

$$ \left.\begin{array}{l} \frac{\partial S}{\partial b_{12.3}}=0=-2 \sum X_{2}\left(X_{1}-b_{12.3} X_{2}-b_{13.2} X_{3}\right) \\ \frac{\partial S}{\partial b_{13.2}}=0=-2 \sum X_{3}\left(X_{1}-b_{12.3} X_{2}-b_{13.2} X_{3}\right) \end{array}\right\} \quad \ldots(6) $$

Since the $X_{i}$'s are measured from their respective means, we have

$$ \left.\begin{array}{l} \sigma_{i}^{2}=\frac{1}{N} \sum X_{i}^{2}, \quad \operatorname{Cov}\left(X_{i}, X_{j}\right)=\frac{1}{N} \sum X_{i} X_{j} \\ r_{i j}=\frac{\operatorname{Cov}\left(X_{i}, X_{j}\right)}{\sigma_{i} \sigma_{j}}=\frac{\sum X_{i} X_{j}}{N \sigma_{i} \sigma_{j}} \end{array}\right\} \quad \ldots(7) $$

Hence from (6), we get

$$ \begin{aligned} & r_{12} \sigma_{1} \sigma_{2}-b_{12.3} \sigma_{2}^{2}-b_{13.2} r_{23} \sigma_{2} \sigma_{3}=0 \\ & r_{13} \sigma_{1} \sigma_{3}-b_{12.3} r_{23} \sigma_{2} \sigma_{3}-b_{13.2} \sigma_{3}^{2}=0 \end{aligned} \quad \ldots(8) $$

Solving equations (8) for $b_{12.3}$ and $b_{13.2}$, we get

$$ b_{12.3}=\frac{\left|\begin{array}{cc} r_{12} \sigma_{1} & r_{23} \sigma_{3} \\ r_{13} \sigma_{1} & \sigma_{3} \end{array}\right|}{\left|\begin{array}{cc} \sigma_{2} & r_{23} \sigma_{3} \\ r_{23} \sigma_{2} & \sigma_{3} \end{array}\right|}=\frac{\sigma_{1}}{\sigma_{2}} \frac{\left|\begin{array}{cc} r_{12} & r_{23} \\ r_{13} & 1 \end{array}\right|}{\left|\begin{array}{cc} 1 & r_{23} \\ r_{23} & 1 \end{array}\right|} \quad \ldots(9) $$

Similarly, we obtain

$$ b_{13.2}=\frac{\sigma_{1}}{\sigma_{3}} \frac{\left|\begin{array}{cc} 1 & r_{12} \\ r_{23} & r_{13} \end{array}\right|}{\left|\begin{array}{cc} 1 & r_{23} \\ r_{23} & 1 \end{array}\right|} \quad \ldots(10) $$

If we write

$$ \omega=\left|\begin{array}{ccc} 1 & r_{12} & r_{13} \\ r_{21} & 1 & r_{23} \\ r_{31} & r_{32} & 1 \end{array}\right| \quad \ldots(11) $$

and $\omega_{i j}$ is the cofactor of the element in the $i$th row and $j$th column of $\omega$, we have from (9) and (10)

$$ b_{12.3}=-\frac{\sigma_{1}}{\sigma_{2}} \frac{\omega_{12}}{\omega_{11}} \quad \text{and} \quad b_{13.2}=-\frac{\sigma_{1}}{\sigma_{3}} \frac{\omega_{13}}{\omega_{11}} $$

Substituting these values in (3), we get the required equation of the plane of regression of $X_{1}$ on $X_{2}$ and $X_{3}$:

$$ X_{1}=-\frac{\sigma_{1}}{\sigma_{2}} \cdot \frac{\omega_{12}}{\omega_{11}} \cdot X_{2}-\frac{\sigma_{1}}{\sigma_{3}} \cdot \frac{\omega_{13}}{\omega_{11}} \cdot X_{3} $$

$$ \Rightarrow \quad \frac{X_{1}}{\sigma_{1}} \cdot \omega_{11}+\frac{X_{2}}{\sigma_{2}} \cdot \omega_{12}+\frac{X_{3}}{\sigma_{3}} \cdot \omega_{13}=0 $$

Alternatively, eliminating the coefficients $b_{12.3}$ and $b_{13.2}$ between (3) and (8), the required equation of the plane of regression of $X_{1}$ on $X_{2}$ and $X_{3}$ becomes

$$ \left|\begin{array}{ccc} X_{1} & X_{2} & X_{3} \\ r_{12} \sigma_{1} \sigma_{2} & \sigma_{2}^{2} & r_{23} \sigma_{2} \sigma_{3} \\ r_{13} \sigma_{1} \sigma_{3} & r_{23} \sigma_{2} \sigma_{3} & \sigma_{3}^{2} \end{array}\right|=0 $$

Dividing $C_{1}, C_{2}$ and $C_{3}$ by $\sigma_{1}, \sigma_{2}$ and $\sigma_{3}$ respectively, and also $R_{2}$ and $R_{3}$ by $\sigma_{2}$ and $\sigma_{3}$ respectively, we get

$$ \begin{aligned} & \left|\begin{array}{ccc} \frac{X_{1}}{\sigma_{1}} & \frac{X_{2}}{\sigma_{2}} & \frac{X_{3}}{\sigma_{3}} \\ r_{12} & 1 & r_{23} \\ r_{13} & r_{23} & 1 \end{array}\right|=0 \\ \Rightarrow \quad & \frac{X_{1}}{\sigma_{1}} \cdot \omega_{11}+\frac{X_{2}}{\sigma_{2}} \cdot
\omega_{12}+\frac{X_{3}}{\sigma_{3}} \cdot \omega_{13}=0 \end{aligned} $$

where $\omega_{i j}$ is defined in (11).

## Generalization:

In general, the equation of the plane of regression of $X_{1}$ on $X_{2}, X_{3}, \ldots, X_{n}$ is

$$ X_{1}=b_{12.34 \ldots n} X_{2}+b_{13.24 \ldots n} X_{3}+\ldots+b_{1 n .23 \ldots(n-1)} X_{n} \quad \ldots(14) $$

The sum of the squares of the residuals is given by

$$ S=\sum X_{1.23 \ldots n}^{2}=\sum\left(X_{1}-b_{12.34 \ldots n} X_{2}-b_{13.24 \ldots n} X_{3}-\ldots-b_{1 n .23 \ldots(n-1)} X_{n}\right)^{2} $$

Using the principle of least squares, the normal equations for estimating the $(n-1)$ b's are

$$ \begin{aligned} & \frac{\partial S}{\partial b_{12.34 \ldots n}}=0=-2 \sum X_{2}\left(X_{1}-b_{12.34 \ldots n} X_{2}-b_{13.24 \ldots n} X_{3}-\ldots-b_{1 n .23 \ldots(n-1)} X_{n}\right) \\ & \frac{\partial S}{\partial b_{13.24 \ldots n}}=0=-2 \sum X_{3}\left(X_{1}-b_{12.34 \ldots n} X_{2}-b_{13.24 \ldots n} X_{3}-\ldots-b_{1 n .23 \ldots(n-1)} X_{n}\right) \\ & \qquad \vdots \\ & \frac{\partial S}{\partial b_{1 n .23 \ldots(n-1)}}=0=-2 \sum X_{n}\left(X_{1}-b_{12.34 \ldots n} X_{2}-b_{13.24 \ldots n} X_{3}-\ldots-b_{1 n .23 \ldots(n-1)} X_{n}\right) \end{aligned} $$

i.e.,

$$ \sum X_{i} X_{1.23 \ldots n}=0, \quad(i=2,3, \ldots, n) $$

which on simplification, after using (7), give

$$ \begin{aligned} & r_{12} \sigma_{1} \sigma_{2}=b_{12.34 \ldots n} \sigma_{2}^{2}+b_{13.24 \ldots n} r_{23} \sigma_{2} \sigma_{3}+\ldots+b_{1 n .23 \ldots(n-1)} r_{2 n} \sigma_{2} \sigma_{n} \\ & r_{13} \sigma_{1} \sigma_{3}=b_{12.34 \ldots n} r_{23} \sigma_{2} \sigma_{3}+b_{13.24 \ldots n} \sigma_{3}^{2}+\ldots+b_{1 n .23 \ldots(n-1)} r_{3 n} \sigma_{3} \sigma_{n} \\ & \qquad \vdots \\ & r_{1 n} \sigma_{1} \sigma_{n}=b_{12.34 \ldots n} r_{2 n} \sigma_{2} \sigma_{n}+b_{13.24 \ldots n} r_{3 n} \sigma_{3} \sigma_{n}+\ldots+b_{1 n .23 \ldots(n-1)} \sigma_{n}^{2} \end{aligned} \quad \ldots(15) $$

Hence, the elimination of the b's between (14) and (15) yields

$$ \left|\begin{array}{ccccc} X_{1} & X_{2} & X_{3} & \ldots & X_{n} \\ r_{12} \sigma_{1} \sigma_{2} & \sigma_{2}^{2} & r_{23} \sigma_{2} \sigma_{3} & \ldots & r_{2 n} \sigma_{2} \sigma_{n} \\ r_{13} \sigma_{1} \sigma_{3} & r_{23} \sigma_{2} \sigma_{3} & \sigma_{3}^{2} & \ldots & r_{3 n} \sigma_{3} \sigma_{n} \\ \vdots & \vdots & \vdots & & \vdots \\ r_{1 n} \sigma_{1} \sigma_{n} & r_{2 n} \sigma_{2} \sigma_{n} & r_{3 n} \sigma_{3} \sigma_{n} & \ldots & \sigma_{n}^{2} \end{array}\right|=0 $$

Dividing $C_{1}, C_{2}, \ldots, C_{n}$ by $\sigma_{1}, \sigma_{2}, \ldots, \sigma_{n}$ respectively, and also $R_{2}, R_{3}, \ldots, R_{n}$ by $\sigma_{2}, \sigma_{3}, \ldots, \sigma_{n}$ respectively, we get

$$ \left|\begin{array}{ccccc} \frac{X_{1}}{\sigma_{1}} & \frac{X_{2}}{\sigma_{2}} & \frac{X_{3}}{\sigma_{3}} & \ldots & \frac{X_{n}}{\sigma_{n}} \\ r_{12} & 1 & r_{23} & \ldots & r_{2 n} \\ r_{13} & r_{23} & 1 & \ldots & r_{3 n} \\ \vdots & \vdots & \vdots & & \vdots \\ r_{1 n} & r_{2 n} & r_{3 n} & \ldots & 1 \end{array}\right|=0 \quad \ldots(18) $$

If we write

$$ \omega=\left|\begin{array}{ccccc} 1 & r_{12} & r_{13} & \ldots & r_{1 n} \\ r_{21} & 1 & r_{23} & \ldots & r_{2 n} \\ r_{31} & r_{32} & 1 & \ldots & r_{3 n} \\ \vdots & \vdots & \vdots & & \vdots \\ r_{n 1} & r_{n 2} & r_{n 3} & \ldots & 1 \end{array}\right| $$

and $\omega_{i j}$ is the cofactor of the element in the $i$th row and $j$th column of $\omega$, we get from (18)

$$ \frac{X_{1}}{\sigma_{1}} \cdot \omega_{11}+\frac{X_{2}}{\sigma_{2}} \cdot \omega_{12}+\frac{X_{3}}{\sigma_{3}} \cdot \omega_{13}+\ldots+\frac{X_{n}}{\sigma_{n}} \cdot \omega_{1 n}=0 \quad \ldots(20) $$

as the required equation of the plane of regression of $X_{1}$ on $X_{2}, X_{3}, \ldots, X_{n}$.
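The cofactor formulas can be checked numerically; here is a sketch for the trivariate case $n=3$, comparing $b_{12.3}=-\frac{\sigma_{1}}{\sigma_{2}} \frac{\omega_{12}}{\omega_{11}}$ and $b_{13.2}=-\frac{\sigma_{1}}{\sigma_{3}} \frac{\omega_{13}}{\omega_{11}}$ against the direct solution of the normal equations (the simulated data and all names are ours, purely illustrative):

```python
import random

random.seed(1)

# Hypothetical trivariate sample, then measured from its means
n = 500
X2 = [random.gauss(0, 1) for _ in range(n)]
X3 = [0.6 * a + random.gauss(0, 1) for a in X2]
X1 = [2.0 * a - 1.0 * b + random.gauss(0, 1) for a, b in zip(X2, X3)]

def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

X1, X2, X3 = center(X1), center(X2), center(X3)

def S(u, v):
    return sum(a * b for a, b in zip(u, v))

# s_i is proportional to sigma_i; the proportionality cancels in the ratios
s1, s2, s3 = (S(v, v) ** 0.5 for v in (X1, X2, X3))
r12 = S(X1, X2) / (s1 * s2)
r13 = S(X1, X3) / (s1 * s3)
r23 = S(X2, X3) / (s2 * s3)

# Cofactors of omega = |1 r12 r13; r21 1 r23; r31 r32 1|
w11 = 1 - r23 ** 2
w12 = -(r12 - r13 * r23)
w13 = r12 * r23 - r13

b12_3 = -(s1 / s2) * w12 / w11
b13_2 = -(s1 / s3) * w13 / w11

# Direct least-squares solution of the normal equations, by Cramer's rule
det = S(X2, X2) * S(X3, X3) - S(X2, X3) ** 2
b12_3_ls = (S(X2, X1) * S(X3, X3) - S(X2, X3) * S(X3, X1)) / det
b13_2_ls = (S(X2, X2) * S(X3, X1) - S(X2, X3) * S(X2, X1)) / det
print(abs(b12_3 - b12_3_ls) < 1e-9, abs(b13_2 - b13_2_ls) < 1e-9)  # True True
```

Both routes agree to floating-point accuracy, which is exactly what the algebraic derivation above guarantees.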
Equation (20) can be re-written as

$$ X_{1}=-\frac{\sigma_{1}}{\sigma_{2}} \cdot \frac{\omega_{12}}{\omega_{11}} \cdot X_{2}-\frac{\sigma_{1}}{\sigma_{3}} \cdot \frac{\omega_{13}}{\omega_{11}} \cdot X_{3}-\ldots-\frac{\sigma_{1}}{\sigma_{n}} \cdot \frac{\omega_{1 n}}{\omega_{11}} \cdot X_{n} \quad \ldots(21) $$

Comparing (21) with (14), we get

$$ \left.\begin{array}{l} b_{12.34 \ldots n}=-\frac{\sigma_{1}}{\sigma_{2}} \frac{\omega_{12}}{\omega_{11}} \\ b_{13.24 \ldots n}=-\frac{\sigma_{1}}{\sigma_{3}} \frac{\omega_{13}}{\omega_{11}} \\ \qquad \vdots \\ b_{1 n .23 \ldots(n-1)}=-\frac{\sigma_{1}}{\sigma_{n}} \frac{\omega_{1 n}}{\omega_{11}} \end{array}\right\} $$

## Properties of residuals

Property 1. The sum of the products of any residual of order zero with any other residual of higher order is zero, provided the subscript of the former occurs among the secondary subscripts of the latter. The normal equations for estimating the b's in the trivariate and $n$-variate distributions are

$$ \sum X_{2} X_{1.23}=0, \quad \sum X_{3} X_{1.23}=0 $$

and

$$ \sum X_{i} X_{1.23 \ldots n}=0 ; \quad i=2,3, \ldots, n $$

respectively. Here $X_{i}$, $(i=1,2,3, \ldots, n)$ can be regarded as a residual of order zero.

Property 2. The sum of the products of any two residuals in which all the secondary subscripts of the first occur among the secondary subscripts of the second is unaltered if we omit any or all of the secondary subscripts of the first, e.g.,

$$ \sum X_{1.2} X_{1.23}=\sum X_{1} X_{1.23} $$

Also $\sum X_{1.23}^{2}=\sum X_{1} X_{1.23}$, so

$$ \sum X_{1.23}^{2}=\sum X_{1.2} X_{1.23}=\sum X_{1} X_{1.23} $$

Property 3.
The sum of the products of two residuals is zero if all the subscripts (primary as well as secondary) of the one occur among the secondary subscripts of the other, e.g.,

$$ \sum X_{1.2} X_{3.12}=0 $$

## Variance of the Residuals

Let us consider the plane of regression of $X_{1}$ on $X_{2}, X_{3}, \ldots, X_{n}$:

$$ X_{1}=b_{12.34 \ldots n} X_{2}+b_{13.24 \ldots n} X_{3}+\ldots+b_{1 n .23 \ldots(n-1)} X_{n} $$

Since all the $X_{i}$'s are measured from their respective means, we have

$$ E\left(X_{i}\right)=0 ; \quad i=1,2,3, \ldots, n \quad \Rightarrow \quad E\left(X_{1.23 \ldots n}\right)=0 $$

Hence the variance of the residual is given by

$$ \begin{aligned} \sigma_{1.23 \ldots n}^{2} & =\frac{1}{N} \sum\left[X_{1.23 \ldots n}-E\left(X_{1.23 \ldots n}\right)\right]^{2}=\frac{1}{N} \sum X_{1.23 \ldots n}^{2} \\ & =\frac{1}{N} \sum X_{1.23 \ldots n} X_{1.23 \ldots n}=\frac{1}{N} \sum X_{1} X_{1.23 \ldots n} \quad[\text{by Property 2}] \\ & =\frac{1}{N} \sum X_{1}\left(X_{1}-b_{12.34 \ldots n} X_{2}-b_{13.24 \ldots n} X_{3}-\ldots-b_{1 n .23 \ldots(n-1)} X_{n}\right) \\ & =\sigma_{1}^{2}-b_{12.34 \ldots n} r_{12} \sigma_{1} \sigma_{2}-b_{13.24 \ldots n} r_{13} \sigma_{1} \sigma_{3}-\ldots-b_{1 n .23 \ldots(n-1)} r_{1 n} \sigma_{1} \sigma_{n} \end{aligned} $$

$$ \Rightarrow \quad \sigma_{1}^{2}-\sigma_{1.23 \ldots n}^{2}=b_{12.34 \ldots n} r_{12} \sigma_{1} \sigma_{2}+b_{13.24 \ldots n} r_{13} \sigma_{1} \sigma_{3}+\ldots+b_{1 n .23 \ldots(n-1)} r_{1 n} \sigma_{1} \sigma_{n} $$

Eliminating the b's, we get

$$ \left|\begin{array}{cccc} \sigma_{1}^{2}-\sigma_{1.23 \ldots n}^{2} & r_{12} \sigma_{1} \sigma_{2} & \ldots & r_{1 n} \sigma_{1} \sigma_{n} \\ r_{12} \sigma_{1} \sigma_{2} & \sigma_{2}^{2} & \ldots & r_{2 n} \sigma_{2} \sigma_{n} \\ \vdots & \vdots & & \vdots \\ r_{1 n} \sigma_{1} \sigma_{n} & r_{2 n} \sigma_{2} \sigma_{n} & \ldots & \sigma_{n}^{2} \end{array}\right|=0 $$

Dividing $R_{1}, R_{2}, \ldots, R_{n}$ by $\sigma_{1}, \sigma_{2}, \ldots, \sigma_{n}$ respectively, and also $C_{1}, C_{2}, \ldots, C_{n}$ by $\sigma_{1}, \sigma_{2}, \ldots, \sigma_{n}$ respectively, and splitting the determinant along its first column, we get

$$ \left|\begin{array}{cccc} 1 & r_{12} & \ldots & r_{1 n} \\ r_{12} & 1 & \ldots & r_{2 n} \\ \vdots & \vdots & & \vdots \\ r_{1 n} & r_{2 n} & \ldots & 1 \end{array}\right|-\left|\begin{array}{cccc} \frac{\sigma_{1.23 \ldots n}^{2}}{\sigma_{1}^{2}} & r_{12} & \ldots & r_{1 n} \\ 0 & 1 & \ldots & r_{2 n} \\ \vdots & \vdots & & \vdots \\ 0 & r_{2 n} & \ldots & 1 \end{array}\right|=0 $$

$$ \Rightarrow \quad \omega-\frac{\sigma_{1.23 \ldots n}^{2}}{\sigma_{1}^{2}}\, \omega_{11}=0 \quad \therefore \quad \sigma_{1.23 \ldots n}^{2}=\sigma_{1}^{2}\, \frac{\omega}{\omega_{11}} \quad \ldots(24) $$

Remarks: In a tri-variate distribution,

$$ \sigma_{1.23}^{2}=\sigma_{1}^{2}\, \frac{\omega}{\omega_{11}} $$

where $\omega$ and $\omega_{11}$ are defined in (11).

## Coefficient of Multiple Correlation:

In a tri-variate distribution in which each of the variables $X_{1}, X_{2}$ and $X_{3}$ has $N$ observations, the multiple correlation coefficient of $X_{1}$ on $X_{2}$ and $X_{3}$, usually denoted by $R_{1.23}$, is the simple correlation coefficient between $X_{1}$ and the joint effect of $X_{2}$ and $X_{3}$ on $X_{1}$. In other words, $R_{1.23}$ is the correlation coefficient between $X_{1}$ and its estimated value as given by the plane of regression of $X_{1}$ on $X_{2}$ and $X_{3}$, i.e.

$$ e_{1.23}=b_{12.3} X_{2}+b_{13.2} X_{3} $$

We have

$$ X_{1.23}=X_{1}-b_{12.3} X_{2}-b_{13.2} X_{3}=X_{1}-e_{1.23} \quad \Rightarrow \quad e_{1.23}=X_{1}-X_{1.23} $$

Since the $X_{i}$'s are measured from their respective means, we have $E\left(X_{1.23}\right)=0$ and $E\left(e_{1.23}\right)=0$. By definition,

$$ R_{1.23}=\frac{\operatorname{Cov}\left(X_{1}, e_{1.23}\right)}{\sqrt{V\left(X_{1}\right) V\left(e_{1.23}\right)}} $$

Now

$$ \operatorname{Cov}\left(X_{1}, e_{1.23}\right)=E\left[\left\{X_{1}-E\left(X_{1}\right)\right\}\left\{e_{1.23}-E\left(e_{1.23}\right)\right\}\right]=E\left(X_{1} e_{1.23}\right)=\sigma_{1}^{2}-\sigma_{1.23}^{2} $$

Also

$$ V\left(e_{1.23}\right)=\sigma_{1}^{2}-\sigma_{1.23}^{2} $$

$$ \therefore \quad R_{1.23}=\frac{\sigma_{1}^{2}-\sigma_{1.23}^{2}}{\sqrt{\sigma_{1}^{2}\left(\sigma_{1}^{2}-\sigma_{1.23}^{2}\right)}} $$
$1-\mathrm{R}_{1.23}^{2}=\frac{\sigma_{1.23}^{2}}{\sigma_{1}^{2}}$ Using (24), we get $1-\mathrm{R}_{1.23}^{2}=\frac{\omega}{\omega_{11}}$ Where, $\omega=\left|\begin{array}{ccc}1 & r_{12} & r_{13} \\ r_{21} & 1 & r_{23} \\ r_{31} & r_{32} & 1\end{array}\right|$ and $\omega_{11}=\left|\begin{array}{cc}1 & \mathrm{r}_{23} \\ \mathrm{r}_{32} & 1\end{array}\right|$ Hence from (27), we get $\mathrm{R}_{1.23}^{2}=1-\frac{\omega}{\omega_{11}}=\frac{\mathrm{r}_{12}^{2}+\mathrm{r}_{13}^{2}-2 \mathrm{r}_{12} \mathrm{r}_{13} \mathrm{r}_{23}}{1-\mathrm{r}_{23}^{2}}$ This formula expresses the multiple correlation coefficient in terms of the total correlation coefficients between the pairs of variables ## Generalisation: $\mathrm{R}_{1.23 \ldots \ldots . \mathrm{n}}=\frac{\sigma_{1}^{2}-\sigma_{1.23 \ldots \ldots \ldots . \mathrm{n}}^{2}}{\sqrt{\sigma_{1}^{2}\left(\sigma_{1}^{2}-\sigma_{1.23 \ldots \ldots . \mathrm{n}}^{2}\right)}}$ $\mathrm{R}_{1.23 \ldots \ldots . \mathrm{n}}^{2}=1-\frac{\sigma_{1.23 \ldots \ldots \mathrm{n}}^{2}}{\sigma_{1}^{2}}$ $\mathrm{R}_{1.23 \ldots . 
\mathrm{n}}^{2}=1-\frac{\omega}{\omega_{11}}$ where $\omega$ and $\omega_{11}$ are defined in (19).

## Coefficient of Partial Correlation:

The partial correlation coefficient between $X_1$ and $X_2$, usually denoted by $r_{12.3}$, is given by
$$ r_{12.3}=\frac{\operatorname{Cov}(X_{1.3}, X_{2.3})}{\sqrt{\operatorname{Var}(X_{1.3})\operatorname{Var}(X_{2.3})}} $$
We have
$$
\begin{aligned}
\operatorname{Cov}(X_{1.3}, X_{2.3}) &= \frac{1}{N}\sum X_{1.3} X_{2.3}=\frac{1}{N}\sum X_{1} X_{2.3} \\
&= \frac{1}{N}\sum X_{1}(X_{2}-b_{23} X_{3})=\frac{1}{N}\sum X_{1} X_{2}-b_{23}\,\frac{1}{N}\sum X_{1} X_{3} \\
&= r_{12}\sigma_{1}\sigma_{2}-\frac{r_{23}\sigma_{2}}{\sigma_{3}}\, r_{13}\sigma_{1}\sigma_{3} \\
&= \sigma_{1}\sigma_{2}\left(r_{12}-r_{13} r_{23}\right)
\end{aligned}
$$
$$
\begin{aligned}
\operatorname{Var}(X_{1.3}) &= \frac{1}{N}\sum X_{1.3}^{2}=\frac{1}{N}\sum X_{1.3} X_{1.3} \\
&= \frac{1}{N}\sum X_{1} X_{1.3}=\frac{1}{N}\sum X_{1}(X_{1}-b_{13} X_{3}) \\
&= \frac{1}{N}\sum X_{1}^{2}-b_{13}\cdot\frac{1}{N}\sum X_{1} X_{3} \\
&= \sigma_{1}^{2}-\frac{r_{13}\sigma_{1}}{\sigma_{3}}\, r_{13}\sigma_{1}\sigma_{3} \\
&= \sigma_{1}^{2}\left(1-r_{13}^{2}\right)
\end{aligned}
$$
Similarly, we can get $\operatorname{Var}(X_{2.3})=\sigma_{2}^{2}\left(1-r_{23}^{2}\right)$.

Hence,
$$ r_{12.3}=\frac{\sigma_{1}\sigma_{2}\left(r_{12}-r_{13} r_{23}\right)}{\sqrt{\sigma_{1}^{2}\left(1-r_{13}^{2}\right)\sigma_{2}^{2}\left(1-r_{23}^{2}\right)}}=\frac{r_{12}-r_{13} r_{23}}{\sqrt{\left(1-r_{13}^{2}\right)\left(1-r_{23}^{2}\right)}} $$

Example: From the data relating to the yield of grain $(X_1)$, fertilizer used $(X_2)$ and weather condition $(X_3)$ for 18 crops the following correlation coefficients were obtained:
$$ r_{12}=0.77, \quad r_{13}=0.72, \quad r_{23}=0.52 $$
Find the partial correlation coefficient $r_{12.3}$ and the multiple correlation coefficient $R_{1.23}$.

Solution:
$$
\begin{aligned}
r_{12.3} &= \frac{r_{12}-r_{13} r_{23}}{\sqrt{\left(1-r_{13}^{2}\right)\left(1-r_{23}^{2}\right)}}=\frac{0.77-0.72\times 0.52}{\sqrt{\left(1-(0.72)^{2}\right)\left(1-(0.52)^{2}\right)}}=0.67 \\
R_{1.23}^{2} &= \frac{r_{12}^{2}+r_{13}^{2}-2 r_{12} r_{13} r_{23}}{1-r_{23}^{2}}=\frac{(0.77)^{2}+(0.72)^{2}-2(0.77)(0.72)(0.52)}{1-(0.52)^{2}}=0.7329 \\
R_{1.23} &= +0.8561
\end{aligned}
$$

## Central Limit Theorem

If $X_1, X_2, \ldots, X_n$ are independent and identically distributed random variables with
$$ E(X_i)=\mu_1, \quad V(X_i)=\sigma_1^{2}, \quad i=1,2,\ldots,n $$
then the sum $S_n=X_1+X_2+\ldots+X_n$ is asymptotically normal with mean $\mu=n\mu_1$ and variance $\sigma^{2}=n\sigma_1^{2}$.

Here we make the following assumptions:
(i) The variables are independent and identically distributed.
(ii) $E(X_i^{2})$ exists for $i=1,2,\ldots$

Proof: Let $M_1(t)$ denote the M.G.F. of each of the deviations $(X_i-\mu_1)$ and $M(t)$ denote the M.G.F.
of the standard variate
$$ Z=\left(S_n-\mu\right)/\sigma $$
Since $\mu_1^{\prime}=E(X_i-\mu_1)=0$ and $\mu_2^{\prime}=E(X_i-\mu_1)^{2}=\sigma_1^{2}$, we have
$$
\begin{aligned}
M_{1}(t) &= 1+\mu_1^{\prime} t+\mu_2^{\prime}\frac{t^{2}}{2}+\mu_3^{\prime}\frac{t^{3}}{6}+\ldots \\
&= 1+\frac{t^{2}}{2}\sigma_1^{2}+O\left(t^{3}\right)
\end{aligned}
$$
where $O(t^{3})$ contains terms of order $t^{3}$ and higher powers of $t$.

We have
$$ Z=\frac{S_n-\mu}{\sigma}=\frac{\left(X_1+X_2+\ldots+X_n\right)-n\mu_1}{\sigma}=\sum_{i=1}^{n}\frac{X_i-\mu_1}{\sigma} $$
and since the $X_i$'s are independent, we get
$$
\begin{aligned}
M_{Z}(t) &= M_{\sum_{i=1}^{n}(X_i-\mu_1)/\sigma}(t)=M_{\sum_{i=1}^{n}(X_i-\mu_1)}(t/\sigma) \\
&= \prod_{i=1}^{n} M_{(X_i-\mu_1)}(t/\sigma)=\left[M_{1}(t/\sigma)\right]^{n}=\left[M_{1}\left(t/\sqrt{n}\,\sigma_1\right)\right]^{n} \\
&= \left[1+\frac{t^{2}}{2n}+O\left(n^{-3/2}\right)\right]^{n}
\end{aligned}
$$
For every fixed $t$, the term $O(n^{-3/2})\rightarrow 0$ as $n\rightarrow\infty$. Therefore, as $n\rightarrow\infty$, we get
$$ \lim_{n\rightarrow\infty} M_{Z}(t)=\lim_{n\rightarrow\infty}\left[1+\frac{t^{2}}{2n}+O\left(n^{-3/2}\right)\right]^{n}=\exp\left(\frac{t^{2}}{2}\right)=e^{t^{2}/2} $$
which is the M.G.F.
of the standard normal variate. Hence by the uniqueness theorem of M.G.F.'s, $Z=\left(S_n-\mu\right)/\sigma$ is asymptotically $N(0,1)$, i.e. $S_n=X_1+X_2+\ldots+X_n$ is asymptotically $N\left(\mu,\sigma^{2}\right)$, where $\mu=n\mu_1$ and $\sigma^{2}=n\sigma_1^{2}$.

## Weak Law of Large Numbers:

Let $X_1, X_2, \ldots, X_n$ be a sequence of random variables and $\mu_1, \mu_2, \ldots, \mu_n$ be their respective expectations, and let
$$ B_n=\operatorname{Var}\left(X_1+X_2+\ldots+X_n\right)<\infty $$
Then
$$ P\left\{\left|\frac{X_1+X_2+\ldots+X_n}{n}-\frac{\mu_1+\mu_2+\ldots+\mu_n}{n}\right| \leq \varepsilon\right\} \geq 1-\eta $$
for all $n>n_0$, where $\varepsilon$ and $\eta$ are arbitrarily small positive numbers, provided
$$ \lim_{n\rightarrow\infty}\frac{B_n}{n^{2}}=0 $$
Proof.
Applying Chebyshev's inequality to the random variable $\left(X_1+X_2+\ldots+X_n\right)/n$, we get for any $\varepsilon>0$,
$$ P\left\{\left|\frac{X_1+X_2+\ldots+X_n}{n}-E\left(\frac{X_1+X_2+\ldots+X_n}{n}\right)\right| \leq \varepsilon\right\} \geq 1-\frac{B_n}{n^{2}\varepsilon^{2}} $$
$$ \left[\text{since } \operatorname{Var}\left(\frac{X_1+X_2+\ldots+X_n}{n}\right)=\frac{1}{n^{2}}\operatorname{Var}\left(X_1+X_2+\ldots+X_n\right)=\frac{B_n}{n^{2}}\right] $$
$$ \Rightarrow P\left\{\left|\frac{X_1+X_2+\ldots+X_n}{n}-\frac{\mu_1+\mu_2+\ldots+\mu_n}{n}\right|<\varepsilon\right\} \geq 1-\frac{B_n}{n^{2}\varepsilon^{2}} $$
Since $B_n/n^{2}\rightarrow 0$ by hypothesis, $\frac{B_n}{n^{2}\varepsilon^{2}}\rightarrow 0$ as $n$ becomes indefinitely large. Thus, having chosen two arbitrarily small positive numbers $\varepsilon$ and $\eta$, a number $n_0$ can be found so that the inequality
$$ \frac{B_n}{n^{2}\varepsilon^{2}}<\eta $$
holds for $n>n_0$. Therefore, we shall have
$$ P\left\{\left|\frac{X_1+X_2+\ldots+X_n}{n}-\frac{\mu_1+\mu_2+\ldots+\mu_n}{n}\right| \leq \varepsilon\right\} \geq 1-\eta. $$

## Chi-Square Distribution

Definition. If $X\sim N\left(\mu,\sigma^{2}\right)$, then $Z=\frac{X-\mu}{\sigma}\sim N(0,1)$, and $Z^{2}=\left(\frac{X-\mu}{\sigma}\right)^{2}$ is the chi-square variate with 1 d.f.

In general, if $X_i\,(i=1,2,\ldots,n)$ are $n$ independent normal variates with mean $\mu_i$ and variance $\sigma_i^{2}\,(i=1,2,\ldots,n)$, then
$$ \chi^{2}=\sum_{i=1}^{n}\left(\frac{X_i-\mu_i}{\sigma_i}\right)^{2} $$
is the chi-square variate with $n$ d.f.

Chi-square distribution: if $X_i\,(i=1,2,\ldots,n)$ are independent $N\left(\mu_i,\sigma_i^{2}\right)$, we want the distribution of
$$ \chi^{2}=\sum_{i=1}^{n}\left(\frac{X_i-\mu_i}{\sigma_i}\right)^{2}=\sum_{i=1}^{n} U_i^{2} $$
where $U_i=\frac{X_i-\mu_i}{\sigma_i}$.

## Probability density function

$$ dP\left(\chi^{2}\right)=\frac{(1/2)^{n/2}}{\Gamma(n/2)}\left[\exp\left(-\frac{1}{2}\chi^{2}\right)\right]\left(\chi^{2}\right)^{(n/2)-1} d\chi^{2}=\frac{1}{2^{n/2}\Gamma(n/2)}\left[\exp\left(-\frac{\chi^{2}}{2}\right)\right]\left(\chi^{2}\right)^{(n/2)-1} d\chi^{2}, \quad 0 \leq \chi^{2}<\infty $$
which is the required probability density function of the chi-square distribution with $n$ degrees of freedom.

## Students t distribution

Definition. Let $x_i\,(i=1,2,3,\ldots,n)$ be a random sample of size $n$ from a normal population with mean $\mu$ and variance $\sigma^{2}$. Then Student's $t$ is defined by the statistic
$$ t=\frac{\bar{x}-\mu}{s}\sqrt{n} $$
where $\bar{x}$ is the sample mean and $s^{2}$ is an estimate of the population variance $\sigma^{2}$.
It follows Student's $t$-distribution with $v=(n-1)$ degrees of freedom, with probability density function
$$ f(t)=\frac{1}{\sqrt{v}\, B\left(\frac{1}{2},\frac{v}{2}\right)\left\{1+\frac{t^{2}}{v}\right\}^{\frac{v+1}{2}}}, \quad -\infty<t<\infty. $$

## Moments

The probability curve of $t$ is symmetrical about $t=0$ and therefore
$$ \text{Mode}=\text{mean}=\text{median}=0 $$
All odd order moments are zero, i.e. $\mu_{2r+1}=0$. Even order moments are given by
$$ \mu_{2r}=\int_{-\infty}^{\infty} t^{2r} f(t)\, dt=\frac{2}{\sqrt{v}\, B\left(\frac{1}{2},\frac{v}{2}\right)}\int_{0}^{\infty}\frac{t^{2r}}{\left\{1+\frac{t^{2}}{v}\right\}^{\frac{v+1}{2}}}\, dt=\frac{v^{r}\, B\left(r+\frac{1}{2},\frac{v}{2}-r\right)}{B\left(\frac{1}{2},\frac{v}{2}\right)} $$
Put $r=1$; then $\mu_{2}=\frac{v}{v-2},\; v>2$, which is the variance.

## F-distribution

If $U$ and $V$ are two independent $\chi^{2}$ variates with $v_1$ and $v_2$ degrees of freedom, respectively, then the $F$-statistic is defined by
$$ F=\frac{\left(\frac{U}{v_1}\right)}{\left(\frac{V}{v_2}\right)}, \quad \text{as } U\sim\chi_{v_1}^{2},\; V\sim\chi_{v_2}^{2} $$
In other words, $F$ is defined as the ratio of two independent chi-square variates divided by their respective degrees of freedom, and it follows Snedecor's $F$-distribution with $\left(v_1, v_2\right)$ degrees of freedom, with density function given by
$$ f(F)=\frac{\left(\frac{v_1}{v_2}\right)^{\frac{v_1}{2}}}{B\left(\frac{v_1}{2},\frac{v_2}{2}\right)}\cdot\frac{F^{\frac{v_1}{2}-1}}{\left\{1+\frac{v_1}{v_2} F\right\}^{\frac{v_1+v_2}{2}}}, \quad 0 \leq F<\infty $$

## Moments.
The $r$th moment about the origin is given by
$$
\begin{aligned}
\mu_r^{\prime}=E\left(F^{r}\right)=\int_{0}^{\infty} F^{r} f(F)\, dF &= \int_{0}^{\infty}\frac{F^{r}\left(\frac{v_1}{v_2}\right)^{\frac{v_1}{2}}}{B\left(\frac{v_1}{2},\frac{v_2}{2}\right)}\cdot\frac{F^{\frac{v_1}{2}-1}}{\left\{1+\frac{v_1}{v_2} F\right\}^{\frac{v_1+v_2}{2}}}\, dF \\
\mu_r^{\prime} &= \frac{\left(\frac{v_2}{v_1}\right)^{r}}{B\left(\frac{v_1}{2},\frac{v_2}{2}\right)}\, B\left(r+\frac{v_1}{2},\frac{v_2}{2}-r\right)
\end{aligned}
$$
Putting $r=1$, we get mean $=\mu_1^{\prime}=\frac{v_2}{v_2-2},\; v_2>2$.

Further, $\mu_2^{\prime}=\frac{v_2^{2}\left(v_1+2\right)}{v_1\left(v_2-2\right)\left(v_2-4\right)},\; v_2>4$.

Now the variance is obtained from
$$ \mu_2=\mu_2^{\prime}-\mu_1^{\prime 2}=\frac{v_2^{2}}{\left(v_2-2\right)}\left[\frac{\left(v_1+2\right)\left(v_2-2\right)-v_1\left(v_2-4\right)}{v_1\left(v_2-4\right)\left(v_2-2\right)}\right]=\frac{2 v_2^{2}\left(v_1+v_2-2\right)}{v_1\left(v_2-2\right)^{2}\left(v_2-4\right)} $$
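The worked correlation example above can be checked numerically. The short Python sketch below is illustrative only, using the formulas derived in this section; the printed values are rounded, so they may differ in the last digit from the figures quoted in the text:

```python
import math

def partial_r(r12, r13, r23):
    # r_{12.3} = (r12 - r13*r23) / sqrt((1 - r13^2)(1 - r23^2))
    return (r12 - r13 * r23) / math.sqrt((1 - r13**2) * (1 - r23**2))

def multiple_R(r12, r13, r23):
    # R_{1.23}^2 = (r12^2 + r13^2 - 2*r12*r13*r23) / (1 - r23^2)
    R2 = (r12**2 + r13**2 - 2 * r12 * r13 * r23) / (1 - r23**2)
    return math.sqrt(R2)

r12, r13, r23 = 0.77, 0.72, 0.52
print(round(partial_r(r12, r13, r23), 2))   # partial correlation, about 0.67
print(round(multiple_R(r12, r13, r23), 2))  # multiple correlation, about 0.86
```

Plugging in other correlation triples is a quick way to see that the partial correlation can differ substantially from the raw $r_{12}$ once the third variable is held fixed.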
Textbooks
D-module In mathematics, a D-module is a module over a ring D of differential operators. The major interest of such D-modules is as an approach to the theory of linear partial differential equations. Since around 1970, D-module theory has been built up, mainly as a response to the ideas of Mikio Sato on algebraic analysis, and expanding on the work of Sato and Joseph Bernstein on the Bernstein–Sato polynomial. Early major results were the Kashiwara constructibility theorem and Kashiwara index theorem of Masaki Kashiwara. The methods of D-module theory have always been drawn from sheaf theory and other techniques with inspiration from the work of Alexander Grothendieck in algebraic geometry. The approach is global in character, and differs from the functional analysis techniques traditionally used to study differential operators. The strongest results are obtained for over-determined systems (holonomic systems), and on the characteristic variety cut out by the symbols, which in the good case is a Lagrangian submanifold of the cotangent bundle of maximal dimension (involutive systems). The techniques were taken up from the side of the Grothendieck school by Zoghman Mebkhout, who obtained a general, derived category version of the Riemann–Hilbert correspondence in all dimensions. Introduction: modules over the Weyl algebra The first case of algebraic D-modules are modules over the Weyl algebra An(K) over a field K of characteristic zero. It is the algebra consisting of polynomials in the following variables x1, ..., xn, ∂1, ..., ∂n. where the variables xi and ∂j separately commute with each other, and xi and ∂j commute for i ≠ j, but the commutator satisfies the relation [∂i, xi] = ∂ixi − xi∂i = 1. For any polynomial f(x1, ..., xn), this implies the relation [∂i, f] = ∂f / ∂xi, thereby relating the Weyl algebra to differential equations. An (algebraic) D-module is, by definition, a left module over the ring An(K). 
Examples for D-modules include the Weyl algebra itself (acting on itself by left multiplication), the (commutative) polynomial ring K[x1, ..., xn], where xi acts by multiplication and ∂j acts by partial differentiation with respect to xj and, in a similar vein, the ring ${\mathcal {O}}(\mathbf {C} ^{n})$ of holomorphic functions on Cn (functions of n complex variables.) Given some differential operator P = an(x) ∂n + ... + a1(x) ∂1 + a0(x), where x is a complex variable, ai(x) are polynomials, the quotient module M = A1(C)/A1(C)P is closely linked to space of solutions of the differential equation P f = 0, where f is some holomorphic function in C, say. The vector space consisting of the solutions of that equation is given by the space of homomorphisms of D-modules $\mathrm {Hom} (M,{\mathcal {O}}(\mathbf {C} ))$. D-modules on algebraic varieties The general theory of D-modules is developed on a smooth algebraic variety X defined over an algebraically closed field K of characteristic zero, such as K = C. The sheaf of differential operators DX is defined to be the OX-algebra generated by the vector fields on X, interpreted as derivations. A (left) DX-module M is an OX-module with a left action of DX on it. Giving such an action is equivalent to specifying a K-linear map $\nabla :D_{X}\rightarrow \operatorname {End} _{K}(M),v\mapsto \nabla _{v}$ satisfying $\nabla _{fv}(m)=f\,\nabla _{v}(m)$ $\nabla _{v}(fm)=v(f)m+f\,\nabla _{v}(m)$ (Leibniz rule) $\nabla _{[v,w]}(m)=[\nabla _{v},\nabla _{w}](m)$ Here f is a regular function on X, v and w are vector fields, m a local section of M, [−, −] denotes the commutator. Therefore, if M is in addition a locally free OX-module, giving M a D-module structure is nothing else than equipping the vector bundle associated to M with a flat (or integrable) connection. As the ring DX is noncommutative, left and right D-modules have to be distinguished. 
However, the two notions can be exchanged, since there is an equivalence of categories between both types of modules, given by mapping a left module M to the tensor product M ⊗ ΩX, where ΩX is the line bundle given by the highest exterior power of differential 1-forms on X. This bundle has a natural right action determined by ω ⋅ v := − Liev (ω), where v is a differential operator of order one, that is to say a vector field, ω a n-form (n = dim X), and Lie denotes the Lie derivative. Locally, after choosing some system of coordinates x1, ..., xn (n = dim X) on X, which determine a basis ∂1, ..., ∂n of the tangent space of X, sections of DX can be uniquely represented as expressions $\sum f_{i_{1},\dots ,i_{n}}\partial _{1}^{i_{1}}\cdots \partial _{n}^{i_{n}}$, where the $f_{i_{1},\dots ,i_{n}}$ are regular functions on X. In particular, when X is the n-dimensional affine space, this DX is the Weyl algebra in n variables. Many basic properties of D-modules are local and parallel the situation of coherent sheaves. This builds on the fact that DX is a locally free sheaf of OX-modules, albeit of infinite rank, as the above-mentioned OX-basis shows. A DX-module that is coherent as an OX-module can be shown to be necessarily locally free (of finite rank). Functoriality D-modules on different algebraic varieties are connected by pullback and pushforward functors comparable to the ones for coherent sheaves. For a map f: X → Y of smooth varieties, the definitions are this: DX→Y := OX ⊗f−1(OY) f−1(DY) This is equipped with a left DX action in a way that emulates the chain rule, and with the natural right action of f−1(DY). The pullback is defined as f∗(M) := DX→Y ⊗f−1(DY) f−1(M). Here M is a left DY-module, while its pullback is a left module over X. This functor is right exact, its left derived functor is denoted Lf∗. Conversely, for a right DX-module N, f∗(N) := f∗(N ⊗DX DX→Y) is a right DY-module. 
Since this mixes the right exact tensor product with the left exact pushforward, it is common to set instead f∗(N) := Rf∗(N ⊗LDX DX→Y). Because of this, much of the theory of D-modules is developed using the full power of homological algebra, in particular derived categories. Holonomic modules Holonomic modules over the Weyl algebra It can be shown that the Weyl algebra is a (left and right) Noetherian ring. Moreover, it is simple, that is to say, its only two-sided ideals are the zero ideal and the whole ring. These properties make the study of D-modules manageable. Notably, standard notions from commutative algebra such as Hilbert polynomial, multiplicity and length of modules carry over to D-modules. More precisely, DX is equipped with the Bernstein filtration, that is, the filtration such that FpAn(K) consists of K-linear combinations of differential operators xα∂β with |α| + |β| ≤ p (using multiindex notation). The associated graded ring is seen to be isomorphic to the polynomial ring in 2n indeterminates. In particular it is commutative. Finitely generated D-modules M are endowed with so-called "good" filtrations F∗M, which are ones compatible with F∗An(K), essentially parallel to the situation of the Artin–Rees lemma. The Hilbert polynomial is defined to be the numerical polynomial that agrees with the function n ↦ dimK FnM for large n. The dimension d(M) of an An(K)-module M is defined to be the degree of the Hilbert polynomial. It is bounded by the Bernstein inequality n ≤ d(M) ≤ 2n. A module whose dimension attains the least possible value, n, is called holonomic. The A1(K)-module M = A1(K)/A1(K)P (see above) is holonomic for any nonzero differential operator P, but a similar claim for higher-dimensional Weyl algebras does not hold. General definition As mentioned above, modules over the Weyl algebra correspond to D-modules on affine space.
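For the Bernstein filtration just described, the dimension of FpAn(K) can be computed directly: since the monomials xα∂β with |α| + |β| ≤ p form a K-basis, dimK FpAn(K) equals the number of monomials of total degree at most p in 2n variables, namely C(p + 2n, 2n). The brute-force check below is my own illustration, not from the source:

```python
from itertools import product
from math import comb

def dim_F_p(n, p):
    # Count multi-indices (alpha, beta) in N^(2n) with |alpha| + |beta| <= p.
    # Each exponent is at most p, so enumerating range(p + 1) per slot suffices.
    return sum(1 for e in product(range(p + 1), repeat=2 * n) if sum(e) <= p)

for n, p in [(1, 3), (2, 2), (2, 4)]:
    print(dim_F_p(n, p) == comb(p + 2 * n, 2 * n))  # True each time
```

The closed form C(p + 2n, 2n) is a polynomial of degree 2n in p, which is the source of the upper bound d(M) ≤ 2n in the Bernstein inequality.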
The Bernstein filtration not being available on DX for general varieties X, the definition is generalized to arbitrary affine smooth varieties X by means of order filtration on DX, defined by the order of differential operators. The associated graded ring gr DX is given by regular functions on the cotangent bundle T∗X. The characteristic variety is defined to be the subvariety of the cotangent bundle cut out by the radical of the annihilator of gr M, where again M is equipped with a suitable filtration (with respect to the order filtration on DX). As usual, the affine construction then glues to arbitrary varieties. The Bernstein inequality continues to hold for any (smooth) variety X. While the upper bound is an immediate consequence of the above interpretation of gr DX in terms of the cotangent bundle, the lower bound is more subtle. Properties and characterizations Holonomic modules have a tendency to behave like finite-dimensional vector spaces. For example, their length is finite. Also, M is holonomic if and only if all cohomology groups of the complex Li∗(M) are finite-dimensional K-vector spaces, where i is the closed immersion of any point of X. For any D-module M, the dual module is defined by $\mathrm {D} (M):={\mathcal {R}}\operatorname {Hom} (M,D_{X})\otimes \Omega _{X}^{-1}[\dim X].$ Holonomic modules can also be characterized by a homological condition: M is holonomic if and only if D(M) is concentrated (seen as an object in the derived category of D-modules) in degree 0. This fact is a first glimpse of Verdier duality and the Riemann–Hilbert correspondence. It is proven by extending the homological study of regular rings (especially what is related to global homological dimension) to the filtered ring DX. Another characterization of holonomic modules is via symplectic geometry. The characteristic variety Ch(M) of any D-module M is, seen as a subvariety of the cotangent bundle T∗X of X, an involutive variety. 
The module is holonomic if and only if Ch(M) is Lagrangian. Applications One of the early applications of holonomic D-modules was the Bernstein–Sato polynomial. Kazhdan–Lusztig conjecture The Kazhdan–Lusztig conjecture was proved using D-modules. Riemann–Hilbert correspondence The Riemann–Hilbert correspondence establishes a link between certain D-modules and constructible sheaves. As such, it provided a motivation for introducing perverse sheaves. Geometric representation theory D-modules are also applied in geometric representation theory. A main result in this area is the Beilinson–Bernstein localization. It relates D-modules on flag varieties G/B to representations of the Lie algebra ${\mathfrak {g}}$ of a reductive group G. D-modules are also crucial in the formulation of the geometric Langlands program. References • Beilinson, A. A.; Bernstein, Joseph (1981), "Localisation de g-modules", Comptes Rendus de l'Académie des Sciences, Série I, 292 (1): 15–18, ISSN 0249-6291, MR 0610137 • Björk, J.-E. (1979), Rings of differential operators, North-Holland Mathematical Library, vol. 21, Amsterdam: North-Holland, ISBN 978-0-444-85292-2, MR 0549189 • Brylinski, Jean-Luc; Kashiwara, Masaki (1981), "Kazhdan–Lusztig conjecture and holonomic systems", Inventiones Mathematicae, 64 (3): 387–410, Bibcode:1981InMat..64..387B, doi:10.1007/BF01389272, ISSN 0020-9910, MR 0632980, S2CID 18403883 • Coutinho, S. C. (1995), A primer of algebraic D-modules, London Mathematical Society Student Texts, vol. 33, Cambridge University Press, ISBN 978-0-521-55119-9, MR 1356713 • Borel, Armand, ed. (1987), Algebraic D-Modules, Perspectives in Mathematics, vol. 2, Boston, MA: Academic Press, ISBN 978-0-12-117740-9 • M.G.M. van Doorn (2001) [1994], "D-module", Encyclopedia of Mathematics, EMS Press • Hotta, Ryoshi; Takeuchi, Kiyoshi; Tanisaki, Toshiyuki (2008), D-modules, perverse sheaves, and representation theory (PDF), Progress in Mathematics, vol. 
236, Boston, MA: Birkhäuser Boston, ISBN 978-0-8176-4363-8, MR 2357361, archived from the original (PDF) on 2016-03-03, retrieved 2009-12-10 External links • Bernstein, Joseph, Algebraic theory of D-modules (PDF) • Gaitsgory, Dennis, Lectures on Geometric Representation Theory (PDF), archived from the original (PDF) on 2015-03-26, retrieved 2011-12-14 • Milicic, Dragan, Lectures on the Algebraic Theory of D-Modules
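The correspondence noted in the introduction, between the quotient module M = A1(C)/A1(C)P and the solution space of P f = 0, can be made concrete numerically. The sketch below is an illustration under an assumed choice of operator (P = ∂² + 1, whose solution space is spanned by sin and cos, so Hom(M, O) is two-dimensional); it merely verifies the two basis solutions by finite differences:

```python
import math

# Hypothetical operator P = d^2/dx^2 + 1. Solutions of P f = 0 span a
# 2-dimensional space (the order of P), with basis {sin, cos}.
def apply_P(f, x, h=1e-5):
    # second-order central difference for f'', plus f itself
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2 + f(x)

for f in (math.sin, math.cos):
    residual = max(abs(apply_P(f, x)) for x in (0.0, 0.7, 1.3, 2.5))
    print(residual < 1e-3)  # True: both lie (numerically) in the kernel of P
```

A function outside the solution space, such as exp, gives a residual of order 1 under the same check, which makes the two-dimensionality of the kernel easy to see experimentally.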
Wikipedia
\begin{definition}[Definition:Isomorphism (Abstract Algebra)/Monoid Isomorphism] Let $\struct {S, \circ}$ and $\struct {T, *}$ be monoids. Let $\phi: S \to T$ be a (monoid) homomorphism. Then $\phi$ is a monoid isomorphism {{iff}} $\phi$ is a bijection. That is, $\phi$ is a monoid isomorphism {{iff}} $\phi$ is both a monomorphism and an epimorphism. If $S$ is isomorphic to $T$, then the notation $S \cong T$ can be used (although notation varies). \end{definition}
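As a toy illustration of the definition (my own example, not part of the source): $\struct {\set {0, 1}, \oplus}$ under XOR and $\struct {\set {1, -1}, \times}$ are monoids, and $\phi \paren 0 = 1$, $\phi \paren 1 = -1$ is a bijective homomorphism, hence a monoid isomorphism. A short exhaustive check in Python:

```python
S = [0, 1]                      # monoid (S, xor), identity 0
T = [1, -1]                     # monoid (T, *), identity 1
phi = {0: 1, 1: -1}

# homomorphism: phi(a xor b) == phi(a) * phi(b) for all a, b in S
homo = all(phi[a ^ b] == phi[a] * phi[b] for a in S for b in S)
# bijection: phi is injective and its image is all of T
bij = sorted(set(phi.values())) == sorted(T)
# the identity of S maps to the identity of T
ident = phi[0] == 1
print(homo and bij and ident)   # True: phi is a monoid isomorphism
```

Since both monoids have only two elements, the exhaustive check over all pairs is complete, and it confirms $S \cong T$ in the sense defined above.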
ProofWiki
(The Cayley-Hamilton Theorem) If $p(t)$ is the characteristic polynomial for an $n\times n$ matrix $A$, then the matrix $p(A)$ is the $n \times n$ zero matrix. Let $A=\begin{bmatrix} 1& 1 \\ 1& 3 \end{bmatrix}$. The characteristic polynomial $p(t)$ of $A$ is
\begin{align*}
p(t)&=\det(A-tI)=\det\begin{bmatrix} 1-t& 1 \\ 1& 3-t \end{bmatrix}\\
&=(1-t)(3-t)-1=t^2-4t+2.
\end{align*}
Then the Cayley-Hamilton theorem says that the matrix $p(A)=A^2-4A+2I$ is the $2\times 2$ zero matrix. In fact, we can directly check this:
\begin{align*}
p(A)&=A^2-4A+2I=\begin{bmatrix} 1& 1 \\ 1& 3 \end{bmatrix}\begin{bmatrix} 1& 1 \\ 1& 3 \end{bmatrix}-4\begin{bmatrix} 1& 1 \\ 1& 3 \end{bmatrix}+2\begin{bmatrix} 1& 0\\ 0& 1 \end{bmatrix}\\[6pt]
&=\begin{bmatrix} 2& 4 \\ 4& 10 \end{bmatrix}+\begin{bmatrix} -4& -4 \\ -4& -12 \end{bmatrix}+\begin{bmatrix} 2& 0 \\ 0& 2 \end{bmatrix}=\begin{bmatrix} 0& 0 \\ 0& 0 \end{bmatrix}.
\end{align*}
Let $T=\begin{bmatrix} 1 & 0 & 2 \\ 0 &1 &1 \\ 0 & 0 & 2 \end{bmatrix}$. Calculate and simplify the expression $-T^3+4T^2+5T-2I$, where $I$ is the $3\times 3$ identity matrix. (The Ohio State University) Find the inverse matrix of the matrix $A=\begin{bmatrix} \end{bmatrix}$ using the Cayley–Hamilton theorem. Find the inverse matrix of the $3\times 3$ matrix $A=\begin{bmatrix} 7 & 2 & -2 \\ -6 &-1 &2 \\ 6 & 2 & -1 \end{bmatrix}$ using the Cayley-Hamilton theorem. \[A=\begin{bmatrix} 1 & -1\\ \end{bmatrix}.\] Find the eigenvalues and the eigenvectors of the matrix \[B=A^4-3A^3+3A^2-2A+8E.\] (Nagoya University) Let $A, B$ be complex $2\times 2$ matrices satisfying the relation $A=AB-BA$. Prove that $A^2=O$, where $O$ is the $2\times 2$ zero matrix. In each of the following cases, can we conclude that $A$ is invertible? If so, find an expression for $A^{-1}$ as a linear combination of positive powers of $A$. If $A$ is not invertible, explain why not. (a) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=0$. (b) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=-1$. Suppose that $A$ is a $2\times 2$ matrix that has eigenvalues $-1$ and $3$.
Then for each positive integer $n$ find $a_n$ and $b_n$ such that $A^{n+1}=a_nA+b_nI$, where $I$ is the $2\times 2$ identity matrix. Suppose that the $2 \times 2$ matrix $A$ has eigenvalues $4$ and $-2$. For each integer $n \geq 1$, there are real numbers $b_n , c_n$ which satisfy the relation $A^{n} = b_n A + c_n I$, where $I$ is the identity matrix. Find $b_n$ and $c_n$ for $2 \leq n \leq 5$, and then find a recursive relationship to find $b_n, c_n$ for every $n \geq 1$. Let $A$ be an $n\times n$ complex matrix. Let $p(x)=\det(xI-A)$ be the characteristic polynomial of $A$ and write it as \[p(x)=x^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0,\] where $a_i$ are real numbers. Let $C$ be the companion matrix of the polynomial $p(x)$ given by \[C=\begin{bmatrix} 0 & 0 & \dots & 0 &-a_0 \\ 1 & 0 & \dots & 0 & -a_1 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \dots & 1 & -a_{n-1} \end{bmatrix}= [\mathbf{e}_2, \mathbf{e}_3, \dots, \mathbf{e}_n, -\mathbf{a}],\] where $\mathbf{e}_i$ is the unit vector in $\C^n$ whose $i$-th entry is $1$ and zero elsewhere, and the vector $\mathbf{a}$ is defined by $\mathbf{a}=\begin{bmatrix} a_0 \\ \vdots \\ a_{n-1} \end{bmatrix}$. Then prove that the following two statements are equivalent. (1) There exists a vector $\mathbf{v}\in \C^n$ such that \[\mathbf{v}, A\mathbf{v}, A^2\mathbf{v}, \dots, A^{n-1}\mathbf{v}\] form a basis of $\C^n$. (2) There exists an invertible matrix $S$ such that $S^{-1}AS=C$. (Namely, $A$ is similar to the companion matrix of its characteristic polynomial.) Let $n>1$ be a positive integer. Let $V=M_{n\times n}(\C)$ be the vector space over the complex numbers $\C$ consisting of all complex $n\times n$ matrices. The dimension of $V$ is $n^2$. Let $A \in V$ and consider the set \[S_A=\{I=A^0, A, A^2, \dots, A^{n^2-1}\}\] of $n^2$ elements. Prove that the set $S_A$ cannot be a basis of the vector space $V$ for any $A\in V$. Let $A$ be a $3\times 3$ real orthogonal matrix with $\det(A)=1$. 
(a) If $\frac{-1+\sqrt{3}i}{2}$ is one of the eigenvalues of $A$, then find all the eigenvalues of $A$.
(b) Let $A^{100}=aA^2+bA+cI$, where $I$ is the $3\times 3$ identity matrix. Using the Cayley-Hamilton theorem, determine $a, b, c$.
(Kyushu University)

Let $A$ and $B$ be $2\times 2$ matrices such that $(AB)^2=O$, where $O$ is the $2\times 2$ zero matrix. Determine whether $(BA)^2$ must be $O$ as well. If so, prove it. If not, give a counterexample.

Linear Algebra Version 0 (11/15/2017)
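As a quick computational companion to the worked example above (an illustrative sketch, not part of the problem set), the identity $p(A)=A^2-4A+2I=O$ can be checked numerically; rearranged as $A(4I-A)=2I$, it also yields the inverse $A^{-1}=\tfrac{1}{2}(4I-A)$:

```python
# Verify the Cayley-Hamilton identity p(A) = A^2 - 4A + 2I = O for
# A = [[1, 1], [1, 3]], then use it to recover A^{-1} = (4I - A)/2.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

A = [[1, 1], [1, 3]]
I = [[1, 0], [0, 1]]

# p(A) = A^2 - 4A + 2I should be the zero matrix
pA = mat_add(mat_add(mat_mul(A, A), mat_scale(-4, A)), mat_scale(2, I))
assert pA == [[0, 0], [0, 0]]

# From A^2 - 4A + 2I = O we get A(4I - A) = 2I, so A^{-1} = (4I - A)/2
A_inv = mat_scale(0.5, mat_add(mat_scale(4, I), mat_scale(-1, A)))
assert mat_mul(A, A_inv) == I
```

The same rearrangement is exactly the trick asked for in the inverse-matrix problems above: whenever the constant term of $p(t)$ is nonzero, the Cayley–Hamilton identity expresses $A^{-1}$ as a polynomial in $A$.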
Axel Sophus Guldberg

Axel Sophus Guldberg (2 November 1838 – 28 February 1913) was a Norwegian mathematician.[1][2][3]

Born: 2 November 1838, Christiania, Norway
Died: 28 February 1913 (aged 75), Aker, Norway
Nationality: Norwegian
Spouse: Kathinka Fredrikke Marie Borchsenius
Children: Alf Victor Guldberg
Parents: Carl August Guldberg (father), Hanna Sophie Theresia Bull (mother)
Relatives: Cato Maximilian Guldberg, Cathinka Guldberg, Hans Riddervold Guldberg, Carl Johan Guldberg, Fredrik Oscar Guldberg, Gustav Adolph Guldberg, Ansgar Guldberg
Awards: Order of St. Olav (Knight 1st Class)
Fields: Mathematics
Institutions: Norwegian Military Academy; Norwegian Academy of Science and Letters; Royal Norwegian Society of Sciences and Letters

Biography

Born in Christiania (now Oslo), Guldberg was the second oldest of 11 siblings.[4] He and his siblings were initially homeschooled, but he and his older brother, Cato Maximilian Guldberg, later attended school in Fredrikstad, where they lived with relatives.[4] He completed his examen artium in 1856,[4] his cand.real. in 1863,[4] and his dr.philos. in 1867.[1][4] In 1863, he was an adjunct professor in Drammen.[4] From 1864 to 1865, he studied mathematics in Germany and France while simultaneously on his honeymoon.[4] In 1865, Guldberg became a rector in Stavanger.[1][4] The same year, he began teaching mathematics at the Norwegian Military Academy, where he remained until 1899.[1][4] He was an important figure in the insurance industry.[1][4] He also served on the Norwegian law commission.[5] In 1866, he had a son, Alf Victor Guldberg, with his wife, Fredrikke Borchsenius.[6]

References

1. "Axel Sophus Guldberg". SNL (in Norwegian Bokmål). 30 November 2021. Retrieved 7 September 2022.
2. Blangstrup, Christian (1915–30). "Guldberg, 1. Axel Sophus". Salmonsens Konversationsleksikon (in Danish) (2 ed.). p. 393. Retrieved 13 October 2022.
3. "Guldberg, 3. Axel Sofus". Nordisk familjebok (in Swedish) (2 ed.). 1909. p. 591. Retrieved 13 October 2022.
4. Birkeland, Bent (29 June 2022). "Axel Guldberg". NBL (in Norwegian Bokmål). Retrieved 7 September 2022.
5. Langeland, Nils Rune (16 March 2021). Ulvund, Frode (ed.). "Aksjelova av 1910" [The Shares Act of 1910]. Historisk Tidsskrift (in Norwegian Nynorsk). Universitetsforlaget. 100 (1): 22–35. doi:10.18261/issn.1504-2944-2021-01-03. S2CID 233650444. Retrieved 16 October 2022.
6. Steenstrup, Bjørn (1930). "GULDBERG Alf Victor Emanuel". Hvem er hvem? (2 ed.). p. 152. Retrieved 16 October 2022.
Long and short range order structural analysis of In-situ formed biphasic calcium phosphates

Dong-Hyun Kim1, Kyu-Hong Hwang2, Ju Dong Lee3, Hong-Chae Park1 & Seog-Young Yoon1

Biphasic calcium phosphates (BCP) have attracted considerable attention as a bone graft substitute. In this study, BCP were prepared by an aqueous co-precipitation and calcination method. The crystal phases of in-situ formed BCP consisting of hydroxyapatite (HAp) and β-tricalcium phosphate (β-TCP) were controlled by the degree of calcium deficiency of the precursors. The long and short range order structures of the biphasic mixtures were investigated using the Rietveld refinement technique and high resolution Raman spectroscopy. The refined structural parameters of in-situ formed BCP confirmed that all the investigated structures crystallized in the corresponding hexagonal (space group P63/m) and rhombohedral (space group R3c) structures. The crystal phases, Ca/P molar ratio, and lattice parameters of in-situ formed BCP consisting of HAp and β-TCP were controlled by the degree of calcium deficiency of the calcium phosphate precursors. A significant short range order structural change of BCP was determined by Raman analysis. The long and short range order structural changes of in-situ formed BCP might be due to the coexistence of the β-TCP and HAp crystal phases.

Biphasic calcium phosphates (BCP) have attracted considerable attention as a bone graft substitute [1–4]. Generally, BCP consisting of biocompatible hydroxyapatite (HAp) and biodegradable β-tricalcium phosphate (β-TCP) have better bio-resorbability and osseointegration than the individual HAp or β-TCP components because of their different dissolution behaviors under in vitro and in vivo biological conditions [5, 6].
Several researchers have recently attempted to develop BCP comprising HAp and β-TCP using various synthetic routes, such as the blending of different calcium phosphates in solid state reactions, precipitation, liquid mix techniques, treatment of natural bone, spray pyrolysis, and microwave and combustion processing [7–12]. For example, BCP are usually produced either by the mechanical mixing and sintering of monophasic HAp/β-TCP powder mixtures (i.e., ex-situ formation) or by calcining single-phase calcium deficient hydroxyapatite (CDHA) powders (i.e., in-situ formation). In the case of CDHA, it is well known that biphasic mixtures can be obtained, and their phase contents controlled, through the in-situ process of heating CDHA (Ca/P = 1.5–1.67) above 700 °C according to Equation (1):

$$ \mathrm{Ca}_{10-x}(\mathrm{HPO}_{4})_{x}(\mathrm{PO}_{4})_{6-x}(\mathrm{OH})_{2-x} \rightarrow (1-x)\,\mathrm{Ca}_{10}(\mathrm{PO}_{4})_{6}(\mathrm{OH})_{2} + 3x\,\mathrm{Ca}_{3}(\mathrm{PO}_{4})_{2} + x\,\mathrm{H}_{2}\mathrm{O} $$

In addition, the in-situ formation method of BCP can also be applied to various studies of ionic substitutions, biopolymer/calcium phosphate composites, local drug delivery systems, and porous scaffolds [13–16], since such studies are still needed in order to optimize the biological performance. Consequently, fundamental efforts to improve the biological response of in-situ formed BCP have recently been based on studies of various biphasic controls of HAp/TCP ratios. Research on the crystal structure of in-situ formed BCP has been of specific interest owing to its essential biological role and to understanding the coexistence of the two crystal phases.
Despite having similar elemental compositions, HAp (Ca10(PO4)6(OH)2) and β-TCP (Ca3(PO4)2) differ considerably in their crystal systems. For example, β-TCP generally has a rhombohedral structure (space group R3c, which can also be described in a hexagonal setting), whereas HAp has a hexagonal (space group P63/m) or monoclinic (space group P21/b) structure. Such different crystallographic forms of HAp and β-TCP can give rise to different biological properties related to biodegradation and dissolution rate. Therefore, if the crystal system of BCP can be controlled and varied through in-situ formed bi-phases, this suggests a new paradigm of bioceramic applications for improving biological properties. In this study, we demonstrate the ability to clearly identify the individual crystal structures of in-situ formed BCP through long and short range order structural analysis using Rietveld refinement of X-ray diffraction (XRD) patterns and high resolution Raman spectroscopy.

β-TCP, HAp, and BCP powders were synthesized by a co-precipitation and calcination process. First, appropriate amounts (Ca/P molar ratio 1.5–1.67) of calcium nitrate tetrahydrate (Ca(NO3)2·4H2O, Sigma-Aldrich) and diammonium hydrogen phosphate ((NH4)2·HPO4, Sigma-Aldrich) were dissolved in distilled water under vigorous stirring at 1000 rpm. The pH of the mixed solution was maintained at 8 and 11 by the addition of ammonium hydroxide (NH4OH, Junsei) solution. The co-precipitated suspension was discharged from the reactor and allowed to settle for 24 h for maturation of the precipitate. The precipitates were then separated by vacuum filtration and dried at 80 °C for 24 h in a drying oven. The as-dried precipitates were calcined at 1000 °C for 24 h in air. X-ray diffraction analysis (X'Pert Pro, Philips), at 40 kV and 40 mA with a scanning speed of 1°/min, was performed to identify the phases of the as-calcined powders.
A standard Bragg-Brentano geometry was applied with a Kα1 monochromatic beam from the Cu anode. Phase identification, quantitative analysis, determination of the Ca/P ratio, and lattice parameters of the BCP powders were characterized using Philips X'Pert HighScore Plus software with a full-pattern fit based on the Rietveld method [17, 18]. The Raman spectra were recorded on a Sentinel Raman spectrometer (Bruker Optics Ltd.) equipped with a Unilab II fiber-optic probe and a CCD detector. A 532 nm Nd:YAG laser source was used for excitation with an incident laser power of 30 mW. The spectral range was 500 to 4400 cm−1 with a resolution of 4 to 6 cm−1.

Figure 1 shows the Rietveld analysis, the determined crystal phase percentages, and the calculated Ca/P ratios for the as-synthesized β-TCP, HAp, and BCP powders. Figure 1(a) shows the fitting deviations of the XRD patterns of the β-TCP, HAp, and BCP powders obtained with Rietveld analysis. Pattern fitting was carried out between 10 and 70°. As shown in Fig. 1(a), Rietveld refinement was performed using the structural models of ICSD card numbers # 6191 and # 97500 for the β-TCP phase, and the structural models of ICSD card numbers # 26205 and # 87670 for the HAp phase. Figure 1(b) shows the percentage of each determined crystal phase and the calculated Ca/P ratio of the β-TCP, HAp, and BCP powders obtained through Rietveld analysis. An attempt was also made to check the reliability of the resultant phase mixtures obtained through refinement against the experimental Ca/P ratio of the precursors according to Equation (2):

$$ \mathrm{Expected\ Ca/P} = \mathrm{wt\%\ of\ HAp\ (Rietveld)} \times 1.67 + \mathrm{wt\%\ of\ TCP\ (Rietveld)} \times 1.5 $$

Fig. 1 (a) Rietveld analysis patterns of powder diffraction data of BCP and (b) molar Ca/P ratio of BCP calculated using the Rietveld method

As shown in Fig.
1(b), the BCP powders indicated only the presence of HAp and β-TCP in their composition, but the quantitative phase contents determined through Rietveld analysis showed significant variations, and the Ca/P ratios depended entirely on the percentages of the determined crystal phases.

Figure 2 shows the biphasic behavior and the effect on the crystal system of in-situ formed BCP with different phase contents. Figure 2(a) shows the expanded XRD patterns of BCP in a close scan around the main peak region (2θ = 30–35°). The β-TCP (0210) peak is well distinguished and differentiated from the HAp (211) peak. Figure 2(b) shows the axis ratios of the β-TCP and HAp phases in the lattice parameters of in-situ formed BCP with different phase contents. In the case of in-situ formed BCP, the calculated c/a axis ratio of the β-TCP phase increased with increasing HAp phase content, as shown in Fig. 2(b). In contrast, the calculated a/c axis ratio of the HAp phase decreased slightly toward the theoretical a/c axis ratio of monophasic HAp (a/c ratio = 1.3689, ICSD # 87670). In addition, the unit cells of in-situ formed BCP showed volumetric expansion compared to monophasic HAp and β-TCP, as shown in Fig. 2(c).

Fig. 2 (a) XRD patterns, (b) axial ratio, and (c) volumetric expansion behavior of unit cells of in-situ formed BCP. (Note: for both β-TCP and HAp, a = b and the axial angles are α = β = 90°, γ = 120°)

Figure 3 shows the short range order structural analysis of in-situ formed BCP. Figure 3(a) shows the Raman spectra of β-TCP, HAp, mechanically mixed BCP, and in-situ formed BCP, assigned to the symmetric P-O vibration (stretching mode, ν1) within the PO4^3− group. Among the spectra shown in Fig. 3(a), it is the spectrum of the in-situ formed BCP that is most similar to the spectra observed from the mixture of pure β-TCP and HAp.
However, it is evident that the P-O vibration peak (i.e., 968 cm−1 of β-TCP) of the in-situ formed BCP (43.8 % β-TCP and 56.2 % HAp, as determined by Rietveld analysis) exhibited a peak shift compared with monophasic β-TCP and the mechanically mixed BCP (44 % β-TCP and 56 % HAp), as shown in Fig. 3(b). Therefore, the short range order structure of in-situ formed BCP can be considered to be affected by the coexistence of the β-TCP and HAp phases.

Fig. 3 (a) Symmetric vibration (stretching mode, ν1) of PO4^3− of β-TCP, HAp, mechanically mixed BCP, and in-situ formed BCP and (b) pseudo-Gaussian fitting results from Raman spectra of in-situ formed BCP

Conclusions

HAp, β-TCP, and in-situ formed BCP powders were synthesized by the co-precipitation and calcination process. The refined structural parameters of in-situ formed BCP confirmed that all the investigated structures crystallized in the corresponding hexagonal (space group P63/m) and rhombohedral (space group R3c) structures. The molar Ca/P ratio of in-situ formed BCP was also determined by Rietveld analysis. The crystal phases, Ca/P molar ratio, and lattice parameters of in-situ formed BCP consisting of HAp and β-TCP were controlled by the degree of calcium deficiency of the calcium phosphate precursors. A significant short range order structural change of BCP was determined by Raman analysis. The short range order structure of in-situ formed BCP was considered to be affected by the coexistence of the β-TCP and HAp phases.

Availability of supporting data

The data set supporting the results of this article is included within the article. Tables 1, 2, 3, and 4 present the structure models of the HAp and β-TCP crystals used in the XRD analysis of this study (Additional file 1).

References

De Groot K. Clinical applications of calcium phosphate biomaterials: a review. Ceram Int. 1993;19:363–6.

Jarcho M. Biomaterial aspects of calcium phosphates. Dent Clin North Am. 1986;30:25–47.

Bucholz RW, Carlton A, Holmes RE.
Hydroxyapatite and tricalcium phosphate bone graft substitutes. Orthop Clin North Am. 1987;18:323–34.

Hench LL. Bioceramics. J Am Ceram Soc. 1998;81:1705–28.

Klein CPAT, Driessen AA, De Groot K, van den Hooff A. Biodegradation behaviour of various calcium phosphate materials in bone tissue. J Biomed Mater Res. 1983;17:769–84.

Dorozhkin SV, Epple M. Biological and medical significance of calcium phosphates. Angew Chem Int Ed. 2002;41:3130–46.

Yamada S, Heyman D, Bouler JM, Daculsi G. Osteoclastic resorption of calcium phosphate ceramics with different hydroxyapatite/β-tricalcium phosphate ratios. Biomaterials. 1997;18:1037–41.

Bouler JM, LeGeros RZ, Daculsi G. Biphasic calcium phosphates: influence of three synthesis parameters on the HA/β-TCP ratio. J Biomed Mater Res. 2000;51:680–4.

Jarcho M. Calcium phosphate ceramics as hard tissue prosthetics. Clin Orthop Relat Res. 1981;157:259–78.

Pena J, Vallet-Regi M. Hydroxyapatite, tricalcium phosphate and biphasic materials prepared by a liquid mix technique. J Eur Ceram Soc. 2003;23:1687–96.

LeGeros RZ, Lin S, Rohanizadeh R, Mijares D, LeGeros JP. Biphasic calcium phosphate bioceramics: preparation, properties and applications. J Mater Sci Mater Med. 2003;14:201–9.

Vallet-Regi M. Ceramics for medical applications. J Chem Soc Dalton Trans. 2001;97–108.

Chang BS, Lee CK, Hong KS, Youn HJ, Ryu HS, Chung SS, et al. Osteoconduction at porous hydroxyapatite with various pore configurations. Biomaterials. 2000;21:1291–8.

Santos C, Martins MA, Franke RP, Almeida MM, Costa MEV. Calcium phosphate granules for use as a 5-Fluorouracil delivery system. Ceram Int. 2009;35:1587–94.

Nihouannen DL, Guehennec LL, Rouillon T, Pilet P, Bilban M, Layrolle P, et al. Micro-architecture of calcium phosphate granules and fibrin glue composites for bone tissue engineering. Biomaterials. 2006;27:2716–22.

Kamitakahara M, Imai R, Ioku K.
Preparation and evaluation of spherical Ca-deficient hydroxyapatite granules with controlled surface microstructure as drug carriers. Mater Sci Eng C. 2013;33:2446–50. Kim DH, Chun HH, Lee JD, Yoon SY. Evaluation of phase transformation behavior in biphasic calcium phosphate with controlled spherical micro-granule architecture. Ceram Int. 2014;40:5145–55. Kim DH, Park SS, Lee JD, Park HC, Yoon SY. Phase transformation behavior of spherical tricalcium phosphate micro-granules prepared by a jet wheel impact atomization and calcination process. Powder Technol. 2014;257:74–82. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No. 2013R1A1A4A01009089). This work was also supported by Korea Institute of Industrial Technology. School of Materials Science and Engineering, Pusan National University, Busan, 609-735, Republic of Korea Dong-Hyun Kim, Hong-Chae Park & Seog-Young Yoon School of Nano and Advanced Materials, Gyeongsang National University, Jinju, Gyeongnam, 660-701, Republic of Korea Kyu-Hong Hwang Korea Institute of Industrial Technology, Busan, 618-230, Republic of Korea Ju Dong Lee Dong-Hyun Kim Hong-Chae Park Seog-Young Yoon Correspondence to Seog-Young Yoon. DHK and SYY designed the experiments, carried out crystal structure analysis of biphasic calcium phosphates by Rietveld refinement and drafted the manuscript. JDL participated in the design of the study and carried out Raman analysis. KHH and HCP conceived of the study, and participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript. Additional file 1: Table S1. Structure model of HAp (ICSD card number: # 26205). Table S2. Structure model ofHAp (ICSD card number: # 87670). Table S3. Structure model of β-TCP (ICSD card number: # 6191). Table S4. Structure model of β-TCP (ICSD card number: # 97500). 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Kim, DH., Hwang, KH., Lee, J.D. et al. Long and short range order structural analysis of In-situ formed biphasic calcium phosphates. Biomater Res 19, 14 (2015). https://doi.org/10.1186/s40824-015-0036-0 Biphasic calcium phosphate Rietveld refinement Ceramics in bio-applications
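Returning to the quantitative analysis above: Equation (2), which converts the Rietveld weight fractions into an expected molar Ca/P ratio, is simple enough to sketch directly (an illustrative snippet of ours, not part of the article; the function name is hypothetical, and the phase fractions are the Rietveld values quoted in the text):

```python
# Minimal sketch of Equation (2): expected molar Ca/P ratio of a biphasic
# mixture from the Rietveld weight fractions of HAp (Ca/P = 1.67) and
# beta-TCP (Ca/P = 1.5).

def expected_ca_p(frac_hap, frac_tcp):
    assert abs(frac_hap + frac_tcp - 1.0) < 1e-9  # fractions must sum to 1
    return frac_hap * 1.67 + frac_tcp * 1.5

# Rietveld result quoted for the in-situ formed BCP: 56.2 % HAp, 43.8 % TCP
ratio = expected_ca_p(0.562, 0.438)  # ~1.60, between 1.5 (TCP) and 1.67 (HAp)
```

Comparing this expected value against the experimental Ca/P ratio of the precursor gives the reliability check described in the text.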
\begin{document} \title{A Unique Perfect Power Decagonal Number} \begin{abstract} Let $\mathcal{P}_s(n)$ denote the $n$th $s$-gonal number. We consider the equation \[\mathcal{P}_s(n) = y^m, \] for integers $n,s,y,$ and $m$. All solutions to this equation are known for $m>2$ and $s \in \{3,5,6,8,20 \}$. We consider the case $s=10$, that of decagonal numbers. Using a descent argument and the modular method, we prove that the only decagonal number $>1$ expressible as a perfect $m$th power with $m>1$ is $\mathcal{P}_{10}(3) = 3^3$. \end{abstract} \section{Introduction} The $n$th $s$-gonal number, with $s \geq 3$, which we denote by $\mathcal{P}_s(n)$, is given by the formula \[ \mathcal{P}_s(n) = \frac{(s-2)n^2-(s-4)n}{2}. \] Polygonal numbers have been studied since antiquity \citep[pp.1--39]{history} and relations between different polygonal numbers and perfect powers have received much attention in the literature (see \citep{poly}, for example, and references therein). Kim, Park, and Pint{\'e}r \citep[Theorem 1.2]{poly} find all solutions to the equation $\mathcal{P}_s(n) = y^m$ when $m>2$ and $s \in \{3,5,6,8,20 \}$ for integers $n$ and $y$. We extend this result (for $m>1$) to the case $s=10$, that of decagonal numbers. \begingroup \renewcommand\thetheorem{1} \begin{theorem}\label{mainthm} All solutions to the equation \begin{equation}\label{maineq} \mathcal{P}_{10}(n) = y^m,\quad n,y,m \in \Z,~ m > 1, \end{equation} satisfy $n=y=0, n=\abs{y}=1,$ or $n=y=m=3$. In particular, the only decagonal number $>1$ expressible as a perfect $m$th power with $m>1$ is $\mathcal{P}_{10}(3) = 3^3$. \end{theorem} \endgroup We will prove Theorem \ref{mainthm} by carrying out a descent argument to obtain various ternary Diophantine equations, to which one may associate Frey elliptic curves. The difficulty in solving the equation $\mathcal{P}_{s}(n) = y^m$ for a fixed value of $s$ is due to the existence of the trivial solution $n=y=1$ (for any value of $m$). 
We note that adapting our method of proof also works for the cases $s \in \{3,5,6,8,20 \}$ mentioned above, but will not extend to any other values of $s$ (see Remark \ref{rem}). \section{Descent and Small Values of $m$} We note that it will be enough to prove Theorem \ref{mainthm} in the case $m=p$, prime. We write (\ref{maineq}) as \begin{equation}\label{eq2} n(4n-3) = y^p, \quad n,y \in \Z,~ p \text{ prime}, \end{equation} and suppose that $n,y \in \Z$ satisfy this equation with $n \ne 0$. \subsubsection*{Case 1: $3 \nmid n$.} If $3 \nmid n$, then $n$ and $4n-3$ are coprime, so there exist coprime integers $a$ and $b$ such that \[ n = a^p \quad \text{ and } \quad 4n-3 = b^p. \] It follows that \begin{equation} \label{case1} 4a^p-b^p = 3. \end{equation} If $p=2$ we see that $(2a-b)(2a+b)=3$, so that $a = b = \pm 1$ and so $n=\abs{y}=1$. If $p=3$ or $p=5$, then using \texttt{Magma}'s \citep{magma} Thue equation solver, we find that $a=b=1$ also. \subsubsection*{Case 2: $3 \parallel n$.} Suppose $3 \parallel n$ (i.e. $\mathrm{ord}_3(n)=1$). Then after dividing (\ref{eq2}) by $3^{\mathrm{ord}_3(y)p}$, we see that there exist coprime integers $t$ and $u$, with $3 \nmid t$ such that \[n = 3t^p \quad \text{ and } \quad 4n-3 = 3^{p-1}u^p. \] Then \begin{equation} \label{case2} 4t^p-3^{p-2}u^p = 1. \end{equation} If $p=2$ we have $(2t-u)(2t+u)=1$ which has no solutions. If $p=3$, then $4t^3-3u^3=1$, and using \texttt{Magma}'s Thue equation solver, we verify that $u=t=1$ is the only solution to this equation, and this gives $n=y=3$. If $p=5$, we use \texttt{Magma}'s Thue equation solver to see that there are no solutions. \subsubsection*{Case 3: $3^2 \mid n$.} If $3^2 \mid n$ then $3 \parallel 4n-3$, and arguing as in Case 2, there exist coprime integers $v$ and $w$, with $3 \nmid w$ such that \[n = 3^{p-1}v^p \quad \text{ and } \quad 4n-3 = 3w^p. \] So \begin{equation} \label{case3} 4 \cdot 3^{p-2}v^p- w^p = 1. 
\end{equation} If $p=2$, then as in Case 2 we obtain no solutions. If $p=3$ or $p=5$ then we use \texttt{Magma}'s Thue equation solver to verify that there are no solutions with $v \ne 0$. \section{Frey Curves and the Modular Method} To prove Theorem \ref{mainthm}, we will associate Frey curves to equations (\ref{case1}), (\ref{case2}), and (\ref{case3}), and level-lower to obtain a contradiction. We have considered the cases $p=2,3,$ and $5$ in Section 2, and so we will assume $m=p$ is prime, with $p \geq 7$. We note that at this point we could directly apply \citep[Theorem 1.2]{pp2} to conclude that the only solutions to (\ref{tern1}) are $a=b=1$, giving $n=1$, and apply \citep[Theorem 1.2]{consecutive} to conclude that (\ref{tern2}) and (\ref{tern3}) have no solutions. The computations for (\ref{tern1}) are not explicitly carried out in \citep{pp2}, so for the convenience of the reader and to highlight why the case $s=10$ is somewhat special, we provide some details of the arguments. \subsubsection*{Case 1: $3 \nmid n$.} We write (\ref{case1}) as \begin{equation} \label{tern1} -b^p +4a^p = 3 \cdot 1^2, \end{equation} which we view as a generalised Fermat equation of signature $(p,p,2)$. We note that the three terms are integral and coprime. We suppose $ab \ne \pm 1$. Following the recipes of \citep[pp.~26--31]{pp2}, we associate Frey curves to (\ref{tern1}). We first note that $b$ is odd, since $b^p = 4n-3$. If $a \equiv 1 \pmod{4}$, we set \[E_1: Y^2 = X^3 -3X^2+3a^pX. \] If $a \equiv 3 \pmod{4}$, we set \[E_2: Y^2 = X^3 + 3X^2+3a^pX. \] If $a$ is even, we set \[E_3: Y^2+XY = X^3 - X^2 + \frac{3a^p}{16} X. \] We level-lower each Frey curve and find that for $i=1,2,3,$ we have $E_i \sim_p f_i$, for $f_i$ a newform at level $N_{p_i}$, where $N_{p_1} = 36, N_{p_2} = 72,$ and $N_{p_3} = 18$. The notation $E \sim_p f$ means that the mod-$p$ Galois representation of $E$ arises from $f$. There are no newforms at level $18$ and so we focus on the curves $E_1$ and $E_2$.
There is a unique newform, $f_1$, at level $36$, and a unique newform, $f_2$, at level $72$. The newform $f_1$ has complex multiplication by the imaginary quadratic field $\Q(\sqrt{-3})$. This allows us to apply \citep[Proposition 4.6]{pp2}. Since $2 \nmid ab$ and $3 \nmid ab$, we conclude that $p=7$ or $13$, and that all elliptic curves of conductor $2p$ have positive rank over $\Q(\sqrt{-3})$. However, it is straightforward to check that this is not the case for $p=7$ and $13$. We conclude that $E_1 \not\sim_p f_1$. Let $F_2$ denote the elliptic curve with Cremona label 72a2 whose isogeny class corresponds to $f_2$. This elliptic curve has full two-torsion over the rationals and has $j$-invariant $2^{4} \cdot 3^{-2} \cdot 13^{3}$. We apply \citep[Proposition 4.4]{pp2}, which uses an image of inertia argument, to obtain a contradiction in this case too. \begin{remark}\label{rem} The trivial solution $a=b=1$ (or $n=y=1$) corresponds to the case $i=1$ above. The only reason we are able to discard the isomorphism $E_1 \sim_p f_1$ is because the newform $f_1$ has complex multiplication. The modular method would fail to eliminate the newform $f_1$ otherwise. For each value of $s$, we can associate to (\ref{maineq}) generalised Fermat equations of signature $(p,p,2)$, $(p,p,3)$, and $(p,p,p)$. We found we could only obtain newforms with complex multiplication (when considering the case corresponding to the trivial solution) when $s = 3, 6, 8, 10,$ or $20$. A similar strategy of proof also works for $s=5$ using the work of Bennett \citep[p.~3]{approx} on equations of the form $(a+1)x^n-ay^n = 1$ to deal with the trivial solution. \end{remark} \subsubsection*{Case 2: $3 \parallel n$.} We rewrite (\ref{case2}) as \begin{equation} \label{tern2} 4t^p-3^{p-2}u^p = 1 \cdot 1^3 , \end{equation} which we view as a generalised Fermat equation of signature $(p,p,3)$. The three terms are integral and coprime. We suppose $tu \ne \pm 1$. 
Using the recipes of \citep[pp.~1401--1406]{pp3}, we associate to (\ref{tern2}) the Frey curve \[E_4: Y^2 + 3XY - 3^{p-2}u^p \, Y = X^3. \] We level-lower $E_4$ and find that $E_4 \sim_p f$, where $f$ is a newform at level $6$, an immediate contradiction, as there are no newforms at level $6$. \subsubsection*{Case 3: $3^2 \mid n$.} We rewrite (\ref{case3}) as \begin{equation} \label{tern3} -w^p + 4 \cdot 3^{p-2}v^p = 1 \cdot 1^3, \end{equation} which we view as a generalised Fermat equation of signature $(p,p,3)$. The three terms are integral and coprime. We suppose $vw \ne \pm 1$. The Frey curve we attach to (\ref{tern3}) is \[ E_5: Y^2 + 3XY + 4 \cdot 3^{p-2} v^p \, Y = X^3. \] We level-lower, and find that $E_5 \sim_p f$, where $f$ is a newform at level $6$, a contradiction as in Case 2. This completes the proof of Theorem \ref{mainthm}. \Addresses \end{document}
The spectrum of discrete Dirac operator with a general boundary condition Nimet Coskun ORCID: orcid.org/0000-0001-9753-01011 & Nihal Yokus1 In this paper, we aim to investigate the spectrum of the nonselfadjoint operator L generated in the Hilbert space \(l_{2}(\mathbb{N},\mathbb{C}^{2})\) by the discrete Dirac system $$ \textstyle\begin{cases} y_{n+1}^{ (2 )} - y_{n}^{ (2 )} + p_{n} y_{n}^{ (1 )} =\lambda y_{n}^{ (1 )},\\ - y_{n}^{ (1 )} + y_{n-1}^{ (1 )} + q_{n} y_{n}^{ (2 )} =\lambda y_{n}^{ (2 )}, \end{cases}\displaystyle \quad n\in \mathbb{N}, $$ and the general boundary condition $$ \sum_{n = 0}^{\infty } h_{n}y_{n} = 0, $$ where λ is a spectral parameter, Δ is the forward difference operator, (\(h_{n}\)) is a complex vector sequence such that \(h_{n} = ( h_{n}^{(1)}, h_{n}^{(2)} )\), where \(h_{n}^{(i)} \in l^{1} ( \mathbb{N} ) \cap l^{2} ( \mathbb{N} )\), \(i = 1,2\), and \(h_{0}^{(1)} \ne 0\). Upon determining the sets of eigenvalues and spectral singularities of L, we prove that, under certain conditions, L has a finite number of eigenvalues and spectral singularities with finite multiplicity. Along with the invention of the Schrödinger equation, the physical scope of mathematical problems connected with the spectra of differential equations with prescribed boundary conditions was enormously enlarged. The types of equations that previously had applications only to mechanical vibrations now were to be used for the description of atoms and molecules. There are important and altogether astonishing applications of the results obtained in the spectral theory of linear operators in Hilbert spaces to scattering theory, inverse problems, and quantum mechanics. For instance, the Hamiltonian of a quantum particle confined to a box involves a choice of boundary conditions at the box ends. 
Since different choices of boundary conditions imply different physical models, spectral theory of operators with boundary conditions constitutes a progressing field of investigation [1, 2]. Let T denote a matrix operator $$ T = \begin{pmatrix} p_{11}(x) & p_{12}(x) \\ p_{21}(x) & p_{22}(x) \end{pmatrix},\quad p_{12}(x) = p_{21}(x), $$ where \(p_{ik} ( x ) \) (\(i,k= 1,2 \)) are real continuous functions on the interval \([ 0,\pi ]\). Let also \(y(x)\) denote a two-component vector function $$ y(x) = \begin{pmatrix} y_{1}(x) \\ y_{2}(x) \end{pmatrix}. $$ If $$ B = \begin{pmatrix} 0 & 1 \\ - 1 & 0 \end{pmatrix},\qquad I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, $$ and λ is a parameter, then the equation $$ \biggl(B \frac{d}{dx} +T-\lambda I \biggr)y= 0 $$ is equivalent to a system of two simultaneous first-order ordinary differential equations $$ \begin{gathered} \frac{dy_{2}}{dx} + p_{11}(x)y_{1} + p_{12}(x)y_{2} = \lambda y_{1}, \\ - \frac{dy_{1}}{dx} + p_{21}(x)y_{1} + p_{22}(x)y_{2} = \lambda y_{2}. \end{gathered} $$ In the case of \(p_{12} ( x ) = p_{21} ( x ) = 0\), \(p_{11} ( x ) =V ( x ) +m\), and \(p_{22} ( x ) =V ( x ) -m\), where \(V(x)\) is a potential function, and m is the mass of a particle, system (1.1) is called a stationary one-dimensional Dirac system in relativistic quantum theory. Levitan and Sargsjan [1] have introduced some basic concepts regarding the general spectral theory of self-adjoint Sturm–Liouville and Dirac operators and presented a discrete analogue of system (1.1) using the method of finite differences. If the functions \(p_{ik} ( x ) \) (\(i,k= 1,2 \)) are complex valued, then the operator T is called nonselfadjoint. Also, if the operator T is defined on an infinite interval, then it is said to be singular. The structure of the spectrum of the operator T differs drastically in the nonselfadjoint singular case.
The basic spectral theory of nonselfadjoint singular second-order operators, namely nonselfadjoint Sturm–Liouville theory, was begun by Naimark, whose works initiated a deep study of spectral theory of nonselfadjoint operators [3, 4]. He proved that the spectrum of a nonselfadjoint Sturm–Liouville operator consists of the continuous spectrum, the eigenvalues, and the spectral singularities. He also showed that these eigenvalues and spectral singularities are of finite number with finite multiplicities under certain conditions. Later developments in this area concerned spectral analysis of the boundary value problems of differential and discrete operators, including Sturm–Liouville, Klein–Gordon, quadratic pencils of Schrödinger, and Dirac-type operators, within the context of determining the Jost solution and providing sufficient conditions guaranteeing the finiteness of the eigenvalues and spectral singularities [5–19]. In particular, boundary value problems including the integral boundary condition were first considered by Krall [20, 21]. He extended the work of Naimark [3] by applying a suitable integral boundary condition and generated the ordinary and nonhomogeneous expansion of a Sturm–Liouville operator. Note that investigation of discrete analogues of ordinary differential operators is an important research area since difference equations are well suited to finding solutions with the aid of computers and can model many contemporary problems arising in control theory, biology, and engineering [7–11]. Let us denote by \(l_{2}(\mathbb{N},\mathbb{C}^{2})\) the Hilbert space of all complex vector sequences \(y = \{ ( y_{n}^{(1)}, y_{n}^{(2)} )^{T} \}_{n \in \mathbb{N}}\) with the inner product $$ \langle y,u \rangle = \sum_{n \in \mathbb{N}} \bigl( y_{n}^{(1)}\overline{u_{n}^{(1)}} + y_{n}^{(2)}\overline{u_{n}^{(2)}} \bigr).
$$ Consider the nonselfadjoint singular operator \(L_{0}\) generated in the Hilbert space \(l_{2}(\mathbb{N},\mathbb{C}^{2})\) by the discrete Dirac system $$ \textstyle\begin{cases} y_{n+1}^{ ( 2 )} - y_{n}^{ ( 2 )} + p_{n} y_{n}^{ ( 1 )} =\lambda y_{n}^{ ( 1 )},\\ - y_{n}^{ ( 1 )} + y_{n-1}^{ ( 1 )} + q_{n} y_{n}^{ ( 2 )} =\lambda y_{n}^{ ( 2 )}, \end{cases}\displaystyle \quad n \in \mathbb{N}, $$ and the boundary condition $$ y_{0}^{(1)} = 0, $$ where λ is a spectral parameter, Δ is the forward difference operator (\(\Delta y_{n} = y_{n+1} - y_{n}\)), and \(p_{n},q_{n} \in \mathbb{C}\). In [9] the integral representation for the Weyl function of \(L_{0}\) and the spectral expansion of the operator \(L_{0}\) in terms of principal functions have been investigated in detail. Some generalization problems of the nonselfadjoint discrete Dirac operator have been the subject of extensive study in the literature. For instance, in [13] the general form of the operator \(L_{0}\) has been considered for \(n \in \mathbb{Z}\). Also, some authors investigated the problem with eigenparameter-dependent boundary conditions [10, 14, 15]. In this paper, we consider the operator L generated in the Hilbert space \(l_{2}(\mathbb{N},\mathbb{C}^{2})\) by the nonselfadjoint discrete Dirac equation (1.2) and the boundary condition $$ \sum_{n = 0}^{\infty } h_{n}y_{n} = 0, $$ where (\(h_{n}\)) is a complex vector sequence such that \(h_{n} = ( h_{n}^{(1)}, h_{n}^{(2)} )\), \(h_{n}^{(i)} \in l^{1} ( \mathbb{N} ) \cap l^{2} ( \mathbb{N} )\), \(i = 1,2\), \(h_{0}^{(1)} \ne 0\). Clearly, \(L_{0}\) is a particular case of L for \(h_{0} = ( 1, 0 )\) and \(h_{n} = ( 0, 0 )\), \(n \in \mathbb{N} = \{ 1,2,\ldots \} \). In contrast to other studies, rather than considering an eigenparameter-dependent boundary condition, we generalize the boundary condition (1.3) by using the orthogonality properties of \(( y_{n} )\) with respect to the vectors \(( h_{n} )\). Therefore the conditions required for the finiteness of the eigenvalues and spectral singularities of the operator L differ from the studies mentioned.
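Before turning to the analytic machinery, the spectrum of \(L_{0}\) can be explored numerically by truncating system (1.2) with the boundary condition (1.3) to finitely many sites. The sketch below is only an illustration (the truncation size and the zero potentials are our choice, not part of the paper); in the free case \(p_{n} = q_{n} = 0\) the truncated matrix is real symmetric, so its eigenvalues are real and, by Gershgorin's theorem, lie in the interval \([-2,2]\) filled by the continuous spectrum \(\lambda = 2\sin \frac{z}{2}\), \(z \in [0,4\pi)\).

```python
import numpy as np

def dirac_matrix(p, q):
    """Truncated matrix of the discrete Dirac system (1.2) with y_0^(1) = 0.

    Components are interleaved as (y_1^(1), y_1^(2), y_2^(1), y_2^(2), ...).
    """
    N = len(p)
    A = np.zeros((2 * N, 2 * N), dtype=complex)
    for n in range(1, N + 1):
        i1, i2 = 2 * (n - 1), 2 * (n - 1) + 1
        # first equation:  y_{n+1}^(2) - y_n^(2) + p_n y_n^(1) = lambda y_n^(1)
        A[i1, i1] = p[n - 1]
        A[i1, i2] = -1.0
        if n < N:                      # truncation: y_{N+1}^(2) = 0
            A[i1, i2 + 2] = 1.0
        # second equation: -y_n^(1) + y_{n-1}^(1) + q_n y_n^(2) = lambda y_n^(2)
        A[i2, i2] = q[n - 1]
        A[i2, i1] = -1.0
        if n > 1:                      # boundary condition: y_0^(1) = 0
            A[i2, i1 - 2] = 1.0
    return A

N = 60
# free (self-adjoint) case: real spectrum inside [-2, 2]
A0 = dirac_matrix(np.zeros(N), np.zeros(N))
ev = np.linalg.eigvals(A0)
print(np.max(np.abs(ev.imag)), np.max(np.abs(ev)))
```

With complex-valued sequences (\(p_{n}\)), (\(q_{n}\)) the matrix is no longer symmetric, and isolated non-real eigenvalues, the finite-dimensional analogue of the eigenvalues studied below, can appear.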
Thus this paper presents the results in a more general and different approach. The main objective of this paper is investigating the quantitative properties of the spectrum of the operator L. We apply and adopt the Naimark and Pavlov conditions on the potential and examine the eigenvalues and spectral singularities of the operator L using the boundary uniqueness theorems of analytic functions. Although the tools we use in this paper are basically functional analysis techniques, the paper may lay the groundwork for future studies concerning the topics in direct and inverse problems, scattering theory, and applied physics. The paper contains three sections. The first two are introductory, surveying all necessary results of the BVP (1.2)–(1.4). The last section focuses on the quantitative properties of the spectrum of the operator L. Jost solution of the operator L We will assume that $$ \sum_{n = 1}^{\infty } n \bigl( \vert p_{n} \vert + \vert q_{n} \vert \bigr) < \infty . $$ It is known from [9] that equation (1.2) has the solution $$\begin{aligned}& f_{0}^{(1)} ( z ) = e^{i\frac{z}{2}} \Biggl[ 1 + \sum _{m = 1}^{\infty } K_{0m}^{11}e^{imz} \Biggr] - i\sum_{m = 1}^{\infty } K_{0m}^{12}e^{imz}, \\& f_{0}^{(2)} ( z ) = 0, \end{aligned}$$ $$ f_{n}(z) = \begin{pmatrix} f_{n}^{(1)}(z) \\ f_{n}^{(2)}(z) \end{pmatrix}_{n \in \mathbb{N}} = \left \{ \Biggl[ E_{2} + \sum_{m = 1}^{\infty } K_{nm}e^{imz} \Biggr] \begin{pmatrix} e^{i\frac{z}{2}} \\ - i \end{pmatrix}e^{inz} \right \} ,\quad n = 1,2,3,\ldots, $$ for \(\lambda = 2\sin \frac{z}{2}\), where $$ E_{2} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\qquad K_{nm} = \begin{pmatrix} K_{nm}^{11} & K_{nm}^{12} \\ K_{nm}^{21} & K_{nm}^{22} \end{pmatrix}, $$ and \(z \in \overline{\mathbb{C}}_{ +} \). Note that the expressions \(K_{nm}^{ij}\), \(i,j = 1,2\), can be written uniquely in terms of \(\{ p_{n} \} _{n \in \mathbb{N}}\) and \(\{ q_{n} \} _{n \in \mathbb{N}}\).
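In the free case \(p_{n} = q_{n} = 0\) all coefficients \(K_{nm}^{ij}\) vanish and the Jost solution reduces to \(f_{n}(z) = (e^{i\frac{z}{2}}, -i)^{T}e^{inz}\). The following sketch (an illustrative check with an arbitrarily chosen point \(z\) in the upper half-plane, not taken from the paper) confirms that this sequence satisfies (1.2) with \(\lambda = 2\sin \frac{z}{2}\) and decays exponentially, since \(|e^{inz}| = e^{-n\operatorname{Im} z}\).

```python
import numpy as np

z = 0.7 + 0.4j                     # a point in the upper half-plane
lam = 2 * np.sin(z / 2)

def f(n):
    """Free Jost solution f_n(z) = (e^{iz/2}, -i)^T e^{inz}."""
    return np.exp(1j * z / 2) * np.exp(1j * n * z), -1j * np.exp(1j * n * z)

for n in range(1, 30):
    f1n, f2n = f(n)
    r1 = f(n + 1)[1] - f2n - lam * f1n      # first equation of (1.2), p_n = 0
    r2 = -f1n + f(n - 1)[0] - lam * f2n     # second equation of (1.2), q_n = 0
    assert abs(r1) < 1e-12 and abs(r2) < 1e-12

# exponential decay e^{-n Im z} for Im z > 0
assert abs(f(20)[0]) < abs(f(10)[0]) < abs(f(0)[0])
print("free Jost solution verified")
```

The same recursion-based check could be repeated with nonzero potentials once the coefficients \(K_{nm}^{ij}\) are computed, but the free case already exhibits the analyticity and decay used below.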
Moreover, the inequality $$ \bigl\vert K_{nm}^{ij} \bigr\vert \leq C \sum _{k=n+[ \vert \frac{m}{2} \vert ]}^{\infty } \bigl( \vert p_{k} \vert + \vert q_{k} \vert \bigr) $$ is satisfied for \(i,j = 1,2\), where \([ \vert \frac{m}{2} \vert ]\) is the integer part of \(\frac{m}{2}\), and \(C > 0\) is a constant. Hence \(f_{n}(z)\) is analytic in \(\mathbb{C}_{+}:=\{z \in \mathbb{C}: \operatorname{Im} z>0\}\) and continuous in \(\overline{\mathbb{C}}_{+}:=\{z \in \mathbb{C}: \operatorname{Im} z\geq 0\}\). The function \(f_{n}(z)\) is called the Jost solution of equation (1.2). Also, the following asymptotics hold [9]: $$\begin{aligned}& \begin{pmatrix} f_{n}^{(1)}(z) \\ f_{n}^{(2)}(z) \end{pmatrix}_{n \in \mathbb{N}} = \bigl[ E_{2} + o(1) \bigr] \begin{pmatrix} e^{i\frac{z}{2}} \\ - i \end{pmatrix}e^{inz},\quad z \in \overline{ \mathbb{C}}_{ +},n \to \infty , \\& \begin{pmatrix} f_{n}^{(1)}(z) \\ f_{n}^{(2)}(z) \end{pmatrix}_{n \in \mathbb{N}} = \bigl[ E_{2} + o(1) \bigr] \begin{pmatrix} e^{i\frac{z}{2}} \\ - i \end{pmatrix}e^{inz},\quad n \in \mathbb{N},z \in \overline{\mathbb{C}}_{ +},\operatorname{Im} z \to \infty . \end{aligned}$$ Let \(\varphi _{n}(z)\) be a solution of (1.2) subject to the initial conditions $$ \varphi _{0}^{(1)}(z) = 0,\quad\quad \varphi _{1}^{(2)}(z) = 1, $$ $$ \varphi _{n}(z) = \widetilde{\varphi }_{n}(\lambda ) = \biggl\{ \widetilde{\varphi }_{n} \biggl(2\sin \frac{z}{2} \biggr) \biggr\} ,\quad z \in \overline{\mathbb{C}}_{ +},n \in \mathbb{N} \cup \{ 0 \} . $$ Then φ is an entire function, and $$ \varphi (z) = \varphi (z + 4\pi ). $$ The Wronskian of two solutions $$ y_{n} = \begin{pmatrix} y_{n}^{(1)} \\ y_{n}^{(2)} \end{pmatrix}_{n \in \mathbb{N}},\qquad u_{n} = \begin{pmatrix} u_{n}^{(1)} \\ u_{n}^{(2)} \end{pmatrix}_{n \in \mathbb{N}} $$ of (1.2) is defined by $$ W [ y,u ] = y_{n}^{(1)}u_{n + 1}^{(2)} - y_{n + 1}^{(2)}u_{n}^{(1)}. $$ Using the usual definition of Wronskian, we have $$ W \bigl[ f_{n}(z),\varphi _{n}(z) \bigr] = f_{0}^{(1)} ( z ),\quad z \in \overline{\mathbb{C}}_{ +}.
$$ Let us define the semistrips \(P_{0}: = \{ z = x + iy \in \mathbb{C}: 0 \le x < 4\pi , y > 0 \} \) and \(P = P_{0} \cup [ 0,4\pi )\), the function $$ N(z): = \sum_{n = 0}^{\infty } h_{n}f_{n}(z), $$ and the functions $$\begin{aligned}& \widetilde{N}(z): = \sum_{n = 0}^{\infty } h_{n}\varphi _{n}(z), \\& \widehat{\varphi }_{n}(z): = \bigl( \varphi _{n}^{(1)}(z), \varphi _{n + 1}^{(2)}(z) \bigr), \\& \widehat{f}_{n}(z): = \bigl( f_{n}^{(1)}(z), f_{n + 1}^{(2)}(z) \bigr), \\& \widehat{\varOmega }_{n}(z): = \begin{pmatrix} \varOmega _{n}^{(1)}(z) \\ \varOmega _{n + 1}^{(2)}(z) \end{pmatrix}, \\& S_{k}(z): = \frac{ - 1}{W [ f,\varphi ]} \Biggl\{ N(z)\widehat{\varphi }_{k}(z) + \widetilde{N}(z)\widehat{f}_{k}(z) - \widehat{ \varphi }_{k}(z)\sum_{n = k + 1}^{\infty } h_{n}f_{n}(z) - \widehat{f}_{k}(z)\sum _{n = k + 1}^{\infty } h_{n}\varphi _{n}(z) \Biggr\} . \end{aligned}$$ For all \(z \in P\) with \(f_{0}^{(1)} ( z ) \ne 0\), the Green's function of the operator L is obtained by standard techniques as $$ G_{nk}(z) = G_{nk}^{(1)}(z) + G_{nk}^{(2)}(z), $$ $$ G_{nk}^{(1)}(z) = \frac{S_{k}(z)f_{n}(z)}{N(z)}, $$ $$ G_{nk}^{(2)}(z) = \textstyle\begin{cases} 0,&k < n, \\ \frac{\widehat{f}_{k}(z)\varphi _{n}(z) + \widehat{\varphi }_{k}(z)f_{n}(z)}{f_{0}^{(1)}(z)},&k \ge n. \end{cases} $$ Obviously, for \(\varOmega = \{ \varOmega _{n} \} = \bigl\{ \bigl( \varOmega _{n}^{(1)}, \varOmega _{n}^{(2)} \bigr)^{T} \bigr\} \in l_{2}(\mathbb{N},\mathbb{C}^{2})\), $$ R_{\lambda } (L)\varOmega _{n}: = \sum _{k = 0}^{\infty } G_{nk}(z)\widehat{\varOmega }_{k},\quad n \in \mathbb{N} \cup \{ 0 \} , $$ is the resolvent of the operator L. It is also clear that \(N(z)\) is the Jost function of the operator L defined by using the Jost solution and the boundary condition (1.4). The determination of Jost solutions plays an important role in spectral theory of discrete and differential operators. We refer the reader to the books [1–4], which explain how this single function contains all the information about the spectrum of operators, for further details.
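A property used implicitly in the Green's function construction above is that the Wronskian \(W[y,u]\) of two solutions of (1.2) does not depend on n; in particular \(W[f,\varphi]\) appearing in \(S_{k}(z)\) equals the constant \(f_{0}^{(1)}(z)\) by (2.2). The sketch below (a numerical illustration with small randomly chosen complex potentials of our own choosing, not taken from the paper) generates two solutions by forward recursion and checks this constancy.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12
p = 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))  # complex potentials
q = 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))
lam = 0.3 + 0.1j

def solve(y1, y2):
    """Forward recursion for (1.2), starting from (y_1^(1), y_1^(2))."""
    Y = [(y1, y2)]
    for n in range(1, N):
        y2_next = y2 + (lam - p[n - 1]) * y1        # first equation at site n
        y1_next = y1 + (q[n] - lam) * y2_next       # second equation at site n+1
        Y.append((y1_next, y2_next))
        y1, y2 = y1_next, y2_next
    return Y

Y = solve(1.0, 0.0)
U = solve(0.3 - 0.2j, 1.0)
# Wronskian W_n = y_n^(1) u_{n+1}^(2) - y_{n+1}^(2) u_n^(1)
W = [Y[n][0] * U[n + 1][1] - Y[n + 1][1] * U[n][0] for n in range(len(Y) - 1)]
print(max(abs(w - W[0]) for w in W))   # numerically zero: W is independent of n
```

The cancellation behind this invariance is exactly the one that makes \(W[f_{n}(z),\varphi_{n}(z)]\) computable from the data at \(n = 0\) alone.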
Eigenvalues and spectral singularities of L Let us denote the set of eigenvalues and spectral singularities of the operator L by \(\sigma _{d}\) and \(\sigma _{ss}\), respectively. From (2.5)–(2.7) and the definition of the eigenvalues and spectral singularities we have $$\begin{aligned}& \sigma _{d} = \biggl\{ \lambda :\lambda = 2\sin \frac{z}{2},z \in P_{0},N(z) = 0 \biggr\} , \end{aligned}$$ $$\begin{aligned}& \sigma _{ss} = \biggl\{ \lambda :\lambda =2 \sin \frac{z}{2},z \in [ 0,4 \pi ),N ( z ) =0 \biggr\} . \end{aligned}$$ Let us define the sets $$\begin{aligned}& M_{1}: = \bigl\{ z:z \in P_{0},N(z) = 0 \bigr\} , \\& M_{2}: = \bigl\{ z:z \in [ 0,4\pi ),N(z) = 0 \bigr\} . \end{aligned}$$ We also denote the set of all limit points of \(M_{1}\) and \(M_{2}\) by \(M_{3}\) and \(M_{4}\), respectively, and the set of all zeros in P of \(N(z)\) with infinite multiplicity by \(M_{5}\). It then also follows that $$ M_{1} \cap M_{5} = \emptyset ,\quad\quad M_{3} \subset M_{2},\quad\quad M_{4} \subset M_{2},\quad \quad M_{5} \subset M_{2}, $$ and the linear Lebesgue measures of \(M_{2}\), \(M_{3}\), \(M_{4}\), and \(M_{5}\) are zero. From the continuity of all derivatives of \(N(z)\) on the real axis we have $$ M_{3} \subset M_{5}\quad \text{and} \quad M_{4} \subset M_{5}. $$ It is convenient to rewrite the sets of eigenvalues and spectral singularities of L as $$\begin{aligned}& \begin{gathered} \sigma _{d} = \biggl\{ \lambda :\lambda = 2\sin \frac{z}{2},z \in M_{1} \biggr\} , \\ \sigma _{ss} = \biggl\{ \lambda :\lambda = 2\sin \frac{z}{2},z \in M_{2} \biggr\} . \end{gathered} \end{aligned}$$ Theorem 3.1. Under conditions (2.1) and \(h_{n}^{(i)} \in l^{1} ( \mathbb{N} ) \cap l^{2} ( \mathbb{N} )\), \(i = 1,2\), we have: (i) The set of eigenvalues of L is bounded and countable, and its limit points lie in \([ - 2,2 ]\). (ii) \(\sigma _{ss} \subset [ - 2,2 ]\), \(\sigma _{ss} = \overline{\sigma _{ss}}\), and \(\mu (\sigma _{ss}) = 0\), where μ stands for the linear Lebesgue measure.
Proof. From (2.3) and (2.4) we have the analyticity of \(N(z)\) in the upper half-plane and the continuity of \(N(z)\) on the real axis. For \(\beta (z): = e^{ - i\frac{z}{2}}N(z)\), we have the asymptotics $$ \beta ( z ) = h_{0}^{ ( 1 )} +o ( 1 ), \quad \operatorname{Im} z> 0, \operatorname{Im} z\rightarrow \infty . $$ Note that \(\beta (z)\) and \(N(z)\) have the same zeros except at infinity. Using (3.1), (3.2), and (3.4) and boundary uniqueness theorems of analytic functions [22], we arrive at (i) and (ii). □ The multiplicity of a zero of \(N(z)\) in the region P is introduced as the multiplicity of the corresponding eigenvalue or spectral singularity of the operator L. Now let us consider the condition $$ \sum_{n = 1}^{\infty } e^{\varepsilon n} \bigl( \vert p_{n} \vert + \vert q_{n} \vert + \bigl\vert h_{n}^{(i)} \bigr\vert \bigr) < \infty ,\quad \varepsilon > 0,i = 1,2. $$ Theorem 3.2. Under condition (3.5), the operator L has a finite number of eigenvalues and spectral singularities, and each of them is of finite multiplicity. Proof. From (2.3) and (3.5) we observe that \(N(z)\) has analytic continuation to the half-plane \(\operatorname{Im} z > \frac{ - \varepsilon }{2}\). Since \(N(z)\) is a 4π-periodic function, the limit points of its zeros in P cannot lie in \([ 0,4\pi )\). Hence, using Theorem 3.1, we obtain the finiteness of the eigenvalues and spectral singularities of L. □ Note that condition (3.5), which is also known as Naimark's condition in the literature, ensures the analytic continuation of \(N(z)\) from the real axis to the lower half-plane. Now we will consider the Pavlov condition $$ \sum_{n = 1}^{\infty } e^{\varepsilon n^{\beta }} \bigl( \vert p_{n} \vert + \vert q_{n} \vert + \bigl\vert h_{n}^{(i)} \bigr\vert \bigr) < \infty ,\quad \varepsilon > 0,i = 1,2,\frac{1}{2} \le \beta < 1, $$ which is weaker than (3.5). Clearly, the function \(N(z)\) is analytic in the upper half-plane and infinitely differentiable on the real axis.
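To get a feeling for the growth rate that makes the Pavlov condition work, fix \(\beta = 1/2\) and \(\varepsilon = 1\) (an illustrative choice of parameters, not one made in the paper). The moments \(\sum_{m} m^{k} e^{-\varepsilon m^{\beta}}\), which control the derivatives of \(N(z)\) in the next step, then grow no faster than \(4(2k+1)!\), since \(\int_{0}^{\infty} x^{k} e^{-\sqrt{x}}\,dx = 2(2k+1)!\); this is the factorial-type behaviour \(D d^{k} k! k^{k\frac{1-\beta}{\beta}}\) exploited below. The sketch checks this numerically.

```python
import math

eps, beta = 1.0, 0.5     # illustrative Pavlov-type weight exp(-eps * m**beta)

def moment(k, M=40000):
    """Truncated moment sum_m m^k exp(-eps * m**beta); the tail is negligible."""
    return sum(float(m) ** k * math.exp(-eps * m ** beta) for m in range(1, M))

for k in range(13):
    # 2*(2k+1)! equals the integral of x^k e^{-sqrt(x)}; the sum stays below twice that
    assert moment(k) <= 4 * math.factorial(2 * k + 1)
print("factorial-type growth confirmed for k = 0,...,12")
```

For \(\beta = 1/2\) the exponent \(k^{k\frac{1-\beta}{\beta}} = k^{k}\), so the bound is genuinely stronger than plain factorial growth, which is what prevents analytic continuation and forces the indirect argument via the lemma below.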
It is essential to notice at this point that \(N(z)\) has no analytic continuation from the real axis to the lower half-plane. For this reason, we need to use a different method to investigate the finiteness of the eigenvalues and spectral singularities of L. We will benefit from the following lemma. Lemma ([9]). Suppose that the 4π-periodic function ξ is analytic in the open upper half-plane, all of its derivatives are continuous in the closed upper half-plane, and $$ \sup_{z \in P} \bigl\vert \xi ^{(k)}(z) \bigr\vert \le \eta _{k},\quad k \in \mathbb{N} \cup \{ 0 \} . $$ If the set G with linear Lebesgue measure zero is the set of all zeros of the function ξ with infinite multiplicity in P, and $$ \int _{0}^{\omega } \ln t(s)\,d\mu (G_{s}) = - \infty , $$ where \(t(s) = \inf_{k}\frac{\eta _{k}s^{k}}{k!}\), \(k \in \mathbb{N} \cup \{ 0 \} \), \(\mu ( G_{s} )\) is the Lebesgue measure of the s-neighborhood of G, and \(\omega \in ( 0,4\pi )\) is an arbitrary constant, then \(\xi \equiv 0\). Theorem. Assume that (3.6) holds. Then \(M_{5} = \emptyset \). Proof. Under conditions (3.6), (2.2), (2.3), and (2.4), we obtain that $$ \bigl\vert N^{(k)}(z) \bigr\vert \le \eta _{k},\quad k \in \mathbb{N} \cup \{ 0 \} , $$ where $$ \eta _{k} = 2^{k}C\sum_{m = 1}^{\infty } m^{k}\exp \bigl( - \varepsilon m^{\beta } \bigr), $$ and \(C > 0\) is a constant. We have the following estimate: $$ \eta _{k} \le 2^{k}C \int _{0}^{\infty } x^{k}e^{ - \varepsilon x^{\beta }} \,dx \le Dd^{k}k!k^{k\frac{1 - \beta }{\beta }}, $$ where D and d are constants depending on C, ε, and β. Since \(N(z) \not\equiv 0\) by (3.4), applying the previous lemma to our case, we get that $$ \int _{0}^{\omega } \ln t(s)\,d\mu (M_{5,s}) > - \infty , $$ where \(t(s) = \inf_{k}\frac{\eta _{k}s^{k}}{k!}\), \(k \in \mathbb{N} \cup \{ 0 \} \), \(\mu (M_{5,s})\) is the Lebesgue measure of the s-neighborhood of \(M_{5}\), and \(\eta _{k}\) is defined by (3.8). Now we have $$ t(s) \le D\exp \biggl\{ - \frac{1 - \beta }{\beta } e^{ - 1}d^{ - \frac{\beta }{1 - \beta }} s^{ - \frac{\beta }{1 - \beta }} \biggr\} .
$$ From (3.9) and (3.10) we get $$ \int _{0}^{\omega } s^{ - \frac{\beta }{1 - \beta }} \,d\mu (M_{5,s}) < \infty . $$ Since \(\frac{\beta }{1 - \beta } \ge 1\), (3.11) holds for arbitrary s if and only if \(\mu (M_{5,s}) = 0\) or \(M_{5} = \emptyset \). □ Theorem. If condition (3.6) is satisfied, then the operator L has a finite number of eigenvalues and spectral singularities, and each of them is of finite multiplicity. Proof. We have to show that the function \(N(z)\) has a finite number of zeros with finite multiplicities in P. From (3.3) and the previous theorem we obtain that \(M_{3} = M_{4} = \emptyset \). Hence the bounded sets \(M_{1}\) and \(M_{2}\) have no accumulation points, that is, \(N(z)\) has only a finite number of zeros in P. Since \(M_{5} = \emptyset \), these zeros are of finite multiplicity. □ References [1] Levitan, B.M., Sargsian, I.S.: Introduction to Spectral Theory: Selfadjoint Ordinary Differential Operators, vol. 39. Am. Math. Soc., Providence (1975) [2] Chadan, K., Sabatier, P.C.: Inverse Problems in Quantum Scattering Theory. Springer, Berlin (2012) [3] Naimark, M.A.: Investigation of the spectrum and the expansion in eigenfunctions of a nonselfadjoint differential operator of the second order on a semi-axis. Transl. Am. Math. Soc. (2) 16, 103–193 (1960) [4] Naimark, M.A.: Linear Differential Operators. Part II: Linear Differential Operators in Hilbert Space. Ungar, New York (1968) (Eng. Trans.) [5] Bairamov, E., Yokus, N.: Spectral singularities of Sturm–Liouville problems with eigenvalue-dependent boundary conditions. Abstr. Appl. Anal. 2009, 289596 (2009) [6] Adıvar, M., Bohner, M.: Spectral analysis of q-difference equations with spectral singularities. Math. Comput. Model. 43(7–8), 695–703 (2006) [7] Adıvar, M., Bairamov, E.: Spectral properties of non-selfadjoint difference operators. J. Math. Anal. Appl.
261(2), 461–478 (2001) [8] Yokus, N., Coskun, N.: Spectral properties of discrete Klein–Gordon s-wave equation with quadratic eigenparameter-dependent boundary condition. Iran. J. Sci. Technol. Trans. A, Sci. 43(4), 1951–1955 (2019) [9] Bairamov, E., Celebi, A.O.: Spectrum and spectral expansion for the non-selfadjoint discrete Dirac operators. Q. J. Math. 50(200), 371–384 (1999) [10] Bairamov, E., Koprubasi, T.: Eigenparameter dependent discrete Dirac equations with spectral singularities. Appl. Math. Comput. 215(12), 4216–4220 (2010) [11] Coskun, N., Yokus, N.: A note on the spectrum of discrete Klein–Gordon s-wave equation with eigenparameter dependent boundary condition. Filomat 33(2), 449–455 (2019) [12] Yokus, N., Coskun, N.: Spectral analysis of quantum Dirac systems. J. Nonlinear Sci. Appl. 10, 4524–4531 (2017) [13] Bairamov, E., Coskun, C.: The structure of the spectrum of a system of difference equations. Appl. Math. Lett. 18(4), 387–394 (2005) [14] Koprubasi, T., Mohapatra, R.N.: Spectral analysis of discrete Dirac equation with generalized eigenparameter in boundary condition. Filomat 33(18), 6039–6054 (2019) [15] Koprubasi, T.: Spectrum of the quadratic eigenparameter dependent discrete Dirac equations. Adv. Differ. Equ. 2014(1), 1 (2014) [16] Bairamov, E., Çakar, Ö., Krall, A.M.: Non-selfadjoint difference operators and Jacobi matrices with spectral singularities. Math. Nachr. 229(1), 5–14 (2001) [17] Bairamov, E., Karaman, O.: Spectral singularities of Klein–Gordon s-wave equations with an integral boundary condition. Acta Math. Hung. 97(1–2), 121–131 (2002) [18] Krall, A.M., Bairamov, E., Cakar, O.: Spectrum and spectral singularities of a quadratic pencil of a Schrödinger operator with a general boundary condition. J. Differ. Equ. 151(2), 252–267 (1999) [19] Yardimci, Ş., Arpat, E.K., Can, C.: On the structure of discrete spectrum of a non-selfadjoint system of differential equations with integral boundary condition. J. Math. Chem.
55(5), 1202–1212 (2017) [20] Krall, A.M.: A nonhomogeneous eigenfunction expansion. Trans. Am. Math. Soc. 117, 352–361 (1965) [21] Krall, A.M.: The adjoint of a differential operator with integral boundary conditions. Proc. Am. Math. Soc. 16(4), 738–742 (1965) [22] Dolzhenko, E.P.: Boundary-value uniqueness theorems for analytic functions. Math. Notes Acad. Sci. USSR 25(6), 437–442 (1979) Department of Mathematics, Karamanoglu Mehmetbey University, Yunus Emre Yerleşkesi, 70100, Karaman, Turkey Nimet Coskun & Nihal Yokus Both authors contributed equally to each part of this work. Both authors read and approved the final manuscript. Correspondence to Nimet Coskun. Coskun, N., Yokus, N. The spectrum of discrete Dirac operator with a general boundary condition. Adv Differ Equ 2020, 409 (2020). https://doi.org/10.1186/s13662-020-02851-2 Keywords: Eigenparameter, Spectral analysis, Eigenvalues, Spectral singularities, Discrete equation, Dirac equation
\begin{document} \title{Iso-entangled mutually unbiased bases, symmetric quantum measurements and mixed-state designs} \author{Jakub Czartowski} \affiliation{Institute of Physics, Jagiellonian University, ul. {\L}ojasiewicza 11, 30--348 Krak\'ow, Poland} \author{Dardo Goyeneche} \affiliation{Departamento de F\'{i}sica, Universidad de Antofagasta, Casilla 170, Antofagasta, Chile} \author{Markus Grassl} \affiliation{Max Planck Institute for the Science of Light, 91058 Erlangen, Germany} \author{Karol {\.Z}yczkowski} \affiliation{Institute of Physics, Jagiellonian University, ul. {\L}ojasiewicza 11, 30--348 Krak\'ow, Poland} \affiliation{Center for Theoretical Physics, Polish Academy of Sciences, Al. Lotnik\'{o}w 32/46, 02-668 Warszawa, Poland} \pacs{03.67.-a, 03.65.Ud} \date{November 3, 2019} \begin{abstract} \noindent Discrete structures in Hilbert space play a crucial role in finding optimal schemes for quantum measurements. We solve the problem whether a complete set of five iso-entangled mutually unbiased bases exists in dimension four, providing an explicit analytical construction. The reduced density matrices of these $20$ pure states forming this generalized quantum measurement form a regular dodecahedron inscribed in a sphere of radius $\sqrt{3/20}$ located inside the Bloch ball of radius $1/2$. Such a set forms a mixed-state $2$-design --- a discrete set of quantum states with the property that the mean value of any quadratic function of density matrices is equal to the integral over the entire set of mixed states with respect to the flat Hilbert-Schmidt measure. We establish necessary and sufficient conditions mixed-state designs need to satisfy and present general methods to construct them. Furthermore, it is shown that partial traces of a projective design in a composite Hilbert space form a mixed-state design, while decoherence of elements of a projective design yields a design in the classical probability simplex. 
We identify a distinguished two-qubit orthogonal basis such that four reduced states are evenly distributed inside the Bloch ball and form a mixed-state $2$-design. \end{abstract} \maketitle \emph{Introduction.}---Recent progress of the theory of quantum information triggered renewed interest in foundations of quantum mechanics. Problems related to measurements of an unknown quantum state attract particular interest. The powerful technique of state tomography \cite{AJK04,BK10}, allowing one to recover a density matrix, can be considered as a generalized quantum measurement, determined by a suitable set of pure quantum states of a fixed size $d$. Notable examples include {\sl symmetric informationally complete} (SIC) measurements \cite{RBSC04,S06} consisting of $d^2$ pure states, which form a regular simplex inscribed inside the convex set $\Omega_d \subset {\mathbb R}^{d^2-1}$ of density matrices of size $d$, and complete sets of $(d+1)$ {\sl mutually unbiased bases} (MUBs) \cite{WF89} such that the overlap of any two vectors belonging to different bases is constant.
The above schemes are distinguished by the fact that they allow one to maximize the information obtained from a measurement and minimize the uncertainty of the results obtained under the presence of errors in both state preparation and measurement stages \cite{S06,AS10}. Interestingly, it is still unknown whether these configurations exist for an arbitrary dimension. In the case of SIC measurements analytical results were known in some dimensions up to $d=48$, see \cite{SG10} and references therein. More recently, a putative infinite family of SICs starting with dimensions $d =4,8,19,48,124,323$ has been constructed \cite{GS17}, while the general problem remains open. Nonetheless, numerical results suggest \cite{SG10} that such configurations might exist in every finite dimension $d$. For MUBs explicit constructions are known in every prime power dimension $d$ \cite{WF89}, and it is uncertain whether such a solution exists otherwise, in particular \cite{BBELTZ07,DEBZ10} in dimension $d=6$. If the dimension is a square, $d=N^2$, the system can be considered as two subsystems of size $N$ and the effects of quantum entanglement become relevant. It is possible to prove that the average entanglement of all bi-partite states forming a SIC or a complete set of MUBs is fixed \cite{L78}. It is natural to ask whether there exists a particular configuration such that all the states forming the generalized measurements share the same amount of entanglement so that they are locally equivalent, $|\phi'\rangle=U_A \otimes U_B |\phi\rangle$. In the simplest case of $d=4$ a set of $16$ iso-entangled vectors forming a SIC was analytically constructed by Zhu, Teo and Englert \cite{ZTE10}, so that such a set can be obtained from a selected {\sl fiducial} state $|\phi\rangle$ by local unitary operations. Further entanglement properties of SICs were studied in \cite{LLZZ18,CGZ18}.
Although entanglement of the states forming MUBs in composite dimensions was analyzed \cite{La04,RBKS05,WPZ11,GG15}, the analogous problem of finding a full set of iso-entangled MUBs remained open until now, even for a two-qubit system. Collections of states forming a SIC measurement or a set of MUBs find numerous applications in the theory of quantum information \cite{Re05,ZTE10,AS10,Ta12}. They belong to the class of {\sl projective designs}: finite sets of evenly distributed pure quantum states in a given dimension $d$ such that the mean value of any function from a certain class is equal to the integral over the set of pure states with respect to the unitarily invariant Fubini-Study measure \cite{DGS77,RBSC04,BZ17}. These discrete sets of pure quantum states, and analogous sets of unitary operators called {\sl unitary designs} \cite{GAE07}, proved to be useful for process tomography \cite{Sc08}, construction of unitary codes \cite{RS09}, realization of quantum information protocols \cite{DCEL09}, derandomization of probabilistic constructions \cite{GKK15}, and detection of entanglement \cite{BHM18}. A cognate notion of quantum conical design was recently proposed \cite{GA16, Szymusiak}, which concerns operators of an arbitrary rank from the cone of mixed quantum states. However, these designs are not suitable for sampling the set $\Omega_d$ of mixed states according to the flat, Hilbert-Schmidt measure. On the other hand, the general theory of {\sl averaging sets} developed in \cite{SZ84} implies that such configurations of mixed quantum states do exist. In this letter we solve the longstanding problem of existence of iso-entangled MUBs in dimension four.
Secondly, we introduce the notion of a quantum mixed-state design, such that mean values of selected functions over this discrete set of density matrices equal the average value integrated over the set $\Omega_d$, and provide a notable example with dodecahedral symmetry constructed from the constellation of iso-entangled MUBs. Furthermore, we show that a projective $t$-design induces, via the coarse-graining map, a $t$-design in the classical probability simplex, and establish general links between the designs in the sets of classical and quantum states. \emph{MUBs for bi-partite systems.}---The standard solution of $5$ MUBs in dimension $d=4$ consists of $12$ separable states forming three bases and $8$ maximally entangled states corresponding to the remaining two bases \cite{RBKS05,AS10}. Thus the partial trace of these states yields a peculiar configuration inside the Bloch ball: $6$ corners of a regular octahedron inscribed into the Bloch sphere, covered by two points each, correspond to $3$ MUBs in ${\cal H}_2$. The other $8$ points sit degenerated at the center of the ball representing the maximally mixed state, ${\mathbb I}/2$. The total configuration consists thus of $7$ points, at the expense of weighing the central point as four points at the surface. Note that the Schmidt vectors of the first twelve pure product states are $\lambda_{\rm sep}=(1,0)$, while for the other eight states this vector reads $\lambda_{\rm ent}=(1/2, 1/2)$. As this set of vectors in ${\cal H}_4$ forms a projective $2$-design, the average degree of entanglement measured by purity is fixed, $ \langle \lambda_1^2+\lambda_2^2 \rangle=4/5$. For any dimension being a power of a prime, $d=p^k$, the standard solution of the MUB problem consists of $p+1$ separable bases and $p^k-p$ maximally entangled bases \cite{La11}. In the case of $d=9$ the set of MUBs consisting of $4$ separable and $6$ maximally entangled bases was studied by Lawrence \cite{La04}.
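The value $4/5$ quoted above can be traced back to Lubkin's formula for the mean purity of a reduced state of a Haar-random pure state of a $d_A \times d_B$ system (a standard result, recalled here only for orientation),
\begin{equation*}
\bigl\langle \Tr \rho_A^2 \bigr\rangle = \frac{d_A+d_B}{d_A d_B+1} = \frac{4}{5}
\quad {\rm for} \quad d_A=d_B=2 .
\end{equation*}
As $\Tr \rho_A^2$ is a quadratic function of the projector $|\psi\rangle \langle \psi|$, any projective $2$-design in dimension $d=4$, in particular any complete set of five MUBs, reproduces this average exactly.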
\emph{Two--qubit iso-entangled MUBs.}---As the set of iso-entangled vectors forming a SIC is known for two \cite{ZTE10} and three \cite{H82} qubit systems, it is natural to ask whether there exists an analogous configuration of iso-entangled MUBs. In other words, we wish to find a global unitary rotation $U\in U(4)$ acting on the standard constellation in such a way that the degeneracy of the configuration of $20$ points is lifted and all of them become equally distant from the center of the Bloch ball. Then the corresponding vectors in ${\cal H}_4$ share the same degree of entanglement and can be obtained from a selected {\sl fiducial vector} $|\phi_1\rangle$ by local unitaries, $|\phi_j\rangle = U_j \otimes W_j |\phi_1\rangle$ with $j=2,\dots, 20$. We construct the desired set of five iso-entangled MUBs in ${\cal H}_4$ making use of the fact that the group of local unitary operations is in this case isomorphic to the double cover of the alternating group $A_5$. It has two faithful irreducible representations of degree two and it admits a tensor product representation that allows us to construct the necessary local two-qubit gates $U_j \otimes W_j$. As shown in Appendix \ref{20MUB}, the full analytic solution can be generated by local unitaries from the following fiducial state, \begin{equation} \label{fiducial_vector} \ket{\phi_1} = \frac{1}{20}\qty(a_+\ket{00} - 10i\ket{01} + \qty(8i-6)\ket{10} + a_- \ket{11}), \end{equation} where $a_\pm = -7 \pm 3\sqrt{5} + i(1 \pm\sqrt{5})$. Since the states forming five bases are iso-entangled, their partial traces with respect to the first (or the second) subsystem share the same purity and belong to a sphere of radius $r=\sqrt{3/20}$, embedded inside the Bloch ball of radius $R=1/2$. The set of $20$ points enjoys a dodecahedral symmetry, shown in Fig. \ref{2221}. 
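The radius $r=\sqrt{3/20}$ can be verified directly from the state (\ref{fiducial_vector}) (a short consistency check added for the reader's convenience). Arranging the amplitudes of $\ket{\phi_1}$ into the matrix
\begin{equation*}
M = \frac{1}{20} \begin{pmatrix} a_+ & -10i \\ 8i-6 & a_- \end{pmatrix} ,
\qquad
\det M = -\frac{18+26i}{100},
\end{equation*}
one finds $\Tr \rho_A^2 = 1-2|\det M|^2 = 1 - \frac{2}{10}=\frac{4}{5}$ for both reductions. In the convention of a Bloch ball of radius $1/2$, where $\Tr \rho^2 = \frac{1}{2}+2r^2$, this purity corresponds precisely to $r=\sqrt{3/20}$.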
The reductions of the four states stemming from each of the five bases in $\mathcal{H}_4$ form a regular tetrahedron in each of the two subsystems, so up to rescaling their Bloch vectors form a SIC for a single qubit. In both reductions the mixed states corresponding to all five bases form a five-tetrahedron compound with the same chirality, while their convex hull yields a regular dodecahedron. This configuration is not directly related to the arrangement of $20$ pure states in dimension $4$ forming the {\sl magic dodecahedron} of Penrose \cite{Pe94,ZP93,MA99}. It differs also from the regular dodecahedron of Zimba \cite{Zi06}, which describes a basis of five orthogonal anticoherent states in ${\cal H}_5$ in the stellar representation. \begin{figure} \caption{One-qubit mixed-state design composed of $20$ points inside the Bloch ball of radius $1/2$ obtained by partial trace of the $20$ states in $\mathcal{H}_2 \otimes \mathcal{H}_2$, which form a set of iso-entangled mutually unbiased bases for two qubits. Each basis is represented by the vertices of a regular tetrahedron inscribed in the sphere of radius $r=\sqrt{3/20}$. The reduced density matrices on both subsystems are shown in a) and b).} \label{2221} \end{figure} \emph{Projective and unitary designs.}---Recall that a {\sl projective $t$-design} is an ensemble of $M$ pure states, $\qty{\ket{\psi_j}\in {\cal H}_d}_{j=1}^M$, such that for any polynomial $f_t$ of the state $\psi $ of degree at most $t$ its average value is equal to the integral with respect to the unitarily invariant Fubini--Study measure $\dd{\psi}_{FS}$ over the entire complex projective space of pure states, $\Xi_d={\mathbbm C}P^{\; d-1}$, \begin{equation}\label{cptd} \frac{1}{M} \sum_{j=1}^{M} f_t\qty(\psi_j) = \int_{\Xi_d} f_t\qty(\psi) \dd{\psi}_{FS}.
\end{equation} The notions of pure-state $t$-designs and unitary $t$-designs, consisting of matrices evenly distributed over the unitary group \cite{GAE07}, found numerous applications in quantum information processing \cite{Sc08,RS09,DCEL09,GKK15} and have been applied in experiments \cite{Ta12, BQTSLSKB15, BHM18}. They can be considered as a special case of {\sl averaging sets}, which are known to exist for arbitrary sets endowed with a probability measure \cite{SZ84}. Below we shall adapt this notion to the set of density matrices and show how such mixed-state designs can be constructed. \emph{Mixed-state designs.}---We shall start by introducing a formal definition of mixed-state $t$-designs with respect to the Hilbert-Schmidt measure in the space of density matrices. \begin{defi} A collection of $M$ density matrices $\{\rho_i \in \Omega_N\}_{i=1}^M$ is called a \textbf{mixed-state t-design} if for any polynomial $g_t$ of the state $\rho$ of degree $t$ the average over the collection is equal to the mean value over the set $\Omega_N$ of mixed states in dimension $N$ with respect to the normalized Hilbert-Schmidt measure $\dd{\rho_{\rm HS}}$, \begin{equation} \frac{1}{M} \sum_{i=1}^{M} g_t \qty(\rho_i) = \int_{\Omega_N} g_t\qty(\rho) \dd{\rho_{\rm HS}}. \label{defi1} \end{equation} \end{defi} The above condition, analogous to the definition of projective $t$-designs (\ref{cptd}), is equivalent to the following relation, \begin{equation} \frac{1}{M}\sum_{i=1}^M \rho_i^{\otimes t} = \int_{\Omega_N} \rho^{\otimes t}\dd{\rho_{\rm HS}} =: \omega_{N,t}, \label{omega} \end{equation} where the mean product state of a system consisting of $t$ copies of a state $\rho$ in dimension $N$ averaged over the entire space ${\Omega_N}$ of mixed states is denoted by $\omega_{N,t}$. The measure $\dd{\rho_{\rm HS}}$ is defined by the requirement that each unit ball with respect to the Hilbert-Schmidt distance has the same volume.
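The defining relation can be probed directly by Monte-Carlo sampling. The following sketch (an illustration added here, not part of the paper) draws Hilbert-Schmidt-random qubit states via the Ginibre construction $\rho = GG^\dagger/\Tr(GG^\dagger)$ and checks that the empirical average of $\rho\otimes\rho$ approaches the mean state $\omega_{2,2}=(4\,\mathbb{I}+2\,O_{\text{SWAP}})/20$ derived later in the Appendix:

```python
import numpy as np

# Monte-Carlo sketch: Hilbert-Schmidt-random qubit states from the Ginibre
# construction rho = G G^dag / Tr(G G^dag); the empirical mean of
# rho (x) rho approximates the average two-copy state omega_{2,2}.
rng = np.random.default_rng(0)
samples = 50_000
acc = np.zeros((4, 4), dtype=complex)
for _ in range(samples):
    G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = G @ G.conj().T
    rho /= np.trace(rho)
    acc += np.kron(rho, rho)
acc /= samples

SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
omega_22 = (4 * np.eye(4) + 2 * SWAP) / 20
print(np.max(np.abs(acc - omega_22)))   # shrinks as the sample grows
```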
Observe that for $t=1$ Definition \eqref{defi1} reduces to a resolution of the maximally mixed state, $ \frac{1}{M}\sum_{i=1}^M \rho_i = \frac{1}{N}{\mathbb{I}_N} $ so any mixed-state design forms a generalized quantum measurement (also called POVM). To check whether a given configuration of density matrices forms a $t$-design we establish the following necessary and sufficient condition. \begin{prop}\label{prop1} A set consisting of $M$ states from the set $\Omega_N$ of density matrices of size $N$ forms a mixed-state $t$-design if and only if the following bound is saturated, \begin{equation} 2 \Tr( \omega_{N,t} \; \frac{1}{M} \sum_{i=1}^M \rho_i^{\otimes t} ) - \frac{1}{M^2}\sum_{i,j=1}^M \Tr(\rho_i\rho_j)^t \leq \gamma_{N,t} \label{welchprop1} \end{equation} where $\gamma_{N,t} := \Tr \omega^2_{N,t}$ with $\omega_{N,t}$ defined by Eq. (\ref{omega}). \end{prop} This condition, proved in Appendix \ref{APP_proof_1}, is closely related to saturation of the Welch bound \cite{We74} for projective and unitary designs \cite{Sc08}. Such a tool allows one to construct designs by numerical minimization. Exact values of $\gamma_{N,t}$ for $t\leq5$ are given in Appendix \ref{APP_general_N}. Using the bound \eqref{welchprop1}, we were able to find numerical lower bounds for the number $M$ of states in a mixed $2$-design: $M\ge 4$ for $N=2$ and $M\ge 9$ for $N=3$. In particular, for $N=2$ the minimal mixed-state $2$-design forms a tetrahedron inside the Bloch ball, an example of Platonic designs, equivalent to a single tetrahedron out of five plotted in Fig. \ref{2221} -- see Appendix \ref{platonic_des}.
\begin{prop} \label{prop2} Any complex projective $s$-design $\qty{\ket{\psi_j}}_{j=1}^M$ in the composite Hilbert space $\mathcal{H}_{A}\otimes\mathcal{H}_{B}$ of dimension $d=N^2$ induces by partial trace a mixed-state $t$-design $\qty{\rho_{j}}_{j=1}^M$ in $\Omega_N$ with $t\ge s$ and $\rho_{j} = \Tr_B\ketbra{\psi_j}$. The same property holds also for the dual set $\{ \rho'_{j} = \Tr_A\ketbra{\psi_j}\}_{j=1}^M$. \end{prop} In particular, Proposition \ref{prop2} implies that taking partial trace of pure states forming a SIC in ${\cal H}_{N^2}$, or any other pure-state $2$-design, one obtains a mixed-state $2$-design in the set $\Omega_N$ of density matrices of size $N$. Interestingly, there exist distinguished cases for which the degree of the design increases, $t>s$: In Appendix \ref{examples} we demonstrate that partial trace of any orthogonal basis, $t=1$, of the five iso-entangled MUBs yields a mixed-state $2$-design, while the complete set of these MUBs, $t=2$, leads to a mixed-state $3$-design. Furthermore, the following one-to-one relation between a class of mixed-state $2$-designs and projective $2$-designs is proven in Appendix \ref{app_dilut}. \begin{prop} \label{dilution_prop} Any projective 2-design $\qty{\ket{\psi_i}}_{i=1}^M$ of dimension $N$ can be diluted into a mixed 2-design by taking projectors onto all states forming the projective 2-design with weight $p = \frac{1+N}{1+N^2}$ and the maximally mixed state $\mathbb{I}_N/N$ with weight $1-p = \frac{N^2 - N}{1 + N^2}$. \end{prop} \emph{Designs in classical probability simplex.}---To construct one-qubit mixed-state designs one needs to determine the radial distribution of points inside the Bloch ball. It is related to an averaging set on the interval $[-1/2, 1/2]$ with respect to the Hilbert--Schmidt (HS) measure \cite{ZS01} determining the distribution of eigenvalues of a random mixed quantum state.
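A quick numerical reading of Proposition \ref{dilution_prop} can be given for the qubit case (a sketch added here; we interpret the weights as defining a weighted ensemble, with each projector carrying weight $p/M$ and the maximally mixed state weight $1-p$). For $N=2$, $p=3/5$, the weighted second moment of a qubit SIC-POVM should reproduce $\omega_{2,2}$:

```python
import numpy as np

# Sketch (assumption: weighted-ensemble reading of the dilution): each SIC
# projector enters with weight p/M, the maximally mixed state with weight
# 1 - p.  The weighted second moment should equal omega_{2,2}.
I2, I4 = np.eye(2), np.eye(4)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)

dirs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
sic = [I2/2 + (n[0]*sx + n[1]*sy + n[2]*sz)/2 for n in dirs]  # qubit SIC projectors

p = 3/5                                    # p = (1+N)/(1+N^2) for N = 2
tau = I2 / 2
moment = p * np.mean([np.kron(P, P) for P in sic], axis=0) + (1 - p) * np.kron(tau, tau)
omega_22 = (4 * I4 + 2 * SWAP) / 20
print(np.allclose(moment, omega_22))
```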
Returning to the general case of an arbitrary dimension $N$, consider any fixed probability measure $\mu\qty(x)$ defined on the simplex $\Delta_{N}$ of $N$-point probability vectors. We wish to find an averaging set over the simplex, i.e., a sequence of $M$ points $\qty{x_i\colon x_i\in \Delta_N}_{i=1}^M$ which satisfy the condition analogous to $t$-designs, with respect to the integration measure $\mu(x)$: \begin{equation} \frac{1}{M}\sum_{i=1}^M f_t(x_i) = \int_{\Delta_N} f_t(x) \mu\qty(x)\dd{x}, \label{eigtdes} \end{equation} where $f_t$ denotes an arbitrary polynomial of order $t$. Exemplary minimal solutions of this problem for low values of $t$ and $N=2$, so that the integration is done over the interval $\Delta_2=[-1/2, 1/2]$, are presented in Appendix \ref{APP_interval}. Here we shall concentrate on the cases of $t=1,2$ for the Lebesgue and HS measure, as these results are linked to one-qubit pure- and mixed-state designs, respectively. A $1$-design on the interval with respect to either measure consists of a single point at its center, corresponding to the projection onto the $x$ axis of the basis $|0\rangle, |1\rangle$, which yields both a projective and a mixed-state $1$-design. An interval $2$-design with respect to the flat Lebesgue measure, $\mu_{\rm L}(x)=1$, gives the coordinates of the vertices of a tetrahedron inscribed in the Bloch sphere of radius $1/2$, i.e., a SIC-POVM in dimension $2$. The analogous design with respect to $\mu_{\rm HS}$ provides the radius of a sphere in the Bloch ball containing mixed-state 2-designs with constant purity. An exemplary 2-design obtained by partial trace of 16 states forming an iso-entangled SIC-POVM for 2 qubits is shown in Fig. \ref{fig:my_label}d. \begin{figure} \caption{Simplicial $t$-designs on $\Delta_2=[-1/2,1/2]$ for $t=1,2,3,4$ with respect to a) flat measure and b) Hilbert-Schmidt measure.
c) $2$-design with respect to the flat measure $\mu_{\rm L}$ corresponds to the $x$ coordinates of a tetrahedron inscribed in a Bloch sphere, related to one-qubit projective $2$-design produced by a SIC-POVM in ${\cal H}_2$. d) $2$-design with respect to the HS measure corresponds to the radius of the sphere containing the mixed-state $2$-design -- the cube induced by the iso-entangled SIC-POVM in ${\cal H}_4$.} \label{fig:my_label} \end{figure} The positions of the two points in the interval which form $2$-designs with respect to the two measures, $ x_{\pm}^{\rm L} = \pm 1/(2\sqrt{3})$ and $ x_{\pm}^{\rm HS} = \pm \sqrt{3/20}$, can thus be related to the geometry of regular bodies inscribed into a sphere. Note that the design on $[0, 1]$ with respect to the flat measure is formed by probabilities $p_i=|\langle i|\psi_j\rangle|^2$ related to projections of the states of the design onto the computational basis. This observation, corresponding to the decoherence of a quantum state to the classical probability vector, can be generalized for higher dimensions. \begin{prop} \label{prop3} Any complex projective $t$-design $\qty{\ket{\psi_j}}_{j=1}^M$ in the Hilbert space $\mathcal{H}_N$ induces, by the coarse graining map, $|\psi\rangle \langle \psi|\to {\vec p}:={\rm diag}(|\psi\rangle \langle \psi|)$, a $t$-design in the $N$--point classical probability simplex $\Delta_{N}$ with respect to the flat measure $\mu_{\rm L}$. \end{prop} To prove this fact it is sufficient to recall that the natural, unitarily invariant measure in the space of pure states induces, by decoherence, the flat measure $\mu_{\rm L}$ in the probability simplex, see Appendix \ref{APP_interval}. The notion of $t$-designs formulated for a probability simplex allows one to select classical states which are useful to approximate an integral over the entire set $\Delta_{N}$.
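The two-point solutions quoted above can be confirmed by elementary moment matching. The following sketch (illustrative, using the $N=2$ eigenvalue density $\mu_{\rm HS}(x)=12x^2$ on $[-1/2,1/2]$; odd moments vanish by symmetry on both sides) compares the second moments of both measures with the squared coordinates of the design points:

```python
import numpy as np

# Moment check (sketch): two-point interval 2-designs on [-1/2, 1/2]
# versus the flat measure and the HS eigenvalue density mu_HS(x) = 12 x^2.
n = 200_000
x = (np.arange(n) + 0.5) / n - 0.5       # midpoint grid on [-1/2, 1/2]
dx = 1.0 / n

xL, xHS = 1 / (2 * np.sqrt(3)), np.sqrt(3 / 20)
m2_flat = np.sum(x**2) * dx              # second moment, flat measure
m2_hs   = np.sum(12 * x**2 * x**2) * dx  # second moment, HS measure
print(m2_flat, xL**2)                    # both equal 1/12
print(m2_hs, xHS**2)                     # both equal 3/20
```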
This also implies a simple, yet important observation that a mixed-state design in dimension $N=2$ with $t>3$ cannot be generated from iso-entangled pure states in ${\cal H}_4$. Furthermore, we suggest a general approach to obtain mixed designs of a product form. It will be convenient to use an asymmetric part $\tilde \Delta_N$ of the simplex $\Delta_N$, which corresponds to ordering of eigenvalues, $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_N$. \begin{prop} \label{prop4} Consider a $t$-design $\qty{\lambda_i}_{i=1}^n$ in the simplex $\Delta_N$ with respect to the measure $\mu_{\rm HS}$, the corresponding set of diagonal matrices $\Lambda_i = {\rm diag}\qty(\lambda_i)$ and any unitary $t$-design $\qty{U_j}_{j=1}^m$. Let $n'$ denote the number of points of the simplicial design belonging to the asymmetric part $\tilde \Delta_N$. Then the Cartesian product consisting of $n'm$ density matrices, $\rho_{ij}=U_j\Lambda_i U_j^{\dagger}$, $i=1,\dots, n'$ and $j=1, \dots, m$, forms a mixed-state $t$-design in $\Omega_N$. \end{prop} This statement, demonstrated in Appendix \ref{App_prop5}, allows us to construct Platonic mixed-state $t$-designs inside the Bloch ball: restricting the HS $2$-design in $\Delta_2$ to its half $\tilde \Delta_2=[0,1/2]$ we arrive at a single point $x_{+}^{\rm HS} =\sqrt{3/20}$, which determines the radius of the sphere inside the Bloch ball. Taking the corresponding spectrum, $\Lambda={\rm diag}(1/2+ x_{+}^{\rm HS},1/2- x_{+}^{\rm HS})$, and rotating it by unitaries $U_j$ from a unitary design in $SU(2)$ we arrive at a mixed-state design. In the simplest case of the tetrahedral group the mixed-state $2$-design consists of four points forming one of the five tetrahedrons shown in Fig. \ref{2221}, which arise by partial trace of the iso-entangled bases listed in Appendix \ref{platonic_des}. This example shows that there exist mixed-state $t$-designs which cannot be purified to a pure-state $t$-design.
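The tetrahedral example can be verified directly. The sketch below (our illustration; the four rotated copies of $\Lambda$ are written out explicitly as $\mathbb{I}/2 + r\,\vec{n}\cdot\vec{\sigma}$ for the tetrahedral directions $\vec{n}$, instead of listing a unitary $2$-design) checks that the four states $U_j\Lambda U_j^\dagger$ average to $\omega_{2,2}$ and saturate the purity condition with $\gamma_{2,2}=7/25$:

```python
import itertools
import numpy as np

# Sketch: the four rotated copies of Lambda = diag(1/2 + r, 1/2 - r) with
# r = sqrt(3/20) average to omega_{2,2}, and the Welch-type quantity of
# Proposition 1 equals gamma_{2,2} = 7/25.
I2, I4 = np.eye(2), np.eye(4)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)

r = np.sqrt(3 / 20)
dirs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
rhos = [I2/2 + r*(n[0]*sx + n[1]*sy + n[2]*sz) for n in dirs]  # U_j Lambda U_j^dag

moment = np.mean([np.kron(rho, rho) for rho in rhos], axis=0)
omega_22 = (4 * I4 + 2 * SWAP) / 20
welch = np.mean([np.trace(a @ b).real**2 for a, b in itertools.product(rhos, rhos)])
print(np.allclose(moment, omega_22), welch)
```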
\emph{Outlook and conclusions.}---In this work we introduced the notion of mixed-state $t$-designs and established necessary and sufficient conditions for their existence. As any mixed-state $1$-design forms a POVM, any design of a higher order $t$ can be considered as a generalized measurement with additional symmetry properties \cite{Zh15}. From the physical perspective such a deterministic sequence of density matrices approximates a sample of random states and describes projective designs on a bipartite system AB, under the restriction that Alice receives no information from Bob. Analyzing mixed-state designs, we solved the problem of existence of $20$ locally equivalent two-qubit states which form a set of five MUBs. The obtained configuration defines a remarkable measurement scheme, useful for quantum state estimation \cite{RTH15} and for constructing symmetric entanglement witnesses based on MUBs \cite{SHBAH12, CSW18}, different from those analyzed earlier \cite{MBPKIN14,BCLM16}. We analytically derived a two-qubit fiducial state, so that the other states forming the five bases were obtained by applying local unitaries. The partial trace of these two-qubit states forms a structure with dodecahedral symmetry inscribed into a sphere inside the Bloch ball. This particular configuration consisting of five tetrahedrons, visualized in Fig. \ref{2221}, leads to a notable example of a mixed-state $3$-design. Each single tetrahedron, obtained by partial trace of a single basis, forms a $2$-design. The paper establishes a direct link between designs in various sets which serve as a scene for quantum information processing: any projective $t$-design composed of pure states in dimension $d=N^2$ induces by partial trace a mixed-state design in the set of density matrices in dimension $N$, while by the decoherence channel it produces a design in the classical $d$-point probability simplex.
A class of mixed-state designs can be constructed by the Cartesian product of a unitary design and a simplicial Hilbert-Schmidt design. These relations, based on transformations of measures, put the notion of designs in various spaces into a common framework, and show how to approximate averaging over continuous sets by discrete sums. Such an approach is not only of direct interest for theoretical work on foundations of quantum mechanics, but also for experimental realization of an approximate ensemble of random quantum or classical states. We shall conclude the paper with a brief list of open problems: \emph{(i)} Find the minimal number of elements $M(N)$ forming a mixed $t$-design in dimension $N$; \emph{(ii)} Find minimal mixed-state $t$-designs for which the variance of the purity of all the states is the smallest; \emph{(iii)} Numerical calculations performed for $N=3,4,5$ suggest that there exist orthogonal bases in ${\cal H}_N \otimes {\cal H}_N$ such that their partial trace gives a mixed-state $2$-design in $\Omega_N$. Determine whether this conjecture, proved here for $N=2$, holds also for higher dimensions. \emph{Acknowledgements.}---It is a pleasure to thank M. Appleby, K. Bartkiewicz, I. Bengtsson, B.C. Hiesmayr, P. Horodecki, {\L}. Rudnicki and A. Szymusiak for inspiring discussions. Financial support by Narodowe Centrum Nauki under the grant number DEC-2015/18/A/ST2/00274 and by Foundation for Polish Science under the Team-Net project is gratefully acknowledged. DG is supported by MINEDUC-UA project, code ANT 1855 and Grant FONDECYT Iniciaci\'{o}n number 11180474, Chile.
\begin{widetext} \appendix \section{Explicit form of $20$ iso-entangled states forming $5$ MUBs} \label{20MUB} The standard construction of a complete set of two-qubit mutually unbiased bases using finite fields yields the following five bases $\qty{\ket{\psi_i}}_{i=1}^{20}$, written row-wise, with normalization omitted \cite{RBKS05}, \begin{alignat}{10} &\ket{00}& &\ket{01}& &\ket{10}& &\ket{11}\label{eq:MUB1} \\ &(\ket{0}+\ket{1})(\ket{0}+\ket{1})& &(\ket{0}+\ket{1})(\ket{0}-\ket{1})& &(\ket{0}-\ket{1})(\ket{0}+\ket{1})& &(\ket{0}-\ket{1})(\ket{0}-\ket{1})\label{eq:MUB2} \\ &(\ket{0}+i\ket{1})(\ket{0}+i\ket{1})& &(\ket{0}+i\ket{1})(\ket{0}-i\ket{1})& &(\ket{0}-i\ket{1})(\ket{0}+i\ket{1})& &(\ket{0}-i\ket{1})(\ket{0}-i\ket{1})\label{eq:MUB3} \\ &\ket{00}-\ket{10}+i\ket{01}+i\ket{11}&\quad &\ket{00}-\ket{10}-i\ket{01}-i\ket{11}&\quad &\ket{00}+\ket{10}-i\ket{01}+i\ket{11}&\quad &\ket{00}+\ket{10}+i\ket{01}-i\ket{11}\label{eq:MUB4} \\ &\ket{00}-i\ket{10}+\ket{01}+i\ket{11}&\quad &\ket{00}+i\ket{10}+\ket{01}-i\ket{11}&\quad &\ket{00}-i\ket{10}-\ket{01}-i\ket{11}&\quad &\ket{00}+i\ket{10}-\ket{01}+i\ket{11}. \label{eq:MUB5} \end{alignat} The first three bases consist of product vectors, while the states in the last two bases are all maximally entangled, as the corresponding matrices of coefficients are unitary. The group $G_{\text{sym}}$ of unitary matrices that map the set of $20$ vectors onto itself up to phases, $G \ket{\psi_k} = e^{i \chi_{jk}}\ket{\psi_j}$, is generated by two complex Hadamard matrices (one can always add multiples of the identity $e^{i\phi}I$ to the group, but we consider the smallest possible group here), \begin{alignat}{5} G_{\text{sym}}=\left\langle \frac{1}{2} \begin{pmatrix} i & -i & 1 & 1\\ i & -i & -1 & -1\\ -i & -i & -1 & 1\\ -i & -i & 1 & -1 \end{pmatrix}, \frac{1}{2} \begin{pmatrix} -i & i & -i & i\\ 1 & 1 & -1 & -1\\ 1 & 1 & 1 & 1\\ i & -i & -i & i \end{pmatrix} \right\rangle.
\end{alignat} The group is a subgroup of the so-called Clifford group that maps tensor products of Pauli matrices onto itself. The group $G_{\text{sym}}$ has order $7680$, and its center is generated by $iI$, i.e., it has order four. The action on the $20$ states modulo phases is a permutation group $G_{\text{perm}}$ of order $1920$. The group acts transitively, i.e., any state can be mapped to any other state. Assume that we can find a subgroup $H\le G_{\text{sym}}$ that acts transitively on the $20$ states and that, after a global change of basis, all elements of $H$ can be written as tensor products. In the transformed basis, we will then obtain a complete set of MUBs such that all the states forming the bases are equivalent up to local unitaries, so they share the same Schmidt vector. Unfortunately, the problem of deciding whether a finite matrix group can be expressed as a tensor product appears to be non-trivial in general. There are both necessary and sufficient conditions, but there does not seem to be a simple general criterion. In our case, there are transitive subgroups of $G_{\text{perm}}$ of order $20$, $60$, $80$, $120$, $160$, $320$, $960$, and $1920$. By directly solving the equations for a change of basis that transforms all elements of the corresponding matrix group into tensor products, we find that only the subgroup $H_{\text{perm}}$ of order $60$ affords a representation as a tensor product. The group $H_{\text{perm}}$ is isomorphic to the alternating group $A_5$ on five letters. The corresponding subgroup $H_{\text{sym}}$ is generated by \begin{alignat}{5}\label{eq:gens_H_sym} H_{\text{sym}}=\left\langle \frac{1}{2} \begin{pmatrix} -1 & 1 & -i & -i\\ 1 & -1 & -i & -i\\ i & i & 1 & -1\\ i & i & -1 & 1 \end{pmatrix}, \frac{1}{2} \begin{pmatrix} i & i & i & i\\ -1 & 1 & -1 & 1\\ -1 & -1 & 1 & 1\\ -i & i & i & -i \end{pmatrix} \right\rangle. \end{alignat} The group $H_{\text{sym}}$ is also isomorphic to $A_5$, and its center is trivial.
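As a sanity check (added here, not part of the appendix), one can confirm by brute-force closure that the two generators quoted in \eqref{eq:gens_H_sym} generate a matrix group with $60$ elements, as expected for a faithful representation of $A_5$:

```python
import numpy as np

# Brute-force closure sketch: the two generators of H_sym should generate
# exactly 60 distinct matrices, consistent with H_sym ~ A_5.
g1 = np.array([[-1,  1, -1j, -1j],
               [ 1, -1, -1j, -1j],
               [1j, 1j,   1,  -1],
               [1j, 1j,  -1,   1]]) / 2
g2 = np.array([[ 1j,  1j, 1j,  1j],
               [ -1,   1, -1,   1],
               [ -1,  -1,  1,   1],
               [-1j,  1j, 1j, -1j]]) / 2

def key(M):                       # hashable fingerprint of a matrix
    return tuple(np.round(M, 6).ravel())

group = {key(np.eye(4, dtype=complex))}
frontier = [np.eye(4, dtype=complex)]
while frontier and len(group) < 1000:   # cap guards against transcription typos
    nxt = []
    for M in frontier:
        for g in (g1, g2):
            P = M @ g
            if key(P) not in group:
                group.add(key(P))
                nxt.append(P)
    frontier = nxt
print(len(group))
```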
The group $A_5$ does not have a faithful representation of degree $2$, and hence $H_{\text{sym}}$ cannot be written as the tensor product of two representations of $A_5$. However, the double cover of $A_5$, which is isomorphic to the group $\mathrm{SL}(2,5)$ of $2\times 2$ matrices over the integers modulo $5$ with determinant $1$, has two faithful irreducible representations of degree $2$. The tensor product of these two representations yields a group of order $60$ that is conjugate to $H_{\text{sym}}$. A global change of basis that transforms $H_{\text{sym}}$ into a tensor product is given by \begin{alignat}{5} H_{\text{local}}= T H_{\text{sym}} T^\dag, \end{alignat} where the global unitary transform reads \begin{small} \begin{alignat}{5} T=\frac{1}{20}\begin{pmatrix} -(\sqrt{5} + 1)i - 3\sqrt{5} + 7 & (\sqrt{5} - 1)i + 3\sqrt{5} + 7 & (\sqrt{5} - 1)i + 3\sqrt{5} + 7 & (-3\sqrt{5} + 7)i + \sqrt{5} + 1 \\ 10i & 10 & -10 & 10 \\ -8i + 6 & 6i + 8 & -6i - 8 & -6i - 8 \\ (\sqrt{5} - 1)i + 3\sqrt{5} + 7 & -(\sqrt{5} + 1)i - 3\sqrt{5} + 7 & -(\sqrt{5} + 1)i - 3\sqrt{5} + 7 & (3\sqrt{5} + 7)i - \sqrt{5} + 1 \end{pmatrix}. \end{alignat} \end{small} Explicitly, we obtain two local generators $h_1, h_2$ (which do not directly correspond to those in \eqref{eq:gens_H_sym}) \begin{small} \begin{alignat}{5} H_{\text{local}}=\Biggl\langle & \frac{1}{50}\begin{pmatrix} 5 & -2i\sqrt{5} + \sqrt{5}\\ i\sqrt{5} - 2\sqrt{5} & -5i \end{pmatrix} \otimes \begin{pmatrix} 5 & 2i\sqrt{5} + \sqrt{5}\\ -i\sqrt{5} - 2\sqrt{5} & 5i \end{pmatrix},\\ & \frac{1}{20^2}\begin{pmatrix} (5\sqrt{5} + 5)i & (4\sqrt{5} - 10)i + 3\sqrt{5} + 5\\ (4\sqrt{5} - 10)i - 3\sqrt{5} - 5 & -(5\sqrt{5} + 5)i \end{pmatrix} \otimes \begin{pmatrix} (5\sqrt{5} - 5)i & -(4\sqrt{5} + 10)i + 3\sqrt{5} - 5\\ -(4\sqrt{5} + 10)i - 3\sqrt{5} + 5 & (-5\sqrt{5} + 5)i \end{pmatrix} \Biggr\rangle.
\end{alignat} \end{small} In this basis, we see that the first and the second tensor factor are similar, but not identical; they correspond to inequivalent representations of $\mathrm{SL}(2,5)$. Applying the transformation $T$ to the complete MUB in \eqref{eq:MUB1}--\eqref{eq:MUB5} we obtain the $20$ states of the iso-entangled complete set of MUBs shown in Table \ref{table:isoMUB}. Partial traces over both subsystems of these 20 states form regular dodecahedra in the Bloch ball, shown in Fig. 2. Both configurations are related by an antiunitary transformation, which includes multiplication by a diagonal matrix with diagonal $\qty(1,\,i)$ and complex conjugation. The phases are chosen such that the action of $H_{\text{local}}$ on these states does not introduce additional phase factors. \begin{table}[hbt] \begin{alignat*}{10}\def1.4{1.4} \begin{array}{|c|c|c|c|c|} \hline \text{group element}& \ket{00} &\ket{01} & \ket{10} & \ket{11}\\ \hline id & (\sqrt{5} + 1)i + 3\sqrt{5} - 7 & -10i & 8i - 6 & (-\sqrt{5} + 1)i - 3\sqrt{5} - 7 \\ h_2h_1^2h_2h_1h_2 & (\sqrt{5} + 1)i + 3\sqrt{5} - 7 & 10i & -8i + 6 & (-\sqrt{5} + 1)i - 3\sqrt{5} - 7 \\ h_2h_1h_2h_1h_2h_1^2h_2 & (-\sqrt{5} + 1)i - 3\sqrt{5} - 7 & 10 & 6i + 8 & (\sqrt{5} + 1)i + 3\sqrt{5} - 7 \\ h_1h_2h_1^2h_2h_1h_2 & (-\sqrt{5} + 1)i - 3\sqrt{5} - 7 & -10 & -6i - 8 & (\sqrt{5} + 1)i + 3\sqrt{5} - 7 \\ \hline h_1^2h_2h_1h_2 & (2\sqrt{5} + 11)i + \sqrt{5} - 2 & 5i - 5 & -i + 7 & (-2\sqrt{5} + 11)i - \sqrt{5} - 2 \\ h_1h_2h_1h_2h_1^2h_2 & (3\sqrt{5} - 4)i + 4\sqrt{5} + 3 & -5i - 5 & 7i + 1 & (-3\sqrt{5} - 4)i - 4\sqrt{5} + 3 \\ h_2h_1h_2h_1^2h_2h_1h_2 & (\sqrt{5} - 4)i - 2\sqrt{5} + 3 & 5i - 15 & -7i - 1 & (-\sqrt{5} - 4)i + 2\sqrt{5} + 3 \\ h_2 & (\sqrt{5} - 4)i - 2\sqrt{5} + 3 & 5i + 5 & 5i + 15 & (-\sqrt{5} - 4)i + 2\sqrt{5} + 3 \\ \hline h_2h_1h_2h_1^2h_2h_1h_2h_1^2h_2 & (2\sqrt{5} + 11)i + \sqrt{5} - 2 & -5i + 5 & i - 7 & (-2\sqrt{5} + 11)i - \sqrt{5} - 2 \\ h_2h_1h_2h_1h_2 & (3\sqrt{5} - 4)i + 4\sqrt{5} + 3 & 5i + 
5 & -7i - 1 & (-3\sqrt{5} - 4)i - 4\sqrt{5} + 3 \\ h_1h_2h_1^2h_2 & (\sqrt{5} - 4)i - 2\sqrt{5} + 3 & -5i + 15 & 7i + 1 & (-\sqrt{5} - 4)i + 2\sqrt{5} + 3 \\ h_2h_1^2h_2 & (\sqrt{5} - 4)i - 2\sqrt{5} + 3 & -5i - 5 & -5i - 15 & (-\sqrt{5} - 4)i + 2\sqrt{5} + 3 \\ \hline h_2h_1h_2 & (-2\sqrt{5} + 11)i - \sqrt{5} - 2 & -5i - 5 & -7i - 1 & (2\sqrt{5} + 11)i + \sqrt{5} - 2 \\ h_2h_1^2h_2h_1h_2h_1^2h_2 & (-3\sqrt{5} - 4)i - 4\sqrt{5} + 3 & -5i + 5 & -i + 7 & (3\sqrt{5} - 4)i + 4\sqrt{5} + 3 \\ h_1^2h_2h_1h_2h_1^2h_2 & (-\sqrt{5} - 4)i + 2\sqrt{5} + 3 & -15i - 5 & i - 7 & (\sqrt{5} - 4)i - 2\sqrt{5} + 3 \\ h_1h_2 & (-\sqrt{5} - 4)i + 2\sqrt{5} + 3 & 5i - 5 & -15i + 5 & (\sqrt{5} - 4)i - 2\sqrt{5} + 3 \\ \hline (h_1h_2)^2 & (-2\sqrt{5} + 11)i - \sqrt{5} - 2 & 5i + 5 & 7i + 1 & (2\sqrt{5} + 11)i + \sqrt{5} - 2 \\ h_1^2h_2 & (-3\sqrt{5} - 4)i - 4\sqrt{5} + 3 & 5i - 5 & i - 7 & (3\sqrt{5} - 4)i + 4\sqrt{5} + 3 \\ (h_1h_2h_1^2h_2)^2 & (-\sqrt{5} - 4)i + 2\sqrt{5} + 3 & 15i + 5 & -i + 7 & (\sqrt{5} - 4)i - 2\sqrt{5} + 3 \\ h_2h_1h_2h_1^2h_2 & (-\sqrt{5} - 4)i + 2\sqrt{5} + 3 & -5i + 5 & 15i - 5 & (\sqrt{5} - 4)i - 2\sqrt{5} + 3 \\ \hline \end{array} \end{alignat*} \caption{Coefficients of the $20$ locally equivalent states (scaled by a factor of $20$) which form a complete set of iso-entangled MUBs for two qubits. The first row corresponds to the fiducial vector given in Eq. (1). The ordering of the bases, separated by horizontal lines, is the same as in \eqref{eq:MUB1}--\eqref{eq:MUB5}. In the first column we list a group element in terms of the generators $h_1$ and $h_2$ that maps the first vector to the particular vector. Note that the first vector is an eigenvector of $h_1$.} \label{table:isoMUB} \end{table} Furthermore, due to the symmetry of the group $H_{\rm perm}$, for each tensor factor the sets of $20$ unitary single-qubit matrices acting in both subsystems to generate elements of all five MUBs from the fiducial state (1), form a unitary $5$-design. 
It is worth emphasizing here that a given configuration treated as a design in various spaces may lead to designs of a different degree. For instance, the set of five iso-entangled MUBs in ${\cal H}_4$ forms a projective $2$-design, the partial traces of these 20 vectors lead to a mixed-state $3$-design inside the Bloch ball $\Omega_2$, while the corresponding $20$ unitary matrices form a unitary $5$-design in $U(2)$. A single iso-entangled basis is a projective $1$-design, its partial traces form a mixed-state $2$-design in $\Omega_2$, and the corresponding 4 unitary matrices lead to a unitary $2$-design. \section{Proof of Propositions}\label{proofprops} \subsection{Proof of Proposition~\ref{prop1}}\label{APP_proof_1} Following the steps of the proof of an analogous statement for unitary designs by Scott \cite{Sc08}, we start by introducing the following operator in dimension $N^t$ determined by a constellation of $M$ states $\rho_i$ in dimension $N$: \begin{equation}\label{OpS} S = \frac{1}{M}\sum_{i=1}^M \rho_i^{\otimes t} - \int_{\Omega_N} \rho^{\otimes t}\dd{\rho_{\rm HS}}. \end{equation} Next we consider the trace of the positive operator $S^\dag S$, \begin{align} 0 & \leq \Tr(S^\dag S) = \nonumber\\ & = \frac{1}{M^2}\sum_{i,j=1}^M \Tr(\rho_i^{\otimes t}\rho_j^{\otimes t}) - 2 \Tr(\frac{1}{M}\sum_{i=1}^M \rho_i^{\otimes t} \int_{\Omega_N} \sigma^{\otimes t} \dd{\sigma_{\rm HS}}) + \Tr(\int_{\Omega_N} \rho^{\otimes t}\dd{\rho_{\rm HS}} \int_{\Omega_N} \sigma^{\otimes t}\dd{\sigma_{\rm HS}}). \end{align} From this inequality we derive an analogue of the Welch inequality for mixed-state $t$-designs: \begin{equation} \label{proofprop1} 2 \Tr(\frac{1}{M}\sum_{i=1}^M \rho_i^{\otimes t} \int_{\Omega_N} \sigma^{\otimes t}\dd{\sigma_{\rm HS}}) - \frac{1}{M^2}\sum_{i,j=1}^M \Tr(\rho_i^{\otimes t}\rho_j^{\otimes t}) \leq \Tr(\int_{\Omega_N} \rho^{\otimes t}\dd{\rho_{\rm HS}} \int_{\Omega_N} \sigma^{\otimes t}\dd{\sigma_{\rm HS}}). \end{equation} Eq.
\eqref{OpS} implies that the above inequality is saturated if and only if the set of mixed states $\{\rho_i\}$ forms a mixed-state $t$-design, which implies Proposition \ref{prop1} and leads to Eq. \eqref{welchprop1}. \qed As a simple consequence of \eqref{proofprop1}, we can see that every mixed-state $t$-design consisting of $M$ states satisfies \begin{equation} \frac{1}{M^2}\sum_{i,j=1}^M \Tr(\rho_i\rho_j)^t = \gamma_{N,t}, \end{equation} which is a necessary property of a mixed-state $t$-design. Here $\gamma_{N,t} = \Tr \omega^2_{N,t}$ denotes the purity of the averaged state $ \omega_{N,t}= \int_{\Omega_N} \rho^{\otimes t}\dd{\rho_{\rm HS}}$. \subsection{Purity of a random 2- and 3-quNit product state after twirling} We start with the case $t=2$ by evaluating a two-copy average product state. It is convenient to use the twirling operation acting on any bipartite state $\rho_{AB}$ of dimension $N^2$, defined by an integral with respect to the Haar measure on the unitary group $U(N)$, corresponding to the local operations, \begin{equation} \mathcal{T}_2(\rho_{AB}) = \int_{U(N)} (U\otimes U)\rho_{AB}(U^\dag\otimes U^\dag) \dd_{\rm H}(U) . \end{equation} The result of this operation can be given in terms of the projection operators $\pi_\pm = (\mathbb{I}\pm O_{\text{SWAP}})/2$, projecting onto the symmetric and antisymmetric subspaces, respectively, as \begin{equation} \mathcal{T}_2(\rho_{AB}) = \alpha_+ \pi_+ + \alpha_- \pi_- , \end{equation} with coefficients given by \begin{equation} \alpha_\pm = \frac{2\Tr(\rho_{AB}\pi_\pm)}{N(N\pm1)}, \end{equation} with the SWAP operation defined as $O_{\text{SWAP}}\qty(\ket{\psi}\otimes\ket{\phi}) = \ket{\phi}\otimes\ket{\psi}$.
Making use of the fact that $\ev{\sum_{i\neq j} \lambda_i\lambda_j} = 1 - \ev{\Tr\rho^2}$, it is easy to show that for a two-copy state $\rho_{AB}=\rho^{\otimes 2}$ \begin{align} \ev{\Tr(\rho_{AB}\pi_\pm)} = \frac{1 \pm \ev{\Tr\rho^2}}{2}, \end{align} where $\lambda_i$ with $i = 1,\hdots,N$ denote eigenvalues of $\rho$, while the average $\ev{\cdot}$ is taken over the entire set $\Omega_N$ of mixed states of size $N$ with respect to the Hilbert-Schmidt measure. Let us consider $\rho_{AB}$ to be a diagonal bipartite product state, composed of two copies of a state in dimension $N$, $\rho_{AB} = \Lambda^{\otimes 2}$ with $\Lambda = \text{diag}(\lambda_1,\hdots,\lambda_N)$. With this assumption, using the known average purity of the state $\rho$ \cite{ZS01} it can be shown that the coefficients $\alpha_\pm$ averaged with respect to the HS measure are then given by \begin{equation} \ev{\alpha_\pm} = \frac{N \pm 1}{N^3 + N}. \end{equation} Substituting $\ev{\alpha_\pm}$ into the expression for $\mathcal{T}_2(\Lambda^{\otimes 2})$, we find the mean state $\omega_{N,2}$ given in terms of the twirled state averaged over the set $\Omega_N$ \begin{equation} \omega_{N,2} = \ev{\mathcal{T}_2(\Lambda^{\otimes 2})} = \frac{N^2\mathbb{I} + N O_{\text{SWAP}}}{N^4 + N^2} , \label{omegaN2_gen} \end{equation} with purity given by $$ \gamma_{N,2} = \frac{N^2+3}{\left(N^2+1\right)^2}.
$$ \qed As an example, we give the simplest cases for $N=2,\,3$, which are the two-qubit and two-qutrit cases, respectively, with density matrices symmetric with respect to the SWAP operation, \begin{equation*} \omega_{2,2} = \frac{1}{10}\mqty(3 & 0 & 0 & 0 \\ 0 & 2 & 1 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 3),\qquad \omega_{3,2} = \frac{1}{30}\mqty( 4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 3 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 3 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 4 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 3 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 \\ ). \end{equation*} Purities of the states $\omega_{2,2}$ and $\omega_{3,2}$ read $\gamma_{2,2} = 7/25$ and $\gamma_{3,2} = 3/25$. To obtain analogous results in the case of $t=3$ we need to deal with three-copy states and extend the set of permutation operators. The first step is to extend the twirling operation to three-partite systems by averaging the local rotation $U$ over the Haar measure on all three subspaces, such that it can be applied to any three-partite state $\rho_{ABC}$. By elementary consideration of symmetry it is found that a twirled tripartite state $\rho_{ABC}$ must be given by a linear combination of all permutation operators, such that the coefficients within conjugacy classes are the same, \begin{equation} \mathcal{T}_3(\rho_{ABC}) = \int_{U(N)} U^{\otimes 3}\rho_{ABC} U^{\dag\otimes 3}\dd_{\rm H}(U) = a_1 \mathbb{I} + a_2\qty(O_{(12)}+O_{(23)}+O_{(13)}) + a_3\qty(O_{(123)} + O_{(132)}), \label{t3ansatz} \end{equation} where $O_\sigma$ denotes the corresponding matrix representation of the permutation $\sigma$. In particular, the twirling operation can be applied to three copies of the same local diagonal state $\Lambda$, and then averaged over all possible spectra.
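The explicit matrices above can be cross-checked against the closed form \eqref{omegaN2_gen}; the sketch below (illustrative, not part of the derivation) rebuilds $\omega_{N,2}=(N^2\mathbb{I}+N\,O_{\text{SWAP}})/(N^4+N^2)$ for $N=2,3$ and evaluates the purities:

```python
import numpy as np

# Cross-check sketch: rebuild omega_{N,2} = (N^2 I + N SWAP)/(N^4 + N^2)
# and compare with the explicit matrices and purities quoted above.
def swap_op(N):
    S = np.zeros((N * N, N * N))
    for i in range(N):
        for j in range(N):
            S[i * N + j, j * N + i] = 1
    return S

purities = {}
for N in (2, 3):
    omega = (N**2 * np.eye(N * N) + N * swap_op(N)) / (N**4 + N**2)
    purities[N] = np.trace(omega @ omega)

w22 = np.array([[3, 0, 0, 0], [0, 2, 1, 0], [0, 1, 2, 0], [0, 0, 0, 3]]) / 10
same = np.allclose(w22, (4 * np.eye(4) + 2 * swap_op(2)) / 20)
print(purities, same)   # purities 7/25 and 3/25
```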
Averaging the twirled state over all spectra is equivalent to averaging the three-copy state $\rho^{\otimes3}$ over the entire space of mixed states $\Omega_N$ with respect to the HS measure, \begin{equation} \omega_{N,3} = \ev{\mathcal{T}_3(\Lambda^{\otimes 3})} = \int_{\Delta_N}\int_{U(N)} U^{\otimes 3}\Lambda^{\otimes 3}U^{\dag\otimes 3} \dd{\Lambda_{\rm HS}} \dd_{\rm H}(U), \end{equation} which implies a system of three linear equations, \begin{align} 1 = \Tr(\omega_{N,3}) & = N^3 a_1 + 3N^2 a_2 + 2N a_3 ,\nonumber\\ \Tr(O_{(12)}\omega_{N,3}) & = N^2 a_1 + \qty(N^3 + 2N) a_2 + 2N^2 a_3, \nonumber \\ \Tr(O_{(123)}\omega_{N,3}) & = N a_1 + 3N^2 a_2 + \qty(N^3 + N) a_3. \label{t3system} \end{align} First we consider the left-hand sides of the equations, given by well-known values \cite{ZS01}, \begin{align} \Tr(O_{(123)}\omega_{N,3}) & = \ev{\Tr\rho^3} = \frac{5N^2 + 1}{\qty(N^2+1)\qty(N^2 + 2)} . \end{align} In order to evaluate $\Tr(O_{(12)}\omega_{N,3})$, we will use properties of the permutation operator $O_{(12)}$, which imply \begin{align} \Tr(O_{(12)}\omega_{N,3}) = \ev{\Tr \rho^3} + \ev{\sum_{i\neq j}\lambda_i^2\lambda_j} = \ev{\sum_{i, j = 1}^N\lambda_i^2\lambda_j} = \ev{\sum_{i=1}^N \lambda_i^2 \sum_{j=1}^N \lambda_j} = \frac{2N}{N^2 + 1} . \end{align} Upon inserting these into \eqref{t3system} we get \begin{align} a_1 & = \frac{N^3}{N^6 + 3N^4 + 2N^2}, & a_2 & = \frac{N^2}{N^6 + 3N^4 + 2N^2}, & a_3 & = \frac{N}{N^6 + 3N^4 + 2N^2}, \end{align} which solves the case for $\omega_{N,3}$. In order to prove that a $3$-design is also a $2$-design we consider the partial trace over the third subsystem. It is obvious that $\Tr_3 \mathbb{I}_{N^3} = N \mathbb{I}_{N^2}$ and $\Tr_3 O_{(12)} = N O_{\text{SWAP}}$. It is now easy to find that \begin{align} \Tr_3(O_{(13)}) & = \Tr_3(\sum_{i,j,k} \op{ijk}{kji}) = \sum_{i,j,k} \op{ij}{kj}\delta_{ik} = \mathbb{I}_{N^2}, \nonumber \\ \Tr_3(O_{(123)}) & = \Tr_3(\sum_{i,j,k} \op{ijk}{jki}) = \sum_{i,j,k} \op{ij}{jk}\delta_{ik} = O_{\text{SWAP}} .
\nonumber \end{align} Using this we obtain \begin{equation} \Tr_3(\omega_{N,3}) = \qty(Na_1 + 2a_2)\mathbb{I} + \qty(Na_2 + 2a_3)O_{\text{SWAP}} = \frac{N^2\mathbb{I} + NO_{\text{SWAP}}}{N^4+N^2}, \end{equation} which is identical to \eqref{omegaN2_gen} and shows that a mixed-state design for $t=3$ is also a $2$-design. By explicit calculation we obtain the desired coefficient $\gamma_{N,3}$, \begin{equation} \gamma_{N,3} = \frac{N^6+9 N^4+24 N^2+2}{N \left(N^4+3 N^2+2\right)^2} . \end{equation} \subsection{General scheme for calculating purity of averaged qNit states $\omega_{N,t}$} \label{APP_general_N} The approach for finding $\omega_{N,3}$ and $\gamma_{N,3}$ can be extended to any $t$. We start from the analogous observation that the spectrum-averaged twirled state of $t$ copies of a diagonal local state $\Lambda$ is a sum of all permutation operators $O_\sigma$ with coefficients $a_i$ specific to the conjugacy classes $C_i$: \begin{equation} \omega_{N,t} = \ev{\mathcal{T}_t\qty(\Lambda^{\otimes t})} = \int_{\Delta_N}\int_{U(N)} U^{\otimes t}\Lambda^{\otimes t}U^{\dag\otimes t}\dd{\Lambda}_{\rm HS} \,\dd_{\rm H}(U) = \sum_{\qty{C_i}} a_i \sum_{\sigma\in C_i} O_\sigma . \end{equation} In order to compute the coefficients $a_i$ we need to solve the following system of linear equations obtained by considering the averaged twirled state $\omega_{N,t}$ \begin{align} 1 &= \Tr(\omega_{N, t}) \nonumber \\ \ev{\Tr(\rho^2)} & {}= \Tr(O_{(12)}\omega_{N, t}) \nonumber \\ & \;\vdots \nonumber \\ \ev{\Tr(\rho^t)} & {}= \Tr(O_{(12\hdots t)}\omega_{N, t}) , \nonumber \end{align} where the left-hand sides can be obtained by similar arguments as for $t=3$.
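These results lend themselves to a direct numerical check. The sketch below (an illustration only; the permutation convention is chosen arbitrarily) builds the six permutation operators for $N=3$, assembles $\omega_{N,3}$ from the coefficients $a_i$ derived above, and verifies the moment constraints, the purity $\gamma_{N,3}$, and the reduction to $\omega_{N,2}$:

```python
import numpy as np
from itertools import permutations, product

def perm_op(sigma, N):
    """Permutation operator O_sigma on (C^N)^{(x) t}, with t = len(sigma)."""
    t = len(sigma)
    O = np.zeros((N**t, N**t))
    for idx in product(range(N), repeat=t):
        out = tuple(idx[s] for s in sigma)
        O[np.ravel_multi_index(out, (N,)*t),
          np.ravel_multi_index(idx, (N,)*t)] = 1.0
    return O

N = 3
ops = [perm_op(s, N) for s in permutations(range(3))]
den = N**6 + 3*N**4 + 2*N**2
# the coefficient of O_sigma is N^{#cycles(sigma)}/den = Tr(O_sigma)/den,
# i.e. a_1, a_2, a_3 for the identity, the transpositions and the 3-cycles
omega3 = sum((np.trace(O)/den)*O for O in ops)

# the three moment constraints of the linear system
assert np.isclose(np.trace(omega3), 1.0)
assert np.isclose(np.trace(perm_op((1, 0, 2), N) @ omega3), 2*N/(N**2 + 1))
assert np.isclose(np.trace(perm_op((1, 2, 0), N) @ omega3),
                  (5*N**2 + 1)/((N**2 + 1)*(N**2 + 2)))

# purity agrees with gamma_{N,3}
gamma3 = (N**6 + 9*N**4 + 24*N**2 + 2)/(N*(N**4 + 3*N**2 + 2)**2)
assert np.isclose(np.trace(omega3 @ omega3), gamma3)

# partial trace over the third factor reproduces omega_{N,2}
omega2 = np.einsum('abcdec->abde', omega3.reshape((N,)*6)).reshape(N**2, N**2)
target = (N**2*np.eye(N**2) + N*perm_op((1, 0), N))/(N**4 + N**2)
assert np.allclose(omega2, target)
```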
We provide an Ansatz state that solves such a system of equations for any given $t$ \begin{equation} \omega_{N,t} = \frac{\sum_{\sigma\in S_t} \Tr(O_\sigma) O_\sigma}{\sum_{\sigma\in S_t}\Tr(O_\sigma)^2}, \label{hypo2} \end{equation} and the expression for general $\gamma_{N,t}$ follows: \begin{equation} \gamma_{N,t} = \frac{\sum_{\sigma,\tau\in S_t} \Tr(O_\sigma) \Tr(O_\tau) \Tr(O_{\sigma\tau})}{\sum_{\sigma,\tau\in S_t}\Tr(O_\sigma)^2\Tr(O_\tau)^2} . \label{general_gamma_eq} \end{equation} Making use of formula \eqref{general_gamma_eq} one can derive further values of the coefficients $\gamma_{N,t}$, \begin{align} \gamma_{N,2} & = \frac{N^2+3}{\left(N^2+1\right)^2}, \\ \gamma_{N,3} & = \frac{N^6+9 N^4+24 N^2+2}{N \left(N^4+3 N^2+2\right)^2}, \\ \gamma_{N,4} & = \frac{N^8+18 N^6+123 N^4+344 N^2+90}{\left(N^6+6 N^4+11 N^2+6\right)^2}, \\ \gamma_{N,5} & = \frac{N^{12}+30 N^{10}+375 N^8+2420 N^6+7422 N^4+3960 N^2+192}{N \left(N^8+10 N^6+35 N^4+50 N^2+24\right)^2} . \end{align} Due to relation (6), the above results allow one to verify whether a given constellation of density matrices forms a $t$-design. \subsection{Proof of Proposition~\ref{prop2}} \label{App_prop2} It is known that the Fubini-Study measure in the space of pure states in dimension $N^2$, related to the Haar measure on the group $U(N^2)$, induces by partial trace the Hilbert-Schmidt measure on the reduced space of mixed states \cite{ZS01}. The density matrix corresponding to a pure state $\rho_\psi = \ketbra{\psi}$ is linear in both the vector coordinates and their conjugates. Its reduction $\rho_A = \Tr_B \ketbra{\psi}$ retains this property. It is useful to think of the matrix $\rho_A$ as decomposed in the canonical basis $\qty{\ket{i}\bra{j}}_{i,j=1}^N$ in the space of matrices with some coefficients, \begin{equation} \rho_A = \sum_{i,j=1}^N a^{ij}\ket{i}\bra{j}.
\end{equation} The Schmidt decomposition of a bipartite state \begin{equation} \ket{\psi} = \sum_{j=1}^{N} \sqrt{\lambda_j} \ket{j'}\otimes\ket{j''}, \end{equation} which provides the eigenvalues $\lambda_i$ of the partial trace $\rho_A$, may be viewed as a decomposition in a certain basis. Therefore each eigenvalue $\lambda_i$ can be represented as \nobreak \begin{equation} \lambda_i = \sum_{k,l=1}^N A_i^{kl}a^{kl}, \end{equation} where $A_i^{kl}$ is a transition matrix for the change of basis. The above shows that every $\lambda_i$ is linear with respect to the entries of the reduced matrix, which leads to the conclusion that it is linear with respect to the components of the pure state $\ket{\psi}$. Having established the proper class of polynomials $g_t$ of eigenvalues of order $t$ and the flat Hilbert-Schmidt measure, we have demonstrated that Proposition \ref{prop2} holds true. \subsection{Reconstruction formula} \label{recoform} In this section we demonstrate a way to obtain a reconstruction formula for any state $\rho$ using measurements from a mixed $2$-design. First, in order to properly satisfy the requirements on tomography, we rescale the design in such a way that \begin{equation} \sum_{i=1}^M \tilde{\rho}_i = \mathbb{I}, \end{equation} which is satisfied by setting $\tilde{\rho}_i = \frac{N}{M}\rho_i$.
Given the requirement \eqref{omega} on mixed $2$-designs and result \eqref{omegaN2_gen}, we arrive at the equation \begin{equation} \frac{1}{M}\sum_{i=1}^M \rho_i^{\otimes 2} = \frac{M}{N^2}\sum_{i=1}^M \tilde{\rho}_i^{\otimes 2} = \frac{1}{N^4 + N^2}\sum_{j,k} \qty( N^2 \ketbra{j}\otimes\ketbra{k} + N\op{j}{k}\otimes\op{k}{j} ) . \end{equation} Multiplying by an arbitrary operator $A\otimes\mathbb{I}$ and taking the partial trace over the first system, we obtain \begin{equation} \frac{M}{N^2}\sum_{i=1}^M \Tr(A\tilde{\rho}_i)\tilde{\rho}_i = \frac{1}{N^4 + N^2}\qty(N^2 \Tr(A)\mathbb{I} + N A) . \end{equation} Taking $A = \rho$ to be a density matrix, we easily get the reconstruction formula: \begin{equation} \rho = \frac{\qty(N^2 + 1)M}{N}\sum_{i=1}^M p_i \tilde{\rho}_i - N \mathbb{I}_N, \end{equation} where $p_i = \Tr(\tilde{\rho}_i \rho)$. Note that a mixed-state design corresponds to a measurement of a bipartite system, in which party $A$ does not have full control over the subsystem $B$. \subsection{Proof of Proposition~\ref{dilution_prop}} \label{app_dilut} By construction, averaging over two copies of each state in a projective 2-design yields a symmetric state \begin{equation} \frac{1}{m}\sum_{i=1}^m \op{\psi_i}^{\otimes 2} = \frac{\mathbb{I}_{N^2} + O_{\text{SWAP}}}{N^2 + N} , \end{equation} which, by elementary manipulation, is turned into a state corresponding to the defining state $\omega_{N, 2}$ for a mixed 2-design \begin{equation} (1-p)\frac{\mathbb{I}_N^{\otimes 2}}{N^2} + \frac{p}{m}\sum_{i=1}^m \op{\psi_i}^{\otimes 2} = \frac{N^2\mathbb{I}_{N^2} + N O_{\text{SWAP}}}{N^4 + N^2} = \omega_{N, 2} , \end{equation} which completes the proof. In particular, consider the standard complete set of MUBs in the extended dimension $N^2$. Then the states obtained by reduction of the $N+1$ separable bases form $N$ copies of the complete set of $N+1$ MUBs in ${\cal H}_N$.
Extending this configuration by the suitably weighted maximally mixed state, obtained by the partial trace of the remaining $N^2 - N$ maximally entangled bases, one obtains the mixed-state 2-design in dimension $N$ equivalent to the one implied by Prop. \ref{dilution_prop}. \subsection{Proof of Proposition~\ref{prop4}} \label{App_prop5} Consider a simplicial $t$-design $\qty{\lambda_i}_{i=1}^n$ with respect to the HS measure $\dd{\lambda}_{\rm HS}$ on the simplex of eigenvalues $\Delta_N$, the corresponding set of diagonal matrices $\Lambda_i = {\rm diag}\qty(\lambda_i)$ of order $N$, and a unitary $t$-design $\qty{U_j}_{j=1}^m$ of matrices from $U(N)$. By definition, a homogeneous function of order $t$ in the diagonal entries of $\Lambda$, or in the entries of $U$ and $U^\dag$, respectively, averaged over the corresponding design is equal to its average over the entire space, \begin{align} \frac{1}{n} \sum_{i=1}^n f_t\qty(\lambda_i) & = \int_\Delta f_t(\lambda) \dd{\lambda}_{\rm HS}, & \frac{1}{m} \sum_{j=1}^m g_t\qty(U_j,U_j^\dag) &= \int_{U(N)} g_t\qty(U,U^\dag) \dd_{\rm H}(U). \end{align} To construct a mixed-state $t$-design we will average a homogeneous function $h_t$ of degree $t$ over the space $\Omega_N$ of mixed states with respect to the Hilbert-Schmidt measure $\mu_{\rm HS}$. Such an integral factorizes into the average over the space $U(N)$ of unitary matrices with respect to the Haar measure and the average over the simplex of eigenvalues $\Delta_N$ with respect to the HS measure $\dd{\Lambda}_{\rm HS}$, \begin{align} \int_{\Omega_N} h_t\qty(\rho) \dd{\rho_{\rm HS}} = \int_{U(N)} \dd_{\rm H}(U) \int_{\Delta_N} h_t\qty(U\Lambda U^\dag) \dd{\Lambda}_{\rm HS}. \end{align} As the entries of a density matrix $U\Lambda U^\dag$ are linear in the entries of $U$, $U^\dag$, and $\Lambda$, the function $h_t$ is homogeneous of degree $t$ in the entries of $U$, $U^\dag$, and $\Lambda$.
Hence the integral over the unitary group can be replaced by the sum over the unitary design, while the remaining integral over the simplex is equal to the average over the simplicial design \begin{align} \int_{\Omega_N} h_t\qty(\rho) \dd{\rho_{\rm HS}} & = \int_{\Delta_N} \frac{1}{m}\sum_{j=1}^m h_t\qty(U_j\Lambda U_j^\dag) \dd{\Lambda}_{\rm HS}\nonumber \\ & = \frac{1}{mn}\sum_{j=1}^m\sum_{i=1}^n h_t\qty(U_j\Lambda_i U_j^\dag) = \frac{1}{mn}\sum_{j=1}^m\sum_{i=1}^n h_t\qty(\rho_{ij}). \end{align} Thus, the expression for the mean value of $h_t$ averaged over the entire set $\Omega_N$ implies that the set of $mn$ density matrices obtained by a Cartesian product of both designs, $\rho_{ij}=U_j \Lambda_i U_j^{\dagger}$, forms a mixed-state $t$-design. \qed Note that the number $M=nm$ of elements of such a product design can be reduced. Let $\tilde \Delta_N$ denote the $1/N!$ part of the simplex $\Delta_N$ distinguished by a given order of the components of the probability vector, $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_N$. Since the unitary group contains permutation matrices, which change the order of the components $\lambda_i$, the integration over the spectrum of $\rho$ can be restricted to the set $\tilde \Delta_N$. Let $n'$ denote the number of points of the simplicial design belonging to the asymmetric part $\tilde \Delta_N$. To obtain a mixed-state $t$-design in $\Omega_N$ it is thus sufficient to consider the Cartesian product consisting of $n'm$ density matrices, $\rho_{ij}=U_j\Lambda_i U_j^{\dagger}$, $i=1,\dots, n'$ and $j=1, \dots, m$. If a vector $\lambda$ belongs to the boundary of the chamber $\tilde \Delta_N$ (see the example in Fig.~\ref{fig:d=3}), one needs to assign this point a weight inversely proportional to the number of chambers it belongs to.
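This product construction can be illustrated numerically for one qubit. In the sketch below (the specific choices are ours, for illustration only) the unitary design is the single-qubit Clifford group, generated from $H$ and $S$ by closure modulo global phase, and the simplicial design is the single chamber point with spectrum $(1/2+\sqrt{3/20},\,1/2-\sqrt{3/20})$; the resulting product constellation averages to $\omega_{2,2}$.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])

def canon(U):
    """Hash key of a unitary: fix the global phase, round, and serialize."""
    U = np.asarray(U, dtype=complex)
    v = U.flatten()
    ph = v[np.argmax(np.abs(v) > 1e-9)]           # first nonzero entry
    return (np.round(U * (abs(ph)/ph), 6) + (0.0 + 0.0j)).tobytes()

# closure of {H, S} modulo phase: the 24-element single-qubit Clifford group
I2 = np.eye(2, dtype=complex)
group, frontier = {canon(I2): I2}, [I2]
while frontier:
    nxt = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            if canon(V) not in group:
                group[canon(V)] = V
                nxt.append(V)
    frontier = nxt
cliffords = list(group.values())
assert len(cliffords) == 24

# one-point simplicial design in the ordered chamber: spectrum 1/2 +- sqrt(3/20)
lam = np.diag([0.5 + np.sqrt(3/20), 0.5 - np.sqrt(3/20)])
states = [U @ lam @ U.conj().T for U in cliffords]

# the product constellation averages to omega_{2,2}
avg = sum(np.kron(r, r) for r in states) / len(states)
SWAP = np.eye(4)[[0, 2, 1, 3]]
assert np.allclose(avg, (4*np.eye(4) + 2*SWAP) / 20)
```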
Note also that the Platonic designs (see Supplemental Material \ref{platonic_des}) can be considered as a product of the HS $2$-design in $\tilde \Delta_2=[0,1/2]$, consisting of a single point and shown in Fig.~3b, and a unitary design in $U(2)$. Due to the homomorphism between the groups $SU(2)$ and $SO(3)$, the latter sets correspond to the spherical designs on the sphere $S^2$, which guarantee that the angular distribution of the density matrices $\rho_i$ forming the mixed-state design is as uniform as possible. The design corresponding to the tetrahedral group gives a tetrahedron inscribed inside the sphere of radius $r=\sqrt{3/20}$, which is unitarily equivalent to any of the constellations obtained by partial trace of one of the five iso-entangled bases listed in Table \ref{table:isoMUB}. Thus the simplest mixed-state $2$-design consisting of four points inside the Bloch ball is obtained by partial trace of one of the iso-entangled bases of size $d=N^2=4$. It is thus natural to ask whether this fact can be generalized: does there exist a basis in the composite $N \times N$ system such that the partial traces of the corresponding projectors form a mixed-state $2$-design composed of $N^2$ density matrices of size $N$? Numerical results obtained for $N=3,4$ and $5$ suggest that this might be the case. \section{Examples of mixed-state designs} \subsection{Mixed-state designs in the Bloch ball} \label{examples} \begin{figure} \caption{One-qubit mixed-state $2$-design obtained by partial trace of the standard set of $5$ MUBs in $\mathcal{H}_2^{\otimes2}$ consisting of $20$ points.
Two maximally entangled bases induce a point of weight $8$ in the center of the Bloch ball, while each of the remaining three separable bases induces two antipodal points on the Bloch sphere with weight $2$ each.} \label{2210} \end{figure} For the known mixed designs we compute the differences between the theoretical bound (6) and the ensemble value achieved, expressed as \begin{equation*} \delta_{N,t} = \gamma_{N, t} - 2 \Tr(\frac{1}{M}\sum_{i=1}^M \rho_i^{\otimes t} \int_{\Omega_N} \sigma^{\otimes t}\dd{\sigma_{\rm HS}}) + \frac{1}{M^2}\sum_{i,j=1}^M \abs{\Tr(\rho_i\rho_j)}^t , \end{equation*} which are summarized in Table \ref{tab:my_label}. \begin{table}[h!] \centering \begin{tabular}{|c||c|c|c|c|c|} \hline t & 1 & 2 & 3 & 4 & 5 \\ \hline \hline Standard MUB & 0 & 0 & 0 & 3.37$\times10^{-3}$ & 8.42$\times10^{-3}$ \\ \textbf{IsoMUB} & 0 & 0 & 0 & 5.88$\times10^{-5}$ & 1.47$\times10^{-4}$ \\ IsoSIC & 0 & 0 & 0 & 5.39$\times10^{-4}$ & 1.35$\times10^{-3}$ \\ Witting Poly & 0 & 0 & 0 & 6.25$\times10^{-4}$ & 1.56$\times10^{-3}$ \\ Hoggar Ex24 & 0 & 0 & 0 & 3.37$\times10^{-3}$ & 8.42$\times10^{-3}$ \\ \hline \end{tabular} \caption{Values of the differences $\delta_{2,t}$ with respect to the global extremum $ \gamma_{N, t}$ computed for known one-qubit mixed designs. Since condition (6) is satisfied and $\delta_{2,t}=0$ for $t=1,2,3$, all these constellations of density matrices form mixed-state $3$-designs. Different values for $t = 4,5$ are implied by differences between the designs. It is easily seen that IsoMUB is the closest to being a $4$-design.
Moreover, one may observe that values for the standard MUB and Example 24 of Hoggar \cite{H82} are identical, as implied by equivalence of the induced mixed designs.} \label{tab:my_label} \end{table} \begin{figure} \caption{One-qubit mixed-state $3$-design obtained by partial trace of the projective $3$-design consisting of $60$ states in $\mathcal{H}_4$ provided by Hoggar \cite{H82} in his Example 24. } \label{2310} \end{figure} \begin{figure} \caption{One-qubit mixed-state $3$-design obtained by partial trace of $40$ states in $\mathcal{H}_4$ leading to the Witting polytope, consisting of $8$ points in reduction A and $14$ in reduction B. The points at the poles in reduction A have weight $1$ and the remaining $6$ points have weight $3$ each. In reduction B, points at the poles have weight $2$ while the remaining $12$ points have weight $3$ each.} \label{2320} \end{figure} In the case of the Witting polytope (which is equivalent to the Penrose dodecahedron \cite{WA17}) we have two regular figures---a parallelepiped (a) and an elongated bipyramid (b) in the respective reductions. This suggests that properly resized regular polytopes could serve as templates for $t$-designs of different orders. \subsection{Platonic designs} \label{platonic_des} In this section we consider constellations of states derived from Platonic solids and their relation with mixed-state $t$-designs. One may consider sets of states in $\mathcal{H}_2$ derived from any $3$-dimensional solid via the standard form of a pure state of a qubit \begin{equation} \ket{\psi(\theta,\phi)} = \mqty(\cos\frac{\theta}{2} \\ e^{i\phi}\sin\frac{\theta}{2}). \end{equation} Using its antipodal counterpart \begin{equation} \ket{\tilde{\psi}(\theta,\phi)} = \mqty(\sin\frac{\theta}{2} \\ -e^{i\phi}\cos\frac{\theta}{2}) \end{equation} one can interpolate between the maximally mixed state and the pure state: \begin{equation} \rho(\theta,\phi,a) = a\ketbra{\psi} + (1-a)\ketbra{\tilde{\psi}}.
\end{equation} Using this, we have found that for each Platonic solid there exists a corresponding mixed-state $2$-design, given by $a = \frac{1}{10} \left(5-\sqrt{15}\right)$. The analytic form of the tetrahedral design $\qty{\rho_i}_{i=1}^4$, corresponding to a rescaled SIC-POVM, is given below: \begin{align*} \rho_1 & = \left( \begin{array}{cc} \frac{1}{10} \left(5-\sqrt{15}\right) & 0 \\ 0 & \frac{1}{10} \left(5+\sqrt{15}\right) \\ \end{array} \right), & \rho_2 & = \left( \begin{array}{cc} \frac{1}{30} \left(15+\sqrt{15}\right) & e^{-i \frac{2\pi}{3}} \sqrt{\frac{2}{15}} \\ e^{i \frac{2\pi}{3}} \sqrt{\frac{2}{15}} & \frac{1}{30}\qty(15 - \sqrt{15}) \\ \end{array} \right), \\ \rho_3 & = \left( \begin{array}{cc} \frac{1}{30} \left(15+\sqrt{15}\right) & -\sqrt{\frac{2}{15}} \\ -\sqrt{\frac{2}{15}} & \frac{1}{30}\qty(15 - \sqrt{15}) \\ \end{array} \right), & \rho_4 & =\left( \begin{array}{cc} \frac{1}{30} \left(15+\sqrt{15}\right) & e^{i \frac{2\pi}{3}} \sqrt{\frac{2}{15}} \\ e^{-i \frac{2\pi}{3}} \sqrt{\frac{2}{15}} & \frac{1}{30}\qty(15 - \sqrt{15}) \\ \end{array} \right). \end{align*} As mentioned in the main body of the paper, this configuration of four mixed states is equivalent up to a unitary rotation to the $2$-designs obtained by partial trace of any of the five iso-entangled bases given in Table \ref{table:isoMUB} and shown in Fig. \ref{2221}. \begin{table}[h!] \centering \begin{tabular}{|c||c|c|c|c|} \hline t & 2 & 3 & 4 & 5 \\ \hline \hline Tetrahedral & 0 & 6$\times10^{-3}$ & 1.25$\times10^{-2}$ & 1.69$\times10^{-2}$ \\ Octahedral & 0 & 0 & 1.14$\times10^{-3}$ & 2.85$\times10^{-3}$ \\ Cubic (IsoSIC) & 0 & 0 & 5.39 $\times10^{-4}$ & 1.35$\times10^{-3}$ \\ Icosahedral & 0 & 0 & 5.88$\times10^{-5}$ & 1.47$\times10^{-4}$ \\ Dodecahedral (IsoMUB) & 0 & 0 & 5.88$\times10^{-5}$ & 1.47$\times10^{-4}$ \\ \hline \end{tabular} \caption{Residual values of $\delta_{2,t}$ for $t\geq 2$ for mixed $2$-designs derived from Platonic solids.
By construction the value $\delta_{2,t}=0$ implies that the constellation forms a $t$-design in the Bloch ball $\Omega_2$. Note that the icosahedral and dodecahedral configurations form $3$-designs, while their residual values $\delta_{2,4}$ and $\delta_{2,5}$ are identical.} \label{tab:my_label2} \end{table} \section{Projective designs and averaging sets in the probability simplex}\label{APP_interval} In this section we shall construct averaging sets on the $(N-1)$-dimensional simplex $\Delta_N$ containing all probability vectors of size $N$. In the simplest case of $N=2$ we consider collections of points from the interval $[-1/2,1/2]$. Such designs with respect to the flat Lebesgue measure are related to projective designs for $N=2$, while those with respect to the Hilbert-Schmidt measure allow one to find the radius of the sphere inside the Bloch ball, at which points forming a symmetric mixed-state design should be located. \begin{figure}\label{new_Jakub} \end{figure} A link between projective designs consisting of pure states of an $N \times N$ system and designs formed by the set of density matrices of size $N$ was established in Proposition 2. This result can be treated as an example of a more general construction: an averaging set for a certain space $\Xi$ with respect to the measure $\mu$ allows one to find a corresponding design on the space $\Omega =T(\Xi)$ with respect to the image measure ${\mu}'$ (also called push-forward measure) induced by the transformation $T$. More precisely, for any measurable set $K \subset \Omega$ its image measure reads $\mu'(K)=\mu(T^{-1}(K))$. In the case considered here $\Xi$ represents the complex projective space ${\mathbbm C}P^{\;N^2-1}$, while $T$ denotes the partial trace over an $N$ dimensional subsystem, and $\Omega$ represents the set of density matrices of size $N$. 
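This push-forward mechanism is easy to test numerically. As a sketch (an illustration of the construction, not a statement from the text): the vertices of a regular octahedron form a spherical $3$-design, and since the uniform measure on the sphere projects to the uniform measure on a diameter (Archimedes' hat-box theorem), the projected points, $-1/2,0,1/2$ with weights $1,4,1$, must reproduce the Lebesgue moments on the interval up to degree three.

```python
import numpy as np

# vertices of the regular octahedron: a spherical 3-design on S^2
verts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)

# push-forward along z: the uniform measure on S^2 projects to the uniform
# measure on the diameter (Archimedes); rescale the diameter to [-1/2, 1/2]
x = verts[:, 2] / 2   # projected points: -1/2 and 1/2 once each, 0 four times

for k in range(4):    # a 3-design reproduces Lebesgue moments up to degree 3
    exact = 0.0 if k % 2 else 2*(0.5**(k + 1))/(k + 1)
    assert np.isclose(np.mean(x**k), exact)
```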
In a similar way one can consider spherical designs on the Bloch sphere, $S^2={\mathbbm C}P^{\;1}$, and analyze their projections onto an interval, Fig.~3. Further examples of averaging sets on the interval induced by spherical designs are shown in Fig. \ref{new_Jakub}. A regular octahedron inscribed in a sphere with two vertices at the antipodal poles and four on the equator (see panel a)) induces by projection an averaging set on the interval with weights $1,4,1$ and corresponds to the Simpson rule for numerical integration. Another projection of the octahedron on a line leads to a set consisting of two points at $x_{\pm}=\pm 1/(2\sqrt{3})$ corresponding to the $2$-point Gauss-Legendre integration rule in $[-1/2,1/2]$, see Fig. \ref{new_Jakub}b. In physical terms such a projection of the Bloch sphere onto a line describes decoherence due to interaction of the system with the environment. It is then fair to say that any one-qubit projective design decoheres to a design on an interval, while a projective design formed by pure states in dimension $d$ is mapped by the coarse-graining map (dephasing channel), $T\colon \ket{\psi}\bra{\psi} \mapsto {\rm diag} (\ket{\psi}\bra{\psi})$, to an averaging set in the simplex of $d$-point probability vectors. Such a configuration forms a design in the simplex with respect to the flat Lebesgue measure, which is an image of the unitarily invariant Fubini-Study measure on the complex projective space ${\mathbbm C}P^{\;d-1}$ with respect to the coarse-graining map \cite{ZS01}. For completeness, we present here the explicit form of $t$-designs on the interval for some low values of $t$. Working out conditions (10) for the Lebesgue measure on $[-1/2,1/2]$, it is easy to check whether a set consisting of $M$ points leads to a $t$-design. In some cases one may even get more than required: the set of $M=2$ points satisfies not only the condition for a $2$-design, but also for a $3$-design.
\begin{align} t=1,\,M=1:\quad x_1 & = 0, \\ t=3,\,M=2:\quad x_1 & = - \frac{1}{2\sqrt{3}}, & x_2 & = \frac{1}{2\sqrt{3}}, \\ t=3,\,M=3:\quad x_1 & = - \frac{1}{2\sqrt{2}}, & x_2 & = 0, & x_3 & = \frac{1}{2\sqrt{2}}, \\ t=5,\,M=4:\quad x_1 & = -\frac{1}{30} \sqrt{75+30 \sqrt{5}}, & x_2 & = -\frac{1}{30} \sqrt{75-30 \sqrt{5}}, \nonumber \\ x_3 & = \frac{1}{30} \sqrt{75-30 \sqrt{5}}, & x_4 & = \frac{1}{30} \sqrt{75+30 \sqrt{5}}, \\ t=5,\,M=5:\quad x_1 & = - \frac{1}{12}\sqrt{15 + 3\sqrt{11}}, & x_2 & = - \frac{1}{12}\sqrt{15-3 \sqrt{11}}, & x_3 & = 0, \nonumber \\ x_4 & = \frac{1}{12}\sqrt{15-3 \sqrt{11}}, & x_5 & = \frac{1}{12}\sqrt{15 + 3\sqrt{11}}. \end{align} Averaging sets on an interval with respect to the Hilbert-Schmidt measure are related to mixed-state designs in the set of one-qubit density matrices. In particular, the $2$-design corresponds to the projection of the cube inscribed into the sphere of radius $r=\sqrt{3/20}$ located inside the Bloch ball, see Fig.~3d. \begin{align} t=3,\,M=2:\quad \lambda_1 & = -\sqrt{\frac{3}{20}}, & \lambda_2 & = \sqrt{\frac{3}{20}}. \label{2points} \\ t=3,\,M=3:\quad \lambda_1 & = -\frac{3}{2 \sqrt{10}}, & \lambda_2 & = 0, & \lambda_3 & = \frac{3}{2 \sqrt{10}}. \label{3points} \\ t=5,\,M=4:\quad \lambda_1 & = -\frac{1}{70} \sqrt{735+70 \sqrt{21}}, & \lambda_2 & = -\frac{1}{70} \sqrt{735-70 \sqrt{21}}, \\ \lambda_3 & = \frac{1}{70} \sqrt{735 -70 \sqrt{21}}, & \lambda_4 & = \frac{1}{70} \sqrt{735 + 70 \sqrt{21}}. \label{4points} \end{align} Note that the above results can be used to search for one-qubit mixed-state $t$-designs with $t\ge 3$, since $r_i=|\lambda_i|$ are the radii of the spheres inside the Bloch ball on which the density matrices belonging to the design are located.
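Each of the lists above can be checked against the moments of its reference measure: the flat measure on $[-1/2,1/2]$ has moments $\int x^k dx$, while for $N=2$ the Hilbert-Schmidt measure induces on the centered eigenvalue coordinate a density proportional to $(\lambda_1-\lambda_2)^2$, i.e. $12x^2$ after normalization. A short numerical sketch (the $t=5$ HS set can be checked the same way):

```python
import numpy as np

def leb_moment(k):   # moments of the flat measure on [-1/2, 1/2]
    return 0.0 if k % 2 else 2*(0.5**(k + 1))/(k + 1)

def hs_moment(k):    # moments of the induced HS density 12 x^2 on [-1/2, 1/2]
    return 0.0 if k % 2 else 24*(0.5**(k + 3))/(k + 3)

leb_designs = {3: [np.array([-1, 1])/(2*np.sqrt(3)),
                   np.array([-1/(2*np.sqrt(2)), 0, 1/(2*np.sqrt(2))])],
               5: [np.array([-np.sqrt(75 + 30*np.sqrt(5)),
                             -np.sqrt(75 - 30*np.sqrt(5)),
                              np.sqrt(75 - 30*np.sqrt(5)),
                              np.sqrt(75 + 30*np.sqrt(5))])/30]}
hs_designs = {3: [np.sqrt(3/20)*np.array([-1, 1]),
                  np.array([-1.5, 0.0, 1.5])/np.sqrt(10)]}

# equal-weight averages must match the measure moments up to degree t
for t, sets in leb_designs.items():
    for pts in sets:
        for k in range(t + 1):
            assert np.isclose(np.mean(pts**k), leb_moment(k))
for t, sets in hs_designs.items():
    for pts in sets:
        for k in range(t + 1):
            assert np.isclose(np.mean(pts**k), hs_moment(k))
```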
\subsection{Quantum states and designs in the triangle of $3$-point probability distributions} \label{d3} The standard solution for a complete set of MUBs for a $d=3 \times 3$ system consists of $4$ separable bases and $6$ maximally entangled bases \cite{La04}. The partial trace of these $10\times 9$ pure states of size $9$ leads to a collection of $90$ density matrices of size $3$, which due to Proposition~2 generates a mixed-state $2$-design in the set $\Omega_3$. Eigenvalues of these states form a $2$-design in the probability simplex with respect to the Hilbert-Schmidt measure, induced by partial trace, see Fig.~\ref{fig:d=3}a. Note that these $3$-point probability distributions represent Schmidt vectors of the original pure states $|\Psi_j\rangle$ of the bipartite system composed of two qutrits. \begin{figure} \caption{Designs in the simplex of $N=3$ probability distributions: a) HS $2$-design obtained by reduction of $90$ states forming the standard set of $10$ MUBs in $\mathcal{H}_9$ \cite{La04}. Points at the vertices, corresponding to pure states, are obtained from $4$ separable bases, while the remaining $6$ maximally entangled bases yield the center point representing the maximally mixed state with spectrum $(1/3,1/3,1/3)$. The numbers in red denote weights assigned to each point, adding up to $90$, the total number of states. b) $2$-design with respect to the Lebesgue measure obtained by taking diagonal elements of projections onto $9$ states forming a SIC in dimension $N=3$. } \label{fig:d=3} \end{figure} To obtain a $2$-design in this probability simplex with respect to the flat measure it is sufficient to take an arbitrary realization of a projective $2$-design in the set $\Xi_3$ of pure states in dimension $d=3$ and form the corresponding classical states. Figure~\ref{fig:d=3}b shows such a configuration in the simplex, which stems from $9$ states forming a SIC in dimension three.
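The configuration of Fig.~\ref{fig:d=3}b can be reproduced numerically. Assuming the standard qutrit SIC generated by the Weyl-Heisenberg orbit of the fiducial $(0,1,-1)/\sqrt{2}$ (an assumption made here for illustration), the diagonals of the nine projectors are cyclic shifts of $(0,1/2,1/2)$, and their moments up to degree two match those of the flat measure on the triangle, i.e. the Dirichlet$(1,1,1)$ distribution:

```python
import numpy as np
from itertools import product

d = 3
om = np.exp(2j*np.pi/d)
f = np.array([0, 1, -1]) / np.sqrt(2)      # assumed qutrit SIC fiducial
X = np.roll(np.eye(d), 1, axis=0)          # cyclic shift
Z = np.diag([om**k for k in range(d)])     # clock matrix
sic = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ f
       for a, b in product(range(d), repeat=2)]

# SIC property: |<psi_i|psi_j>|^2 = 1/(d+1) for i != j
for i in range(d*d):
    for j in range(d*d):
        ov = abs(np.vdot(sic[i], sic[j]))**2
        assert np.isclose(ov, 1.0 if i == j else 1/(d + 1))

# decohered states: rows of probabilities, i.e. points in the simplex
probs = np.array([np.abs(v)**2 for v in sic])

# flat (Lebesgue) measure on the triangle = Dirichlet(1,1,1):
# E[p_1] = 1/3, E[p_1^2] = 1/6, E[p_1 p_2] = 1/12
assert np.isclose(probs[:, 0].mean(), 1/3)
assert np.isclose((probs[:, 0]**2).mean(), 1/6)
assert np.isclose((probs[:, 0]*probs[:, 1]).mean(), 1/12)
```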
In a similar spirit, the coarse-graining map, corresponding to complete decoherence and sending projectors onto pure states to classical probability vectors, applied to any SIC configuration in dimension $d=4$ produces a $2$-design with respect to the Lebesgue measure in the regular tetrahedron of $4$-point probability distributions. On the other hand, the Schmidt vectors of $64$ pure states forming a SIC for two subsystems with four levels each lead to a $2$-design with respect to the Hilbert-Schmidt measure induced by partial trace \cite{ZS01}. \end{widetext} \end{document}
arXiv
Search Results: 1 - 10 of 10244 matches for " Charles Wilf " Belief Structures, Common Policy Space and Health Care Reform: A Q Methodology Study [PDF] Charles Wilf Psychology (PSYCH) , 2011, DOI: 10.4236/psych.2011.29143 Abstract: Debate on the merits of health care reform continues even after passage of the Affordable Care Act of 2010. Poll results confirm a split along political party and associated ideological lines with democrats more supportive and republicans generally opposed to the law. As parts of the law are now subject to increasing scrutiny, it may be instructive to question whether a party-centered or surrogate liberal/conservative dichotomy is the best representation of positions in the health care debate. Q Methodology reveals a more complex set of belief structures, suggesting that a simple dichotomy is misleading in terms of the values that underlie the role of health care in society. Five distinct belief structures were found, each with different concerns as to the purpose and potential benefits of various health care initiatives. In addition, Q Methodology allows for the formation of a common policy space within which all belief structures are independently in agreement in four specific areas. It is argued that this empirically derived consensus can serve as a basis for effective political engagement and policy implementation. A Curmudgeon's Complaint Wilf Innerd Brock Education : a Journal of Educational Research and Practice , 2004, Abstract: The student arrives a few minutes early. She is a little out of breath, as it has been quite a dash across town in heavy traffic from school. But she has made it for 4:30 and has a moment or two to collect her thoughts before going in to see her advisor. She has completed the requisite eight courses that are required before she starts on her major paper, including one course on research methods and one on educational statistics. 
She has an A- average and is quietly confident that she can do what is needed and proceed next June to graduate with an M.Ed. degree. She is the first member of her family to have earned any kind of university degree, and this additional qualification will be greeted with a lot of respect, even awe, by her relatives. All she ever wanted to be was a good primary teacher, and she is that, but the M.Ed. will be the icing on the cake. Accelerated series for universal constants, by the W Zmethod Herbert S. Wilf Discrete Mathematics & Theoretical Computer Science , 1999, Abstract: In this paper, the author presents a method, based on WZ theory, for finding rapidly converging series for universal constants. This method is analogous but different from Amdeberhan and Zeilberger's method. The method of characteristics, and "problem 89" of Graham, Knuth and Patashnik Abstract: We apply the method of characteristics for the solution of pde's to two combinatorial problems. The first is finding an explicit form for a distribution that arises in bio-informatics. The second is a question raised by Graham, Knuth and Patashnik abiout a sequence of generalized binomial coefficients. We find an exact formula, which factors in an interesting way, in the case where one of the six parameters of the problem vanishes. We also show that the associated polynomial sequence has real zeros only, provided that one parameter vanishes, and the other five are nonnegative. The variance of the Stirling cycle numbers Abstract: We show that the probability that two permutations of $n$ letters have the same number of cycles is \[\sim \frac{1}{2\sqrt{\pi\log{n}}}.\] Left-to-right maxima in words and multiset permutations Amy Myers,Herb Wilf Abstract: We extend classical theorems of Renyi by finding the distributions of the numbers of both weak and strong left-to-right maxima (a.k.a. outstanding elements) in words over a given alphabet and in permutations of a given multiset. 
The distribution of longest run lengths in integer compositions Abstract: We find the generating function for $C(n,k,r)$, the number of compositions of $n$ into $k$ positive parts all of whose runs (contiguous blocks of constant parts) have lengths less than $r$, using recent generalizations of the method of Guibas and Odlyzko for finding the number of words that avoid a given list of subwords. A combinatorial determinant Abstract: A theorem of Mina evaluates the determinant of a matrix with entries $D^j(f(x)^i)$. We note the important special case where the matrix entries are evaluated at $x=0$ and give a simple proof of it, and some applications. We then give a short proof of the general case. Identically Distributed Pairs of Partition Statistics Abstract: We show that many theorems which assert that two kinds of partitions of the same integer $n$ are equinumerous are actually special cases of a much stronger form of equality. We show that in fact there correspond partition statistics $X$ and $Y$ that have identical distribution functions. The method is an extension of the principle of sieve-equivalence, and it yields simple criteria under which we can infer this identity of distribution functions. The Redheffer matrix of a partially ordered set Abstract: R. Redheffer described an $n\times n$ matrix of 0's and 1's the size of whose determinant is connected to the Riemann Hypothesis. We describe the permutations that contribute to its determinant and evaluate its permanent in terms of integer factorizations. We generalize the Redheffer matrix to finite posets that have a 0 element and find the analogous results in the more general situation.
CommonCrawl
Phạm Minh Hoàng Phạm Minh Hoàng (born 1955) is a French-Vietnamese blogger and lecturer in applied mathematics at the Ho Chi Minh City University of Technology, who was arrested in Vietnam for his political writing and activism on August 13, 2010.[1] Phạm Minh Hoàng, who writes with the pen name Phan Kien Quoc, was convicted on August 10, 2011[2] for writing “33 articles that distort the policies and guidelines of the Party and the State.”[3] He was sentenced to three years in jail and three years of probation under Article 79, “subversion of administration”—one of many penal codes defined vaguely and used to detain and arrest political activists but served only 17 months and spend another three years under house arrest.[4] Phạm Minh Hoàng was based in Ho Chi Minh City, Vietnam[5] where he used to lecture at the university and offered free classes for Vietnamese youth on leadership skills.[6][5] In June 2017, he published a call for help on his Facebook page[7] upon receiving the news that he is likely to lose his Vietnamese citizenship and be deported to France.[8] Hoang was detained by local authorities on June 23[9] before being forcibly exiled to France the following day.[10] Background Phạm Minh Hoàng was born on August 8, 1955[5] in Vung Tau, what is now Ba Ria-Vung Tau Province. In 1973, Hoàng left to study in France. 
Hoàng lived, studied, and worked in France for 28 years, during which time he officially joined the political party Viet Tan.[11][12] Hoàng returned to Vietnam in 2000 and worked as a lecturer at the Ho Chi Minh City Polytechnic University.[13] Before his arrest on August 13, 2010, Hoàng wrote articles about political and social issues in Vietnam, which were published on his blog under the pseudonym Phan Kien Quoc.[11] To empower young Vietnamese to become leaders and serve the community, Hoàng also started teaching free classes on leadership skills.[5] Hoàng's political articles and leadership classes were the basis for his arrest in 2010.[14] 2010 Arrest When Phạm Minh Hoàng was arrested on August 13, 2010, he was detained secretly and unlawfully.[15] His arrest went unconfirmed by Vietnamese authorities until September 9, 2010, when Viet Tan published the details online. At the time, Hoàng was 55 years old and lecturing at the Ho Chi Minh City Polytechnic Institute. According to Hoàng's lawyer, Tran Vu Hai, Hoàng admitted to writing the essays but did not believe he had committed any crime against the state.[16] Revocation of Vietnamese citizenship and forced exile On June 1, 2017, the French Consulate in Ho Chi Minh City invited Hoàng in order to inform him of the Vietnamese government's decision of May 17, 2017 to revoke his Vietnamese citizenship.[7] Phạm Minh Hoàng stated that, given his family's situation, he would be separated from his wife if he were deported.[7] Hoàng received the official letter from the authorities on June 10, 2017[17] on the decision to revoke his Vietnamese citizenship, which was signed by the President of Vietnam, Trần Đại Quang.[18] Reporters Without Borders said this was the first time Hanoi had stripped a dissident of their nationality.[19] He was forcibly exiled to Paris on 25 June 2017 after Vietnamese police surrounded his house at night two days earlier and took him away.[20] This move by the Vietnamese government was condemned
by the international community with Phil Robertson from Human Rights Watch stating that Hanoi had "committed a blatantly illegal, rights violating act that deserves forceful, worldwide condemnation".[21] See also • Viet Tan • Nguyen Quoc Quan • Hong Vo • Đặng Xuân Diệu References 1. "A wave of media suppression in Vietnam - Committee to Protect Journalists". www.cpj.org. Retrieved 2016-08-20. 2. "Vietnamese Bloggers Recognized for Commitment to Rights". 2012-12-20. Retrieved 2016-08-20. 3. VTV, BAO DIEN TU. "Tuyên phạt Phạm Minh Hoàng 3 năm tù". BAO DIEN TU VTV (in Vietnamese). Retrieved 2016-08-20. 4. "World Report 2012: Vietnam". 2012-01-22. Retrieved 2016-08-20. 5. "Pham Minh Hoang". viettan.org. Archived from the original on 2016-10-14. Retrieved 2016-08-20. 6. "Episode 45 - March News Review". Broadcasting Vietnam. Archived from the original on 2016-10-11. Retrieved 2016-08-20. 7. "Phạm Minh Hoàng". www.facebook.com. Retrieved 2017-06-11. 8. "Vietnam : French-Vietnamese blogger threatened with expulsion | Reporters without borders". RSF (in French). Retrieved 2017-06-11. 9. "Vietnam police detain dissident for deportation: wife". Reuters. June 24, 2017. Retrieved 2017-07-08. 10. "Vietnamese-French dissident blogger deported to France". Retrieved 2017-07-08. 11. "Vietnam: Overturn Democracy Activist's Sentence". 2011-11-28. Retrieved 2016-08-20. 12. "French Professor Sentenced". Radio Free Asia. Retrieved 2016-08-20. 13. "Man gets three years for subversion". 2012-10-25. Retrieved 2016-08-20. 14. "On the Charges Against Professor Pham Minh Hoang". www.viettan.org. Retrieved 2016-08-20. 15. "Hanoi arrested blogger Pham Minh Hoang for peacefully expressing his political view". Global Voices Advocacy. 2010-09-29. Retrieved 2016-10-23. 16. "French Professor Sentenced". Radio Free Asia. Retrieved 2016-10-23. 17. "Phạm Minh Hoàng". www.facebook.com. Retrieved 2017-06-11. 18. "Nội Dung Văn Bản 'Quyết định 832/QĐ-CTN'". congbao.chinhphu.vn. Retrieved 2017-06-11. 19. 
"Vietnam expels dissident blogger to France | Reporters without borders". RSF (in French). Retrieved 2017-07-08. 20. "Vietnam exiles dissident after revoking his citizenship". Al Jazeera. 25 June 2017. 21. "Statement by Human Rights Watch on stripping of #Vietnamese citizenship from ..." Retrieved 2017-07-08. External links • Viet Tan Press Release "On the Charges Against Professor Pham Minh Hoang" • "Vietnam jails dissident blogger Pham Minh Hoang", BBC, 10 August 2011 • "Vietnam jails French-Vietnamese teacher Pham Minh Hoang over blogs", Chicago Sun-Times, August 10, 2011
Discontinuity, Nonlinearity, and Complexity
ISSN: 2164-6376 (print), ISSN: 2164-6414 (online)
Volume 1, Issue 1

Heavy-tailed Distributions in Some Stochastic Dynamical Models
pp. 1-40 | DOI: 10.5890/DNC.2012.02.001
Ph. Blanchard, T. Krüger, D. Volchenkov
Heavy-tailed distributions are found throughout many naturally occurring phenomena. We review some models of stochastic dynamics that lead to heavy-tailed distributions (and power-law distributions in particular), including multiplicative noise models, models subject to the Degree-Mass-Action principle (the generalized preferential attachment principle), the intermittent behavior occurring in complex physical systems near a bifurcation point, queuing systems, and models of self-organized criticality. Heavy-tailed distributions appear in them as emergent phenomena sensitive to the coupling rules essential for the entire dynamics.
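As a quick illustration of the first mechanism named in the abstract, multiplicative noise with an additive term (a Kesten-type recursion x_{t+1} = a_t x_t + b_t) generates stationary series with power-law tails whenever E[log a_t] < 0 while a_t still exceeds 1 sufficiently often. The parameter ranges below are illustrative assumptions, not values from the paper:

```python
import random

def kesten_series(steps, seed=1):
    """Iterate x_{t+1} = a_t * x_t + b_t with i.i.d. random coefficients.
    Here E[log a_t] < 0, so the series is stationary, yet the occasional
    a_t > 1 produces intermittent large excursions (heavy tails)."""
    rng = random.Random(seed)
    x, series = 1.0, []
    for _ in range(steps):
        a = rng.uniform(0.5, 1.45)   # E[log a] is slightly negative
        b = rng.uniform(0.0, 1.0)    # additive term keeps x away from 0
        x = a * x + b
        series.append(x)
    return series
```

A log-log histogram of such a series makes the heavy tail visible; the tail exponent κ solves E[a_t^κ] = 1 in the standard Kesten theory.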
Sources and Sinks of Energy Balance for Nonlinear Atmospheric Motion Perturbed by West-to-east Winds Progressing on a Surface of a Rotating Spherical Shell
pp. 41-55 | DOI: 10.5890/DNC.2012.02.002
Ranis N. Ibragimov, Michael Dameron and Chamath Dannangoda
We study the asymptotic behavior of the sources and sinks associated with the effects of rotation and nonlinearity on the energy balance of atmospheric motion perturbed by west-to-east winds progressing on the surface of a rotating spherical shell. The model uses nonlinear viscous and nonviscous incompressible fluid flows on a rotating spherical domain of infinitely small thickness. The energy density and the associated sources and sinks were determined and visualized by means of elementary functions that provide the exact solutions of the nonviscous barotropic vorticity equation on the rotating sphere. It is shown that there exists a particular form of west-to-east flows for which the source and sink terms associated with the energy balance vanish, thus providing the energy conservation law. Moreover, this particular form of atmospheric perturbation preserves the exact solutions in the case of viscous flows.

Secure Communication Based on Chaotic Cipher and Chaos Synchronization
Maricela Jiménez-Rodríguez, Rider Jaimes-Reategui, Alexander N. Pisarchik
We propose a secure communication system which combines two different techniques of chaotic cryptography: a chaotic cipher based on the logistic map for information diffusion, and chaos synchronization of two coupled Rössler oscillators for information confusion. The system contains two channels, one for synchronization and another for information transmission. Using the chaotic logistic map, a message is encrypted into one of the variables of the Rössler oscillator in the transmitter and sent to the receiver via the communication channel, while the Rössler oscillators are synchronized via the synchronization channel by another variable.
Due to excellent confusion and diffusion properties, both exploiting important properties of chaos, the proposed communication system is extremely secure and fast enough to provide communication in real time.

Regularization of Map-based Neuron Models Using Phase Control
Javier Used, Alexandre Wagemakers, Miguel A.F. Sanjuán
Recently, discrete dynamical systems (maps) have also been used as valid phenomenological neuron models, able to furnish an advantageous alternative to continuous dynamical systems for modelling the spiking behavior of single neurons as well as of neuronal networks. Periodic and chaotic spiking, phasic and tonic bursting, subthreshold oscillations and many more specific features of the activity of real neurons can be reproduced by maps with a minimum of analytical complexity. As an external stimulation is applied to the neuron, its response can be of two different natures: periodic or erratic. We present a simple method of control that allows one to choose one of the possible responses when the perturbation is periodic. The phase difference between the periodic driving and the control plays a decisive role.

Statistical Mechanics of Fragmentation-advection Processes and Nonlinear Measurements Problem. I
pp. 79-112 | DOI: 10.5890/DNC.2012.03.002
Vladimir V. Uchaikin
The random fragmentation-advection process is described in terms of a generating functional and multiparticle densities. The aim of this article is to connect the characteristics of a device performing measurements of the process at a fixed time with the characteristics of the elementary fragmentation processes. The measurand is a random number that is a symmetric function of the particle distribution at the moment of measurement.
This function is assumed to be non-linear, and this part of the article is devoted to developing a general approach to computing the mathematical expectation of such a detector's reading, whereas the next part will discuss the adjoint function approach in connection with solving the inverse problem: determining the characteristics of the elementary processes from the results of non-linear measurements.

Unstable and Stable Period-m Motions in a Twin-well Potential Duffing Oscillator
pp. 113-145 | DOI: 10.5890/DNC.2012.03.001
Albert C. J. Luo, Jianzhe Huang
In this paper, unstable and stable period-m motions in the periodically forced Duffing oscillator are predicted analytically through the generalized harmonic balance method. Period-3, period-5 and period-7 periodic motions are investigated as examples for the Duffing oscillator with a twin-well potential. The Hopf bifurcation of periodic motions yields the onset of period-doubling periodic motions. With increasing period number of the periodic motions, there are many co-existing stable and unstable periodic motions, and the stable periodic motions are far fewer than the corresponding unstable ones. This investigation provides a complete picture of unstable and stable periodic motions rather than stable motions only. For any unstable periodic motion, if there is at least one co-existing stable periodic motion, then such an unstable periodic motion will reach the stable periodic motion through transient motion.

Modelling of Synaptic STDP and Analysis in a Two-Neuron Model
V. K. Menz, S. Popovych, T. Kupper
A mathematical model is developed to describe the behaviour of spiking neurons and the adaptation of synaptic weights in the framework of spike-timing-dependent plasticity (STDP) by modifying the model of STDP suggested by Gorchetchnikov, Versace, and Hasselmo [1]. As a result, an STDP curve similar to that found experimentally by Bi and Poo [2] is produced.
This approach is applied to a system of two integrate-and-fire neurons interacting via adapting synapses and stimulated by a constant external current. The dynamics of the considered system over long time periods is examined for both permanent and short initial stimulations. The obtained results are then compared with real data for in vivo and in vitro neurons.

One Kink Solution for a Variety of Nonlinear Fifth-order Equations
pp. 161-170 | DOI: 10.5890/DNC.2012.05.001
Abdul-Majid Wazwaz
In this work, we derive one kink solution for each equation of a variety of nonlinear fifth-order equations. Each equation involves a third-order temporal derivative and a mixed fifth-order derivative. The simplified form of Hirota's method will be used to derive these solutions. The proposed work presents a reliable approach compared to some existing techniques.

Statistical Mechanics of Fragmentation-advection Processes and Nonlinear Measurements Problem. II.
The continuation of the article [1] contains a statement of the adjoint transport theory, a derivation and comparative discussion of the backward and adjoint equations and their solutions, the theory of statistical fluctuations, the theory of perturbations, and the method of variational interpolations.

On the Existence of Stationary Solutions for Some Systems of Non-Fredholm Integro-Differential Equations
Vitaly Volpert, Vitali Vougalter
We prove the existence of stationary solutions for certain systems of reaction-diffusion type equations in the corresponding H^2 spaces. Our method relies on the fixed point theorem when the elliptic problem involves second-order differential operators with and without the Fredholm property.

A Method for Generating Lie Algebras and Applications
pp. 211-224 | DOI: 10.5890/DNC.2012.05.004
Yufeng Zhang
A way of generating Lie algebras from a Lie subalgebra of the Lie algebra A_1 is proposed, for which a few enlarged Lie algebras are constructed.
By establishing their loop algebras and introducing Lax pairs, we obtain two integrable Hamiltonian hierarchies of evolution type: one of them reduces to the well-known nonlinear Schrödinger equation; the other is a nonlinear integrable coupling of the Kaup-Newell (KN) hierarchy, which reduces to a coupled nonlinear integrable model with Hamiltonian structure.

Existence of Solutions to Weakly Generalized Vector F-implicit Variational Inequalities
Salahuddin, Ahmad M.K., Agarwal R.P.
In this paper we study a new class of weakly generalized vector F-implicit variational inequalities in real topological vector spaces. We use the Ferro minimax theorem to discuss the existence of strong solutions for generalized vector F-implicit variational inequalities.

Heat Conduction in Anisotropic Media
Nail H. Ibragimov, Elena D. Avdonina
Nonlinear self-adjointness of the anisotropic nonlinear heat equation is investigated. Mathematical models of heat conduction in anisotropic media with a source are considered and a class of self-adjoint models is identified. Conservation laws corresponding to the symmetries of the equations in question are computed.

Global Synchronization of Large Ensembles of Pulse Oscillators with Time-Delay Coupling
Vladimir V. Klinshov, Vladimir I. Nekorkin
In this paper we study global synchronization in ensembles of time-delay coupled pulse oscillators. We prove that the stability of the synchronization regime depends on the fulfillment of a simple inequality for the derivative of the phase reset curve. This finding is checked by numerical simulations, which confirm the above inequality to be a sufficient condition for synchronization. In the case of a sine-shaped phase reset curve this results in a striped structure of the parameter space, in which areas with and without a synchronous state alternate with the growth of the coupling delay.
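The delay-coupled pulse-oscillator setting of the last abstract is straightforward to prototype. The sketch below is a generic fixed-step simulation with a sine-shaped phase reset curve; all parameter values are purely illustrative assumptions and this is not the authors' model:

```python
import math
import random

def simulate(n=4, eps=0.02, delay=0.15, t_end=50.0, dt=0.001, seed=3):
    """Fixed-step simulation of n identical phase oscillators (unit
    frequency) with delayed pulse coupling.  A pulse emitted at a firing
    arrives 'delay' time units later and shifts every other oscillator's
    phase by eps * sin(2*pi*phase), a sine-shaped phase reset curve."""
    rng = random.Random(seed)
    phase = [rng.random() for _ in range(n)]
    pending = []                     # (arrival_time, source) of pulses in flight
    t = 0.0
    while t < t_end:
        t += dt
        phase = [p + dt for p in phase]            # free evolution
        due = [p for p in pending if p[0] <= t]    # pulses whose delay elapsed
        pending = [p for p in pending if p[0] > t]
        for _, src in due:
            phase = [p + eps * math.sin(2 * math.pi * p) if i != src else p
                     for i, p in enumerate(phase)]
        for i in range(n):                         # register new firings
            if phase[i] >= 1.0:
                phase[i] -= 1.0
                pending.append((t + delay, i))
        phase = [p % 1.0 for p in phase]           # keep phases in [0, 1)
    return phase
```

Tracking the spread of the returned phases while sweeping `delay` is a simple way to probe the striped parameter structure described in the abstract.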
The Dynamical Relationship Between Vegetation and Sediment in Arid and Semiarid Areas
Wei Tang, Huayong Zhang, Tousheng Huang, Liming Dai
The main goal of this research is to show the effects of the interaction among sediment, Aeolian sand and vegetation growth. The critical condition is found for determining the existence of a stable interior equilibrium point. Intuitive phase graphs are described for three cases of the consequences of the interaction. The systematic perspective analysis provides an explanation for the dynamic interaction between wind erosion and vegetation growth in arid and semiarid areas.

Vectorial Inequalities for Integral Operators Involving Ratios of Functions and Convexity
George A. Anastassiou
Here we present vectorial integral inequalities for products of multivariate convex and increasing functions applied to vectors of ratios of functions. As applications we derive a wide range of vectorial fractional inequalities of Hardy type. They involve the left and right Riemann-Liouville fractional integrals and their generalizations, in particular the Hadamard fractional integrals, as well as inequalities for Riemann-Liouville, Caputo, Canavati and their generalized fractional derivatives. These application inequalities are of L^p type, p ≥ 1, and of exponential type.

Fractional Maps and Fractional Attractors. Part I: α-Families of Maps
M. Edelman
In this paper we present a uniform way to derive families of maps from the corresponding differential equations describing systems which experience periodic kicks. The families depend on a single parameter, the order of a differential equation α > 0. We investigate general properties of such families and how they vary with the increase in α, which represents an increase in the space dimension and in the memory of a system (an increase in the weight of the earlier states).
To demonstrate general properties of the α-families we use examples from physics (the Standard α-Family of Maps) and population biology (the Logistic α-Family of Maps). We show that with the increase in α, systems demonstrate more complex and chaotic behavior.

Fractional Electromagnetic Wave
J.J. Rosales, M. Guía, J.F. Gómez, V.I. Tkach
In this article we propose an alternative procedure for constructing fractional differential equations. The order of the derivative being considered is 0 < γ ≤ 1. In particular, we will consider the propagation of electromagnetic waves in an infinitely extended homogeneous medium at rest, characterized by the permittivity ε and permeability μ. Two types of fractional differential equations will be examined separately: one with a fractional time derivative and one with a fractional space derivative. The parameters σ_t and σ_x are introduced, which characterize the existence of the fractional time and space components, respectively. It is shown that in the first case there is a relation between σ_t and the period T_0 of the wave, given by the order γ of the fractional differential equation, and in the second case the relation is between σ_x and the wavelength λ. Due to these relations, the solutions of the corresponding fractional differential equations are given in terms of the Mittag-Leffler function depending only on the parameter γ.

Exponentially Dichotomous Linear Systems of Differential Equations with Piecewise Constant Argument
M. Akhmet
We consider differential equations with piecewise constant argument of generalized type. For the first time, attention is given to the exponential dichotomy of linear systems. Bounded, almost periodic and periodic solutions and their stability are discussed. The study is made in such a way that further construction of the theory will follow for ordinary differential equations. The results are illustrated by examples.

Method of Conservation Laws for Constructing Solutions to Systems of PDEs
N. H.
Ibragimov
In the present paper, a new method is proposed for constructing exact solutions of systems of nonlinear partial differential equations. It is called the method of conservation laws. Application of the method to the Chaplygin gas made it possible to construct new solutions containing several arbitrary parameters. It is shown that these solutions cannot be obtained, in general, as group invariant solutions.

Chaos Generation in Hyperbolic Systems
M.U. Akhmet, M.O. Fen
In the present paper, we consider the extension of chaos in hyperbolic systems with arbitrarily large dimensions. Our investigations comprise chaos in the sense of both Devaney and Li-Yorke. We provide a mechanism for unidirectionally coupled systems through the insertion of chaos from one system to another, where the latter is initially nonchaotic. In our procedure for the chaos extension, we take advantage of chaotic sets of functions to provide mathematically proven results. The theoretical results are supported through simulations of the extension of chaos generated by a Duffing oscillator. A control procedure for the extended chaos is demonstrated numerically in the paper.

Nonlinear Dynamic and Chaotic Saddle in Rectifier Circuit
L.F.R. Turci, E.E.N. Macau, T. Yoneyama
Chaotic systems are recognized by presenting an evolved dynamics in which sophisticated phenomena such as crises, metamorphoses and transitions may take place. In this work we show that these phenomena are present even in a very simple system. Here we analyze a simple rectifier electronic circuit and show the mechanisms that mediate its transition from simple to rich dynamics. Furthermore, we identify the presence of a chaotic saddle in the system, which also implies the occurrence of transient chaos even for parameters for which the system eventually settles into regular behavior.

Invariants in 3D for Classical Superintegrable Systems in Complex Phase Space
J. S. Virdi, S. C.
Mishra
Physical dynamical systems in higher dimensions are always interesting. In this context we present the possibility of a three-dimensional complex dynamical invariant in extended complex phase space (ECPS). A Lie algebraic method is used to study a three-dimensional classical superintegrable system on the extended complex phase space. Such complex invariants play an important role in the analysis of complex trajectories and in the study of non-Hermitian Hamiltonian systems.

An Efficient Deformable Image Registration Method with Mexican Hat Wavelets
M.A. Akinlar, R. Ibragimov
An efficient deformable image registration method using Mexican hat wavelets is presented. The image registration algorithm is expressed as an optimization problem: the minimization of the sum of a similarity measure and a regularization term. A discrete L^2-norm-type similarity measure is associated with a probabilistic wavelet-coefficient regularization term. A greedy algorithm is used to solve the aforementioned optimization problem. Computational applications indicate that the present method is quite fast and efficient in the registration of brain MR images.

Establishing the Onset, and Parameter and Spatial Domains, of Chaos in a Smooth Quadratic Autonomous System Using Sil'nikov Criteria and Competitive Modes Analysis
pp. 1-19 | DOI: 10.5890/DNC.2012.10.001
Thomas K. Vogel and S. Roy Choudhury
In this paper we examine the chaotic regimes of a recently discovered hyperchaotic system in greater depth. Towards that end, we first analyze in detail the numerical observations of horseshoe-type chaotic behavior in this system by the use of Shilnikov analysis. Subsequently, we also employ the technique of Competitive Modes analysis to identify "possible chaotic parameter regimes" for this multi-parameter system.
We find that the Competitive Modes conjectures may in fact be interpreted and employed slightly more generally than has usually been done in recent investigations, with negative values of the squared mode frequencies in fact being admissible in chaotic regimes, provided that the competition among them persists. This is somewhat reminiscent of, but of course not directly correlated to, "stretching (along unstable manifolds) and folding (due to local volume dissipation)" on chaotic attractors. This new feature allows the system variables (which also define the position on the attractor) to grow exponentially during time intervals when the mode frequencies are imaginary and comparable, while oscillating at instants when the frequencies are real and locked in or entrained. Finally, in a novel twist, we re-interpret the components of the Competitive Modes analysis as simple geometric criteria to map out the spatial location and extent, as well as the rough general shape, of the system attractor for any parameter sets corresponding to chaos. The accuracy of this mapping adds further evidence to the growing body of recent work on the correctness and usefulness of the Competitive Modes conjectures.

Transient Dynamics in Complex Systems: Heteroclinic Sequences with Multidimensional Unstable Manifolds
pp. 21-41 | DOI: 10.5890/DNC.2012.11.001
Valentin Afraimovich, Irma Tristan, Pablo Varona and Mikhail Rabinovich
We formulate the basic principles of multi-agent complex system dynamics following lessons from experimental neuro- and cognitive science: 1) the cognitive dynamics in a changing environment is transient and can be considered as a temporal sequence of metastable states; 2) the available resources for information processing are limited; 3) the transient dynamics is robust against noise and at the same time sensitive to information signals. We suggest basic dynamical models that describe the evolution of cooperative modes.
We focus on two limit cases: a) the unstable manifold of metastable states has one leading direction and many others that are characterized by small positive eigenvalues (a system on the edge of instability), and b) the unstable manifold is characterized by a small number of positive eigenvalues in the same range (integration of different flows - binding).

Relaxation Processes in Many Particle Systems -- Recurrence Relations Approach
Anatolii V. Mokshin
A general scheme for the treatment of relaxation processes and temporal autocorrelations of dynamical variables in many-particle systems is presented in the framework of the recurrence relations approach. The time autocorrelation functions and/or their spectral characteristics, which are measurable experimentally (for example, by spectroscopy techniques) and accessible from particle dynamics simulations, can be found by means of this approach, the main idea of which is the estimation of the so-called frequency parameters. Model cases with exact and approximate solutions are given and discussed.

Who Replaces Whom? Local versus Non-local Replacement in Social and Evolutionary Dynamics
Sven Banisch and Tanya Araújo
In this paper, we inspect well-known population genetics and social dynamics models. In these models, interacting individuals, while participating in a self-organizing process, give rise to the emergence of complex behaviors and patterns. While one main focus in population genetics is on the adaptive behavior of a population, social dynamics is more often concerned with the splitting of a connected array of individuals into a state of global polarization, that is, the emergence of speciation. Applying computational and mathematical tools, we show that the way the mechanisms of selection, interaction and replacement are constrained and combined in the modeling has an important bearing on both adaptation and the emergence of speciation.
Differently (un)constraining the mechanism of individual replacement provides the conditions required for either speciation or adaptation, since these features appear as two opposing phenomena, not achieved by one and the same model. Even though natural selection, operating as an external, environmental mechanism, is neither necessary nor sufficient for the creation of speciation, our modeling exercises highlight the important role played by natural selection in the interplay of the evolutionary and the self-organization modeling methodologies.

Breathing Instability in Biological Cells, Patterns of Membrane Proteins
M. Leonetti, G. Boëdec, M. Jaeger
The activity of biological cells often involves the electrical activity of their membranes, which exhibits various spatiotemporal dynamics, from pulses and oscillatory bifurcations to stationary spatial modulations. This last kind of pattern appears on a typical diffusive time scale. A model has been proposed implying a coupling between the current flowing through membrane proteins and their electrophoretic motions in the case of mobile proteins. Here, we study the stability of the pattern in a 2D circular model cell against the appearance of standing waves, the so-called breathing secondary instability.

Exact Solutions and Analysis for a Class of Extended Stokes' Problems
pp. 85-102 | DOI: 10.5890/DNC.2012.12.003
L. Z. Zhang, H. S. Tang, Jerome P.-Y. Maa, and G. Q. Chen
This paper studies a class of unsteady flows as extensions of the classic Stokes' problems, considering the influence of solid walls, the effect of pressure gradients, and the situation of two-layer fluids. The flows are solved using the method of separation of variables and the eigenfunction expansion method. With simplifications, the derived solutions degenerate to the solutions of the classic Stokes' problems, the Couette flow, and the Poiseuille flow.
The exact solutions of these flows clearly illustrate the complexity of the physics involved, including the evolution of flow velocity profiles and energy transfer at fluid boundaries. For a single-layer flow driven by a plate moving at a constant speed, the energy transferred from the plate decreases with time and tends to a non-zero constant as a result of the wall effect. In a single-layer flow with an oscillatory boundary, negative energy input may appear at the boundary. For an air-water flow with a finite depth, the interface velocity is proportional to the air velocity, which is a well-known observation in physical oceanography. In addition, there is no energy transfer at the interface between the two fluids in a purely pressure-driven two-layer flow.

Fractional Fourier Detection of Lévy Flights: Application to Hamiltonian Chaotic Trajectories
pp. 103-114 | DOI: 10.5890/DNC.2013.04.001
Francoise Briolle, Xavier Leoncini, and Benjamin Ricaud
A signal processing method designed for the detection of linear (coherent) behaviors among random fluctuations is presented. It is dedicated to the study of data recorded from nonlinear physical systems. More precisely, the method is suited for signals having chaotic variations and sporadically regular linear patterns, possibly impaired by noise. We use time-frequency techniques and the fractional Fourier transform in order to make it robust and easily implementable. The method is illustrated with an example of application: the analysis of chaotic trajectories of advected passive particles. The signal has a chaotic behavior and encounters Lévy flights (straight lines). The method allows one to detect and quantify these ballistic transport regions, even in noisy situations.

Cosmic Evolution in Fractional Action Cosmology
Victor K. Shchigolev
For the fractional action cosmological model, derived earlier by the author from the variational principle for a fractional action functional, exact solutions are obtained.
The case of a quasi-vacuum state of matter that fills the universe is considered. Moreover, on the basis of a specific ansatz proposed in this paper for the cosmological term, a class of exact solutions of the model equations is obtained. Examples for some given laws of the evolution of the cosmological term are provided. Besides, a formula for the effective equation of state is derived, and the deceleration parameter of the obtained models is studied.

High Degree Multivariate Fuzzy Approximation by Quasi-Interpolation Neural Network Operators
Considered here is the high-degree multivariate fuzzy approximation to the multivariate unit by sequences of multivariate fuzzy quasi-interpolation neural network operators. These operators are multivariate fuzzy analogs of the multivariate real ones considered earlier, and the derived results generalize the earlier real ones to the fuzzy setting. The high-degree multivariate fuzzy pointwise and uniform convergence with rates to the multivariate fuzzy unit operator is given through multivariate fuzzy inequalities involving the multivariate fuzzy moduli of continuity of the Nth order (N ≥ 1) H-fuzzy partial derivatives of the involved multivariate fuzzy-number-valued function.

Conservation Laws in Thomas's Model of Ion Exchange in a Heterogeneous Solution
N.H. Ibragimov and Raisa Khamitova
The physically significant question of the calculation of conservation laws for the Thomas equation is investigated. It is demonstrated that the Thomas equation is nonlinearly self-adjoint. Using this property and applying the theorem on nonlocal conservation laws, the infinite set of conservation laws corresponding to the symmetries of the Thomas equation is computed. It is shown that the Noether theorem provides only one of these conservation laws.
Solvability Conditions for Some Systems of Nonlinear Non-Fredholm Elliptic Equations
Vitali Vougalter

We establish the existence and uniqueness of standing solitary wave-like solutions in H2 for certain systems of nonlocal nonlinear equations. These problems involve second-order differential operators without the Fredholm property.

Parameter Characteristics of Projective Synchronization of Two Gyroscope Systems with Different Dynamical Behaviors
Fuhong Min and Albert C.J. Luo

In this paper, parameter characteristics of the projective synchronization of two gyroscopes with different dynamical behaviors are investigated. The projective synchronization conditions are presented from the theory of discontinuous dynamical systems. From such synchronization conditions, the parameter characteristics for partial and full projective synchronization of two gyroscope systems are studied. Full projective synchronization can be achieved exactly in finite time, instead of the asymptotic synchronization of traditional projective synchronization. The scaling factors in such synchronization are observed through numerical simulations.

Synchronization of Two Identical Restricted Planar Isosceles Three-Body Problems and a Study on Possible Chaos Control
Ayub Khan and Rimpi Pal

In this paper, we have investigated complete synchronization, anti-synchronization, and all possible cases of hybrid synchronization of two restricted planar isosceles three-body problems evolving from different initial conditions using the active control technique. Further, the possible role of control functions in the control of chaos is analyzed. Finally, numerical simulations are performed to illustrate the effectiveness of the proposed control techniques.
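The synchronization-by-control idea in the abstract above can be illustrated with a minimal sketch. This is not the authors' active control scheme for the three-body problem; it is a generic illustration using two identical Lorenz systems with a diagonal linear coupling, and the gain k together with all numerical values below are our own illustrative assumptions.

```python
# Minimal illustration of complete synchronization of two identical
# chaotic systems (here: Lorenz) via diagonal linear coupling.
# Generic sketch, not the active control scheme of the paper above.

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def step(drive, response, k, dt):
    """One Euler step; the response is pulled toward the drive with gain k."""
    fd = lorenz(drive)
    fr = lorenz(response)
    new_d = tuple(d + dt * f for d, f in zip(drive, fd))
    new_r = tuple(r + dt * (f + k * (d - r))
                  for r, f, d in zip(response, fr, drive))
    return new_d, new_r

drive, response = (1.0, 1.0, 1.0), (5.0, -5.0, 20.0)
k, dt = 50.0, 0.001
for _ in range(20000):          # integrate to t = 20
    drive, response = step(drive, response, k, dt)

err = max(abs(d - r) for d, r in zip(drive, response))
print(err)  # tiny: the two trajectories have locked together
```

With the coupling switched on, the error contracts at a rate set by the gain k; with k = 0 the two chaotic trajectories diverge instead.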
Through the Looking-Glass of the Grazing Bifurcation: Part I - Theoretical Framework
James Ing, Sergey Kryzhevich, and Marian Wiercigroch

It is well known for vibro-impact systems that the existence of a periodic solution with a low-velocity impact (so-called grazing) may yield complex behavior of the solutions. In this paper we show that unstable periodic motions which pass near the delimiter without touching it may give birth to chaotic behavior of nearby solutions. We demonstrate that the number of impacts over a period of forcing varies in a small neighborhood of such periodic motions. This allows us to use the technique of symbolic dynamics. It is shown that chaos may be observed in a two-sided neighborhood of grazing, and this bifurcation manifests at least two distinct routes to complex behavior. In the second part of the paper we study the robustness of this phenomenon. In particular, we show that the same effect can be observed in "soft" models of impacts.

Pulse Phenomena for Impulsive Dynamical Systems
Shasha Zheng and Xilin Fu

In this paper, we investigate the pulse phenomena for a class of impulsive dynamical systems. Some sufficient conditions that guarantee the absence or presence of pulse phenomena are obtained, without the boundedness requirement on impulse surfaces. Besides, we also utilize methods of flow theory, focusing on the dynamical behavior in the normal direction to the switching boundary, and generalize several known results to apply to an important example. Then, we study the stability of a nontrivial solution in a class of functional differential equations with pulse phenomena by using the concept of quasi-stability and the method of comparison.

Spectrum of Dimensions for Escape Time
Valentin Afraimovich and Rosendo Vazquez

We introduce a new notion, the spectrum of dimensions of escape time, and study its properties.
The escape time is defined for an initial point of a trajectory according to its ability to reach a hole in the phase space. In the article we generalize this notion to "spots" of initial points, making the escape time a function of a set (spot). Then we apply the Carathéodory-Pesin machinery of fractal dimensions to define the spectrum. For dynamical systems generated by maps of the interval or the circle we obtain explicit formulas in the case where an element of a Markov partition is chosen as the hole.

Period-m Motions and Bifurcation Trees in a Periodically Excited, Quadratic Nonlinear Oscillator
Albert C.J. Luo and Bo Yu

In this paper, analytical bifurcation trees of period-1 motions to chaos in the periodically forced, quadratic nonlinear oscillator are discussed from the generalized harmonic balance method. The analytical solutions for stable and unstable periodic motions in such a quadratic nonlinear oscillator are achieved, and the corresponding stability and bifurcation are discussed. The analytical bifurcation trees from period-1 motions to period-4 motions in such a quadratic oscillator are presented, and numerical illustrations of stable and unstable periodic motions are carried out by the numerical and analytical solutions. This investigation provides a comprehensive picture of complex periodic motion in the periodically excited, quadratic nonlinear oscillator.

Markov Chain Scaffolding of Real World Data
D. Volchenkov

Discrete-time, scale-dependent random walks on a graph model of a relational database can be generated by a variety of equivalence relations imposed between walks (i.e. composite functions, inheritance, property relations, ancestor-descendant relations, data queries, address allocation and assignment policies). The Green function of the diffusion process induced by the random walks allows one to define a scale-dependent geometry. Geometric relations on databases can guide the data interpretation.
In particular, first-passage times in an urban spatial network help to evaluate the tax assessment value of land.

Stochastic Patterns and the Role of Crowding
pp. 301–319 | DOI: 10.5890/DNC.2013.11.001
Claudia Cianci and Duccio Fanelli

A stochastic variant of the Brusselator model is investigated. The model accounts for a long-range coupling among constituents, as well as for the finite capacity of the embedding medium. The mean-field limit of the model is studied and the conditions for Turing and wave instability obtained. A degenerate, cusp-like transition that separates the domains of Turing and wave order can take place. The point of transition is worked out analytically. Interestingly, the region of Turing instability, as delimited by such a transition point, can set in even if the inhibitor diffuses more slowly than the activator. This is a consequence of the generalized diffusion scheme analyzed here, which originates from having imposed an effect of spatial competition. Beyond the deterministic, mean-field picture, we elaborate on the role of stochastic corrections. Granularity, endogenous to the system, can eventually materialize in waves or Turing-like patterns, which we here categorize in distinct classes.

A Special Type of Invariant Solutions and its Connection with Dispersion Relations
Nail H. Ibragimov and Ranis N. Ibragimov

The concept of dispersion relations is widely used in physics and applied mathematics in investigating wave-type solutions of differential equations. On the other hand, Lie group analysis provides another useful method for constructing exact solutions of linear and nonlinear differential equations via the concept of invariant solutions. We show in the present paper that for certain types of differential equations there is a remarkable connection between these two concepts. Namely, the idea of dispersion relations leads to a special type of invariant solutions.
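As a concrete illustration of the connection discussed in the abstract above (our own textbook example, not one taken from the paper), substituting a plane-wave ansatz into the linearized KdV equation produces both its dispersion relation and a family of traveling-wave solutions:

```latex
% Plane-wave ansatz for the linearized KdV equation u_t + u_{xxx} = 0
u(x,t) = e^{i(kx - \omega t)}
\;\Longrightarrow\;
-i\omega + (ik)^{3} = 0
\;\Longrightarrow\;
\omega(k) = -k^{3} .
```

Each fixed wavenumber k then yields a solution invariant under the combined translation x → x + cτ, t → t + τ with c = ω(k)/k, which is the kind of invariant solution produced by Lie group analysis.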
Synchronization Attack to Chaotic Communication Systems
Massimiliano Zanin, J. Ricardo Sevilla-Escoboza, Rider Jaimes-Reátegui, J. Hugo García-López, Guillermo Huerta-Cuellar, and Alexander N. Pisarchik

Synchronization of chaotic oscillators has an important application in cryptography. When two identical oscillators are coupled, they can be completely synchronized, and the chaotic output of the transmitter oscillator can be used to mask a message. Although the oscillator parameters are usually used as secret keys, the sensitivity of such cryptosystems to parameter changes has never been systematically analyzed. To cryptanalyze a communication system based on synchronization of chaotic oscillators, we use a synchronization attack that allows estimating all unknown parameters by minimizing the synchronization error. Using this attack we cryptanalyze popular communication systems based on the Rössler and Chua chaotic electronic circuits. We suggest including this attack as a standard security test for cryptanalysis of chaotic communication systems.

Parallel Computation of Reliable Chaotic Solutions of Saltzman's Equations by Means of the Clean Numerical Simulation
Peng Yang, Zhiliang Lin and Shijun Liao

The method of the so-called "Clean Numerical Simulation" (CNS) is applied to gain reliable chaotic solutions of Saltzman's dynamic system, a simplified model for convection flows of fluid. Based on the high-order Taylor series method with data in a multiple-precision library and a validation check of the global reliability of the result, the CNS provides us a practical way to gain reliable, accurate enough solutions of chaotic dynamic systems in a finite but long enough time interval. Parallel computation is used to greatly increase the computational efficiency. The numerical noises of the CNS can be controlled to be so small that even the influence of the micro-level inherent uncertainty of initial conditions can be investigated in detail.
It is found that the micro-level inherent physical uncertainty (i.e. the unavoidable statistical fluctuation of temperature and velocity of the fluid) of initial conditions of the chaotic Saltzman system transfers into macroscopic randomness. This suggests that chaos might be a bridge between micro-level inherent physical uncertainty and macroscopic randomness. The current work illustrates that the above conclusion holds not only for the Lorenz equation with three ODEs but also for Saltzman's equation with up to nine ODEs, and thus has general meaning.

Three Compartmental Model for Propofol Diffusion During General Anesthesia
Clara M. Ionescu, Dana Copot, and Robin De Keyser

This paper presents the initial steps towards the development of a compartmental model for drug diffusion in the human body, using fractional calculus. The model presented here preserves the mass balance, therefore it maintains the link between physiological and mathematical concepts. The final purpose of this model is to predict drug pharmacokinetics and pharmacodynamics during general anesthesia. However, in this case the model is derived for a general class of drugs, therefore it can be employed in many biomedical applications.

Almost Periodic Solutions of Second Order Neutral Differential Equations with Functional Response on Piecewise Constant Argument

We consider second-order functional differential equations with generalized piecewise constant argument. Conditions for the existence, uniqueness and stability of Bohr almost periodic solutions are established. Appropriate examples which illustrate the results are provided.

Solvability in the Sense of Sequences for Some Non-Fredholm Elliptic Problems
Vitaly Volpert and Vitali Vougalter

We establish solvability in the sense of sequences in the appropriate H2 spaces for certain linear nonhomogeneous elliptic problems involving Schrödinger-type operators without the Fredholm property, using the technique developed in our preceding work [23].
We show the existence of bounded solutions for certain nonlinear Lippmann-Schwinger equations.

A Novel Moist Carbon Dioxide Generation Enhanced Oil Recovery Technology
Geylani M. Panahov, Eldar M. Abbasov, Sayavur I. Bakhtiyarov, Gulshan R. Agayeva, Gabil A. Aliyev, and Sevinj R. Rasulova

A new technique based on "moist" carbon dioxide generation in the wellbore zone of oil formations or at the wellhead is proposed. In this new technology, carbon dioxide is generated as a result of the chemical reaction between an aqueous acid solution and a natural calcium carbonate rock (limestone) as a "gas-yielding" component. The mechanism of the process allows control of the generation rate and volume of the gas, the thermobaric conditions of the oil formation, the mineral and acidity levels of the water, and the phase state of the carbon dioxide. Large-scale industrial application of this technology has significant ecological importance, as it utilizes the wastes of natural-mineral processing plants, which are considered a major source of atmospheric pollution by greenhouse gas emissions.

Treasure Hunting in Virtual Environments: Self-Organized Criticality in Searches Amid Uncertainty
pp. 1–17 | DOI: 10.5890/DNC.2014.03.001
Dimitri Volchenkov, Jonathan Helbach, Marko Tscherepanow, and Sina Kühne

Searching experiments conducted in different virtual environments over a gender-balanced group of people revealed a gender-irrelevant scale-free spread of searching activity on large spatiotemporal scales. The better performance of men in virtual environments can be associated with regularly renewed computer game experience, especially in games played through a first-person perspective. We suggest a simple self-organized critical model of search, in which the experimentally observed scale-free behavior can be interpreted as a trade-off between the value of exploitation versus exploration amid uncertainty.
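A toy version of the scale-free search statistics described in the abstract above can be generated and checked in a few lines. This is our own sketch, not the authors' model: the exponent mu, cutoff l_min, and sample size are arbitrary assumptions. Step lengths are drawn from a Pareto law by inverse-transform sampling, and the tail exponent is then recovered with the Hill estimator.

```python
# Toy scale-free step-length statistics: sample from p(l) ~ l^(-mu), l >= l_min,
# by inverse-transform sampling, then recover mu with the Hill estimator.
# Illustrative sketch only; mu, l_min and n are arbitrary choices.
import math
import random

random.seed(1)
mu, l_min, n = 2.0, 1.0, 200_000

# Inverse CDF of the Pareto law: l = l_min * u^(-1/(mu-1)), u uniform in (0, 1]
steps = [l_min * (1.0 - random.random()) ** (-1.0 / (mu - 1.0))
         for _ in range(n)]

# Hill estimator of the tail exponent: mu_hat = 1 + n / sum(ln(l / l_min))
mu_hat = 1.0 + n / sum(math.log(l / l_min) for l in steps)
print(round(mu_hat, 2))   # close to the true exponent mu = 2.0
```

A straight line of slope -mu on a log-log rank plot of `steps` gives the same diagnosis visually.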
Synchronization of the Cardiac Pacemaker Model with Delayed Pulse-coupling
pp. 19–31 | DOI: 10.5890/DNC.2014.03.002

We reconsider the C. Peskin model of the cardiac pacemaker assuming that the pulse-couplings are delayed. Sufficient conditions for synchronization of identical and non-identical oscillators are obtained. The results are demonstrated with numerical simulations.

Boundary Value Problems for Impulsive Fractional Evolution Integrodifferential Equations with Gronwall's Inequality in Banach Spaces
Dimplekumar N. Chalishajar and K. Karthikeyan

In this paper, we study boundary value problems for impulsive fractional evolution integrodifferential equations with the Caputo derivative in Banach spaces. A generalized singular-type Gronwall inequality is given to obtain important a priori bounds. Some sufficient conditions for the existence of solutions are established by virtue of fractional calculus and a fixed point method under some mild conditions. An example is given to illustrate the results.

Treatment and Vertical Transmission in a HIV-TB Co-infection Model
Carla MA Pinto and Ana Carvalho

In this paper, a mathematical model for co-infection of HIV/AIDS and tuberculosis is proposed. The model includes treatment and vertical transmission for HIV/AIDS. Treatment for tuberculosis is not included. The disease-free equilibrium of the model is computed and local stability is proved. The reproduction numbers of the full model and of its two submodels, concerning single infection by HIV/AIDS and single infection by tuberculosis, are also calculated. Numerical simulations show the effect of the variation of the recruitment rate, of the movement rate, and of the tuberculosis infection rate on the variables of the model. Results are as expected. Namely, an increase in the recruitment rate increases the susceptible population. As the movement rate is decreased, the number of individuals singly infected with HIV decreases.
Moreover, an increase in the tuberculosis infection rate translates into an increase of the individuals singly infected with tuberculosis and dually infected with tuberculosis and HIV/AIDS. Future work will consider the inclusion of treatment of tuberculosis.

Solvability Conditions For Some Non-Fredholm Operators in a Layer in Four Dimensions
Vitaly Volpert and Vitali Vougalter

We study solvability in H2 of certain linear nonhomogeneous elliptic problems involving the sum of the periodic Laplacian and a Schrödinger operator without the Fredholm property, and prove that under reasonable technical conditions the convergence in L2 of their right sides implies the existence and the convergence in H2 of the solutions. We generalize the methods of spectral and scattering theory for Schrödinger-type operators from our preceding work.

Properties of a Periodic Ansatz for the Coarsening of Soliton-lattices
Simon Villain-Guillot

Soliton lattices are periodic solutions of the Ginzburg-Landau equation which can be useful tools to explore the coarsening process (or Ostwald ripening) that takes place during Cahn-Hilliard dynamics. They can be used to identify the stationary solutions of the dynamics and how these intermediate states are destroyed by fluctuations. The coarsening process drives the system from a stationary solution to the next one, which has double the period and lower energy. Using another family of soliton lattices, this process can be described continuously via a phase-field equation. We present here properties of these two families, including the Fourier series decomposition of the non-symmetric soliton lattice which we use as the building block of our ansatz.

Bifurcation Trees of Period-m Motions to Chaos in a Time-Delayed, Quadratic Nonlinear Oscillator under a Periodic Excitation
pp. 87–107 | DOI: 10.5890/DNC.2014.03.007
Albert C. J.
Luo and Hanxiang Jin

In this paper, analytical solutions of periodic motions in a periodically excited, time-delayed, quadratic nonlinear oscillator are obtained through the Fourier series, and the stability and bifurcation of such periodic motions are discussed by eigenvalue analysis. The analytical bifurcation tree of period-1 motion to chaos in such a time-delayed, quadratic oscillator is presented through period-1 to period-8 motion. Numerical illustrations of stable and unstable periodic motions are given by numerical and analytical solutions. Compared to dynamical systems without time-delay, time-delayed dynamical systems possess different periodic motions, and the bifurcation trees of periodic motions to chaos are also distinct.

Variational Iteration Method for Generalized Pantograph Equation with Convergence Analysis
Mohsen Alipour, Dumitru Baleanu, Kobra Karimi, and Sunil Kumar

In this paper, we solve the generalized pantograph equation by converting the problem to a system of ordinary differential equations and using the variational iteration method. We discuss convergence of the proposed method to the exact solution. Finally, illustrative examples are given to demonstrate the efficiency of the method.

Mathematical Modelling and Simulation of the Bifurcational Wobblestone Dynamics
Jan Awrejcewicz and Grzegorz Kudra

The Celtic stone, sometimes also called a wobblestone or rattleback, is usually a semi-ellipsoidal solid with a special mass distribution. Most celts lying on a flat, horizontal base and set in rotational motion around a vertical axis can rotate in only one direction. In this work the dynamics of the celt is simulated numerically, but the solid is forced untypically, i.e. it is situated on a harmonically vibrating base. An essential part of the model is the approximate functions describing the contact forces, i.e. dry friction forces and rolling resistance.
They are based on previous works of the authors, but some modifications of the friction model are made, which can be described as a generalization of the earlier used Padé approximants. Periodic, quasiperiodic and chaotic dynamics of the harmonically forced rattleback are found and presented by the use of Poincaré maps and bifurcation diagrams.

The Solvability and Optimal Controls for Some Fractional Impulsive Equation with Finite Delay
Xianghu Liu and Yanfang Li

This paper is concerned with the existence and uniqueness of mild solutions of some fractional impulsive equations with finite delay. Firstly, we introduce fractional calculus, the Gronwall inequality, and Leray-Schauder's fixed point theorem. Secondly, with their help, sufficient conditions for the existence and uniqueness of solutions are presented. Finally, we give an example to illustrate our main results.

I Dress Like Everyone, I Dress Like No Other
D. Volchenkov, E. Banaszak, K. Kocjan, and T. Kruger

By a successive reduction of complexity in the appearance data collected in situ, we have shown that the way people maintain their appearance constitutes a multi-level dynamical process evolving on several incomparable time scales. The slowest component of this process represents a gradual diffusion of features (with a measurable pace of growing variance) from a proto-costume, which likely has the 19th-century military uniform as a prototype for men, and a robe de soirée for women. The rapidly varying component of the process is statistically reminiscent of adiabatic processes in thermodynamics, which evolve rapidly, without exchange of heat between a system and its environment. The appearance of other people can be considered as being in a state of "thermal equilibrium" between the almost imperceptible, subtle modifications of a traditional costume and the one-day gusts of unpredictable fashion.

Extended Mixed AKNS-Lund-Regge Model and Its Self-similarity Reduction
D.V. Ruy and G.R.
de Melo

We discuss the relation between the self-similarity reduction of the generalized mixed mKdV-sinh-Gordon model and the fourth-order equation obtained by Kudryashov in J. Phys. A: Math. Theor. 35 (2002) 93-99. Two particular solutions of this equation are also shown. Then, we extend the mixed AKNS-Lund-Regge model and study its self-similarity reduction. We obtain the fifth Painlevé equation as a particular case of this reduction, and a fourth-order second-degree equation otherwise. The relation between an integrable model and a fourth-order second-degree equation is interesting because the general solution of this equation must have the Painlevé property due to the Ablowitz, Ramani and Segur conjecture.

Improving Accuracy of Complex Network Modeling Using Maximum Likelihood Estimation and Expectation-Maximization
Ehsan Jahanpour and Xin Chen

The structure of a complex network provides important information about its performance and may be used to predict changes in network performance. Degree distributions are used to model the network structure. Four degree distributions, including the power law, Weibull, Poisson, and negative binomial, are applied in this research to three complex networks: the Krebs, HIV, and Power Grid networks. To improve the accuracy of network modeling, the maximum likelihood estimation method and the expectation-maximization algorithm are used to estimate the parameters of the four degree distributions. Several statistical analyses and a simulation study are conducted to determine which degree distribution best describes the network structure. The results show that the degree distributions with two descriptive parameters, Weibull and negative binomial, provide better estimations than the one-descriptive-parameter degree distributions, power law and Poisson.
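The model-selection step described in the abstract above (fitting candidate degree distributions by maximum likelihood and comparing them) can be sketched as follows. This is a hedged illustration on a synthetic degree sequence, not the paper's data or code; it compares only the Poisson and a continuous power-law approximation (Hill estimator with x_min = 1), leaving out the Weibull, negative binomial, and EM fitting used in the study.

```python
# Sketch of MLE-based model selection between two candidate degree
# distributions (Poisson vs. continuous power law with x_min = 1).
# Synthetic data; illustrative only.
import math
import random

random.seed(7)

def poisson_sample(lam):
    """Knuth's method for Poisson random variates."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Synthetic network: 5000 nodes with Poisson(4) degrees, clipped to >= 1
degrees = [max(1, poisson_sample(4.0)) for _ in range(5000)]
n = len(degrees)

# Poisson MLE: lambda_hat is the sample mean
lam = sum(degrees) / n
ll_poisson = sum(k * math.log(lam) - lam - math.lgamma(k + 1)
                 for k in degrees)

# Power-law MLE (continuous approximation, x_min = 1): Hill estimator
s = sum(math.log(k) for k in degrees)
alpha = 1.0 + n / s
ll_powerlaw = n * math.log(alpha - 1.0) - alpha * s

print(ll_poisson > ll_powerlaw)   # Poisson wins on Poisson-generated data
```

Swapping the generator for a heavy-tailed one reverses the comparison, which is the essence of the model-selection procedure the abstract describes.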
Mathematics of Multi-Level Complex Systems

This special issue of the journal "Discontinuity, Nonlinearity, and Complexity" contains the proceedings of the joint workshop "Mathematics of Complex Systems" that took place at the Center for Interdisciplinary Research of the University of Bielefeld on 7-9 October 2013, supported by the Work Programme of FP7: FET Proactive Initiative of the European Commission: "Dynamics of Multi-Level Complex Systems (DyM-CS)".

Q-analysis Based Clustering of Online News
David M.S. Rodrigues

With online publication and social media taking the main role in the dissemination of news, and with the decline of traditional printed media, it has become necessary to devise ways to automatically extract meaningful information from the plethora of sources available and to make that information readily available to interested parties. In this paper we present a method of automated analysis of the underlying structure of online newspapers based on Q-analysis and modularity optimisation. We show how the combination of the two strategies allows for the identification of well-defined news clusters that are free of noise (unrelated stories) and provides automated clustering of information on trending topics in news published online.

Balanced Growth in the Structural Dynamic Economic Model SDEM-2
Dmitry V. Kovalevsky

The Structural Dynamic Economic Model SDEM-2, a follow-up of the model SDEM developed earlier, is essentially an actor-based, system-dynamic model of a closed economy evolving under conditions of a conflict of interests of two powerful aggregated actors: entrepreneurs and wage-earners. We derive the model equations applicable to both the balanced and unbalanced growth paths, and then study the balanced growth (with neither idle physical capital nor unemployment). We first consider an inflexible control strategy of entrepreneurs for deterministic and stochastic cases, and then turn to a more sophisticated nonlinear control strategy.
We also solve a simple optimization problem by calculating the (time-independent) value of the model control parameter maximizing the discounted dividend of entrepreneurs. In view of the simplicity of the model equations, exact analytical solutions can be obtained in many cases, other cases being studied semi-analytically. Even the simplest versions of SDEM-2 are able to produce rather versatile trajectories of the economy, dependent on the values of the model parameters and initial conditions.

Path Integral Distance for the Automated Data Interpretation

The process of data interpretation is always based on the implicit introduction of equivalence relations on the set of walks over the database. Every equivalence relation on the set of walks specifies a Markov chain describing the transitions of a discrete-time random walk. In order to geometrize and interpret the data, we propose a new distance between data units defined as a "Feynman path integral", in which all possible paths between any two nodes in a graph model of the data are taken into account, although some paths are more preferable than others. Such a path integral distance approach to the analysis of databases has proven its efficiency and success, especially on multivariate, strongly correlated data where other methods fail to detect structural components (urban planning, historical language phylogenies, music, street fashion traits analysis, etc.). We believe that it will become an invaluable tool for intelligent complexity reduction and big-data interpretation.

The Probabilistic Structure of Discrete Agent-Based Models
Sven Banisch

This paper describes a formalization of agent-based models (ABMs) as random walks on regular graphs and relates the symmetry group of those graphs to a coarse-graining of the ABM that is still Markovian. An ABM in which N agents can be in δ different states leads to a Markov chain with δ^N states.
In ABMs with a sequential update scheme, by which one agent is chosen to update its state at a time, transitions are only allowed between system configurations that differ with respect to a single agent. This characterizes ABMs as random walks on regular graphs. The non-trivial automorphisms of those graphs make visible the dynamical symmetries that an ABM gives rise to, because sets of micro-configurations can be interchanged without changing the probability structure of the random walk. This allows for a systematic lossless reduction of the state space of the model.

Standardization of Agent-based Modeling in Economic System
Xihao Li

This work combines the generic complexity of economic systems and economic agents with methodologies of multi-agent system analysis and development. This combination results in an integrative framework that serves as a communication protocol for delivering and transmitting agent-based models for economic systems. The integrative framework provides guidance for analyzing economic systems at the micro level, which embeds properties of complexity in structure, heterogeneity in agents' beliefs, and interactions among agents' behaviors. It provides routines for developing standardized agent-based models for economic systems that can be used and reused in interdisciplinary research.

A Hierarchy of Out-of-Equilibrium Actor-Based System-Dynamic Nonlinear Economic Models
Dmitry V. Kovalevsky, Klaus Hasselmann

The actor-based system-dynamic approach to macroeconomic modeling is illustrated for a simple model hierarchy consisting of a basic two-dimensional model with several alternative three-dimensional extensions. The hierarchy is based on an out-of-equilibrium approach: market clearing is not assumed, supply is not equal to demand, and there exists a stock of unsold goods. Depending on actor behaviour, the models exhibit stable exponential growth or instabilities leading to nonlinear oscillations or economic collapse.
In most cases, the simplicity and tractability of the models enable analytical solutions. The examples serve as illustrations of more realistic models developed within the Multi-Actor Dynamic Integrated Assessment Model System (MADIAMS) to assess the long-term impacts of climate mitigation policies.

Critical Phase in Complex Networks: a Numerical Study
Takehisa Hasegawa, Tomoaki Nogawa, and Koji Nemoto

We compare the phase transition and critical phenomena of bond percolation on Euclidean lattices, nonamenable graphs, and complex networks. On a Euclidean lattice, percolation shows a phase transition between the nonpercolating phase and the percolating phase at the critical point. The critical point is stretched to a finite region, called the critical phase, on nonamenable graphs. To investigate the critical phase, we introduce a fractal exponent, which characterizes a subextensive order of the system. We perform Monte Carlo simulations for percolation on two nonamenable graphs, the binary tree and the enhanced binary tree. The former shows the nonpercolating phase and the critical phase, whereas the latter shows all three phases. We also examine the possibility of a critical phase in complex networks. Our conjecture is that networks with a growth mechanism have only the critical phase and the percolating phase. We study percolation on a stochastically growing network with and without a preferential attachment mechanism, and on a deterministically growing network, called the decorated flower, to show that the critical phase appears in those models. We provide a finite-size scaling by using the fractal exponent, which would be a powerful method for numerical analysis of phase transitions involving the critical phase.

Random Parametric Resonance in Time-Dependent Networks of Adaptive Frequency Oscillators

We consider a network of interacting phase oscillators endowed with adaptive mechanisms, leading the collective motion to a consensual dynamical state.
Specifically, for a given network topology (i.e. an adjacency matrix) governing the mutual interactions, the adaptive mechanisms enable all oscillators to ultimately adopt a consensual frequency. Once reached, the consensual frequency subsists even if interactions between the oscillators are switched off. For the class of models we consider, the consensual frequency is independent of the network topology. Even though this independence might suggest that extension to time-dependent networks is straightforward, this is not true here. For time-dependent networks and spectra of the underlying Laplacian matrices, one may observe the emergence of more complex dynamics. Due to their high degree of complexity, these dynamics generally offer little hope for analytical tractability. In this paper, we focus on connected time-dependent networks with circulant adjacency matrices. The simple spectral structures and commutativity properties enjoyed by circulant matrices enable an analytical stability analysis of the consensus state. Ultimately, we are able to reduce the stability analysis to a dissipative harmonic oscillator with parametric pumping.

Adaptive Synchronization of Delayed Chen Chaotic System
Tao Fan, Chang-Zhong Chen, Xiao-Hong Ren, and Ping He

In this paper, we focus on the adaptive synchronization of delayed Chen chaotic systems with unknown parameters. An adaptive synchronization controller and the adaptive updating law are designed. Finally, a numerical simulation is shown to prove the effectiveness of the proposed synchronization controller schemes.

Ulam-Hyers-Rassias Stability for Semilinear Equations
Jinrong Wang and Michal Feckan

We study the Ulam-Hyers-Rassias stability for linear and semilinear equations on Banach spaces from a functional analysis point of view, with several illustrative examples.
More precisely, surjective linear equations on Banach spaces, linear equations on Banach spaces with closed ranges, and surjective semilinear equations between Banach spaces are investigated one by one.

Filaments-nets Structure of the Phase Space of Coin Tossing: Mechanism for Sensitivity and Complexity
Zengyuan Yue

Detailed analysis of trajectories reveals a filaments-nets structure of the phase space of coin tossing, leading to a simple and unified explanation for the extremely sensitive dependence of the outcome, head or tail, on the initial state, for the extremely complex geometry of the cross sections of the basins of attraction for heads and tails, and for the big difference between the transitional region and the "completely random region". A "GDGC" (Great Differentiation & Great Combination) condition is proposed for the stability of statistical regularity, which can also be summarized by the following "Compensation Principle": the more sensitive, i.e. the more unstable, the deterministic process is, the more stable, i.e. the more insensitive, the associated statistical regularity would be.

Automatic Recognition and Tagging of Topologically Different Regimes in Dynamical Systems
Jesse J. Berwald, Marian Gidea, and Mikael Vejdemo-Johansson

Complex systems are commonly modeled using nonlinear dynamical systems. These models are often high-dimensional and chaotic. An important goal in studying physical systems through the lens of mathematical models is to determine when the system undergoes changes in qualitative behavior. A detailed description of the dynamics can be difficult or impossible to obtain for high-dimensional and chaotic systems. Therefore, a more sensible goal is to recognize and mark transitions of a system between qualitatively different regimes of behavior. In practice, one is interested in developing techniques for the detection of such transitions from sparse observations, possibly contaminated by noise.
In this paper we develop a framework to accurately tag different regimes of complex systems based on topological features. In particular, our framework works with a high degree of success in picking out a cyclically orbiting regime from a stationary equilibrium regime in high-dimensional stochastic dynamical systems.

A (2+1)-dimensional Expanding Model of the Davey-Stewartson Hierarchy As Well As Its Hamiltonian Structure
Yufeng Zhang, Wenjuan Rui, and Hon-Wah Tam

Introducing a new 6-dimensional Lie algebra aims at generating a Lax pair whose compatibility condition gives rise to a (1+1)-dimensional integrable hierarchy of equations, which can reduce to the nonlinear Schrödinger equation and two sets of nonlinear integrable equations by taking various parameters. The Hamiltonian structure of the (1+1)-dimensional hierarchy is also obtained by using the trace identity. The reason for generating the above (1+1)-dimensional integrable hierarchy lies in obtaining a (2+1)-dimensional equation hierarchy. That is to say, with the help of the higher-dimensional Lie algebra, we introduce two 4 × 4 matrix operators in an associative algebra A[ξ], from which a new (2+1)-dimensional hierarchy of equations is derived by using the TAH scheme and the Hamiltonian operator of the 1+1-dimensional case; this generalizes the results presented by Tu, in that the reduced case of the hierarchy obtained by us can be reduced to the Davey-Stewartson (DS) hierarchy. Finally, the Hamiltonian structure of the (2+1)-dimensional hierarchy is produced by the trace identity for 2+1 dimensions, which was proposed by Tu. To our knowledge, no previous work has addressed the problem of generating expanding models of a (2+1)-dimensional integrable hierarchy.

Nonlinear Electron Distribution Function in a Plasma
D. Bénisti, A. Friou, and L.
Gremillet

In this paper, we revisit the distribution function derived by making use of the adiabatic approximation for electrons acted upon by a slowly varying electrostatic wave. This allows us to resolve an apparent paradoxical discrepancy between the results of two published papers, namely, V.B. Krapchev and A.K. Ram (1980), Phys. Rev. A 22, 1229, and D. Bénisti and L. Gremillet (2007), Phys. Plasmas 14, 042304. We then briefly recall the relevance of the adiabatic approximation as well as its limitations, and further indicate how to go beyond this approximation to derive very accurate electron distribution functions.

On Stationary Solutions of the Reduced Gardner–Ostrovsky Equation
Maria Obregon and Yury Stepanyants

A detailed analysis of stationary solutions of the reduced Gardner–Ostrovsky (GO) equation is presented. The GO equation (u_t + c_0 u_x + α u u_x + α_1 u^2 u_x + β u_xxx)_x = γ u is a popular model for the description of large-amplitude internal oceanic waves affected by Earth's rotation. Its reduced version, in which the small-scale dispersion is neglected (β = 0), is used when very long internal waves are considered. The equation is also applicable to other types of nonlinear waves in various media (plasma, optical media, relaxing media, etc.) when the large-scale dispersion ∼ γ plays a dominant role in comparison with the small-scale dispersion ∼ β. The balance between the nonlinear effect and such dispersion gives rise to the existence of stationary waves, both periodic and non-periodic. It is shown that only smooth periodic waves make physical sense. A systematic analysis of stationary solutions to the GO equation and their categorisation is presented.

Solvability Relations For Some Diffusion Equations With Convection Terms
Vitali Vougalter and Vitaly Volpert

Linear second-order elliptic equations containing the sum of two Laplace operators with convection terms, or a free Laplacian and a Laplacian with drift, are considered in R^d.
The corresponding operator L may be non-Fredholm, so that solvability conditions for the equation Lu = f are unknown. We obtain solvability conditions in H^2(R^d) for the non-self-adjoint problem by relating it to a self-adjoint Schrödinger-type operator, for which solvability relations were derived in our preceding work.

How to Resist Synchronization Attacks
pp. 1–9 | DOI: 10.5890/DNC.2015.03.001
A.N. Pisarchik, M. Jimenez-Rodriguez, and R. Jaimes-Reategui

Conventional synchronization-based chaotic communication is vulnerable to synchronization attacks that are able to recover system parameters. However, it is possible to make these attacks inefficient. A simple way to resist synchronization attacks is to change a parameter of the master system faster than the time needed for the system to synchronize. To verify this idea we construct a hybrid communication system composed of two chaotic Rössler oscillators and the chaotic logistic map. The latter is used for fast variation of the most sensitive system parameter when the Rössler oscillators synchronize. The algorithm is robust to noise in the communication channel.

Lattice Model with Nearest-Neighbor and Next-Nearest-Neighbor Interactions for Gradient Elasticity
Vasily E. Tarasov

Lattice models for the second-order strain-gradient models of elasticity theory are discussed. To combine the advantageous properties of two classes of second-gradient models, we suggest a new lattice model that can be considered as a discrete microstructural basis for gradient continuum models. It is proved that the two classes of second-gradient models (with positive and negative sign in front of the gradient) can have a common lattice model as a microstructural basis. To obtain the second-gradient continuum models, we consider a lattice model with nearest-neighbor and next-nearest-neighbor interactions with two different coupling constants.
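The role of the two coupling constants can be seen already in a scalar 1D toy chain (an illustrative sketch, not the paper's full elasticity model; all constants here are arbitrary): expanding the lattice dispersion relation at small wavenumber shows that the sign of the k^4 (strain-gradient) term is controlled by the ratio of the next-nearest to nearest coupling.

```python
import math

def omega2(k, K1, K2, m=1.0, a=1.0):
    """Dispersion of a 1D chain with nearest (K1) and next-nearest (K2)
    neighbor harmonic couplings:
    w^2 = (2*K1/m)*(1 - cos ka) + (2*K2/m)*(1 - cos 2ka)."""
    return (2 * K1 / m) * (1 - math.cos(k * a)) + (2 * K2 / m) * (1 - math.cos(2 * k * a))

def gradient_coefficient(K1, K2, m=1.0, a=1.0):
    """Coefficient of the k^4 term in the small-k expansion of w^2.
    Using 1 - cos x = x^2/2 - x^4/24 + ..., one gets
    w^2 ~ (K1 + 4*K2) a^2 k^2 / m  -  (K1 + 16*K2) a^4 k^4 / (12 m)."""
    return -(K1 + 16 * K2) * a**4 / (12 * m)

# A negative next-nearest coupling can flip the sign of the strain-gradient term:
print(gradient_coefficient(1.0, 0.0))    # negative (one sign of the gradient model)
print(gradient_coefficient(1.0, -0.1))   # positive (sign reversed)
```

The two print lines show that the same microscopic chain yields either sign of the gradient term, depending on the coupling ratio, which is the qualitative point of the abstract.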
The suggested lattice model gives a unified description of the second-gradient models with positive and negative signs of the strain-gradient terms. The sign in front of the gradient is determined by the ratio of the coupling constants of the nearest-neighbor and next-nearest-neighbor interactions.

Impact of Nonlinearity of Climate Damage Functions on Long-term Macroeconomic Projections under Conditions of Global Warming
Dmitry V. Kovalevsky, Svetlana I. Kuzmina, and Leonid P. Bobylev

A weakly nonlinear climate damage function used by Nordhaus in his DICE model and a strongly nonlinear climate damage function recently proposed by Weitzman are incorporated into the Kellie-Smith–Cox Integrated Assessment model of coupled climate-economic dynamics, the initial version of which contained only linear climate damages. Long-term projections of the dynamics of the coupled climate-economic system are computed with the modified versions of the model. Simulation results for the Weitzman function demonstrate pronounced nonlinear emergent dynamics and suggest extremely dramatic long-term economic scenarios for high values of background economic growth rates.

Integrability of a Coupled Harmonic Oscillator in Extended Complex Phase Space
Ram Mehar Singh

Within the framework of an extended complex phase space characterized by x = x_1 + ip_3, y = x_2 + ip_4, p_x = p_1 + ix_3 and p_y = p_2 + ix_4, we investigate the exact invariants for a coupled harmonic oscillator, along with its PT-symmetric version, in two dimensions. For this purpose the rationalization method is employed, and the invariants obtained in this work play an important role in studying the complex trajectories of the classical system concerned.

The Models with Impact Deformations
M. U. Akhmet and A. Kıvılcım

We consider mechanisms with impact deformations such that colliding parts are deformable and Newton's coefficient of restitution is variable.
It is shown how a system with impact deformations can replace the Kelvin–Voigt viscoelastic model in the analysis of a mechanism with contact motion. The suggested impact deformations are compared with experimental data. By applying deformable contact surfaces and non-constant coefficients of restitution, we suppress the chattering in two different mechanical models. We have investigated the existence and stability of periodic solutions in mechanisms with contacts. To actualize the theoretical results, extended examples with simulations are presented.

Unpredictability of Coin Tossing in the Most Sensitive Regions of the Phase Space

Detailed calculations reveal that the sensitivity of the dependence of the result of a coin toss (head or tail) on the initial state in the phase space is not only very inhomogeneous but also fractal. In the most sensitive regions, the number of turns of the coin still fluctuates as the initial height changes within the atomic scale. Thus, the predictability of the toss of a real coin fails in these regions. The portion of such unpredictable regions with sub-atomic sensitivity becomes dominant in the phase space with the improvement of the elasticity of the surface. This offers a unique example in macroscopic physics where macroscopic determinism fails due to extreme sensitivity. It also helps to understand why accurate long-term weather forecasting is not possible.

Operator-theoretic Identification of Closed Sub-systems of Dynamical Systems
Oliver Pfante and Nihat Ay

A central problem of dynamical systems theory is to identify a reduced description of the dynamical process that is easier to deal with. In this paper we present a systematic method for identifying the closed sub-systems of a given discrete-time dynamical system in the framework of operator theory. It is shown that this problem is closely related to finding invariant sigma algebras of the dynamics.

Poincaré Recurrences in the Circle Map: Fibonacci Stairs
V.S.
Anishchenko, N.I. Semenova, and T.E. Vadivasova

We show that the dependence of the minimal Poincaré return time on the vicinity size is universal for the golden and silver ratios in the circle map and can be referred to as the "Fibonacci stairs". The rigorous result for the Afraimovich-Pesin dimension equality α_c = 1 is confirmed for irrational rotation numbers with the measure of irrationality μ = 2. It is shown that some transcendental numbers are Diophantine and have the measure μ = 2. It is also confirmed that the gauge function 1/t cannot be applied for Liouvillian numbers. All the obtained features hold for both the linear and the nonlinear circle map.

A Semi-analytical Prediction of Periodic Motions in Duffing Oscillator Through Mapping Structures
Albert C.J. Luo and Yu Guo

In this paper, periodic motions in the Duffing oscillator are investigated through the mapping structures of discrete implicit maps. The discrete implicit maps are obtained from the differential equation of the Duffing oscillator. From the mapping structures, bifurcation trees of periodic motions are predicted analytically through nonlinear algebraic equations of implicit maps, and the corresponding stability and bifurcation analysis of periodic motions in the bifurcation trees is carried out. The bifurcation trees of periodic motions are also presented through the harmonic amplitudes of the discrete Fourier series. Finally, from the analytical prediction, numerical simulations of periodic motions are performed to verify the analytical prediction. The harmonic amplitude spectra are also presented, and the corresponding analytical expression of periodic motions can be obtained approximately. The method presented in this paper can be applied to other nonlinear dynamical systems to trace bifurcation trees of periodic motions to chaos.

Topology of Delocalization in the Nonlinear Anderson Model and Anomalous Diffusion on Finite Clusters
A.V. Milovanov and A.
Iomin

This study is concerned with the destruction of Anderson localization by a nonlinearity of the power-law type. Using a nonlinear Schrödinger model with a random potential on a lattice, we show that quadratic nonlinearity plays a dynamically very distinguished role, in that it is the only type of power nonlinearity permitting an abrupt localization-delocalization transition with unlimited spreading already at the delocalization border. For super-quadratic nonlinearity the borderline spreading corresponds to diffusion processes on finite clusters. We have proposed an analytical method to predict and explain such transport processes. Our method uses a topological approximation of the nonlinear Anderson model and, if the exponent of the power nonlinearity is either integer or half-integer, yields the desired value of the transport exponent via a triangulation procedure in a Euclidean mapping space. A kinetic picture of the transport arising from these investigations uses a fractional extension of the diffusion equation to fractional derivatives over time, signifying non-Markovian dynamics with algebraically decaying time correlations.

A Method for Solving Nonlinear Differential Equations: An Application to the λφ^4 Model
Danilo V. Ruy

Recently, there has been great interest in the development of methods for solving nonlinear differential equations directly. Here, an algorithm based on Padé approximants for solving nonlinear partial differential equations without requiring a one-dimensional reduction is presented. This method is applied to the λφ^4 model in 4 dimensions and new solutions are obtained.

Synchronization of Micro-Electro-Mechanical Systems in Finite Time
Hadi Delavari, Ayyob Asadbeigi, and Omid Heydarnia

Finite-time synchronization of chaotic Micro-Electro-Mechanical Systems (MEMS) is considered.
In particular, a Lyapunov-based adaptive controller is developed such that convergence of the synchronization error is guaranteed globally in the presence of unknown perturbations. The system under consideration suffers from bounded parametric uncertainties, additive external disturbances, as well as dead-zone input nonlinearities. The controller is made resistant to hard nonlinearities by a novel scheme that can even be extended to general chaotic systems. We provide a rigorous stability analysis to come up with sufficient conditions that guarantee finite-time error convergence of the perturbed system. Several simulation scenarios are carried out to verify the effectiveness of the obtained theoretical results.

Scaling Modeling of the Emitted Substance Dispersion Transported by Advection Caused by Non-homogeneous Wind Field and by Isotropic and Anisotropic Diffusion in Vicinity of Obstacles
Ranis N. Ibragimov, Andrew Barnes, Peter Spaeth, Radislav Potyrailo, and Majid Nayeri

A simple two-dimensional mathematical approach for source localization of contaminants in the vicinity of individual simple two-dimensional obstacles is proposed. The approach consists of a scaling analysis of advection-diffusion potential flows that can be used in the vicinity of two-dimensional cylindrical obstacles. Three different modeling scenarios are developed in order to simulate the effects of wind. In particular, the model incorporates the cases of anisotropic diffusion and spatially and temporally inhomogeneous airflow speeds.

Coarse-Graining and Master Equation in a Reversible and Conservative System
Felipe Urbina, Sergio Rica, and Enrique Tirapegui

A coarse-graining process is applied to an Ising-like model with conservative and reversible dynamics. It is shown that, under some assumptions, this coarse graining leads to a tractable probability transfer matrix of finite size, which provides a master equation for a coarse-grained probability distribution.
Some examples are discussed.

On Selective Decay States of 2D Magnetohydrodynamic Flows
Mei-Qin Zhan

The selective decay phenomenon has been observed by physicists for many dynamic flows, such as Navier–Stokes flows, barotropic geophysical flows, and magnetohydrodynamic (MHD) flows, in either actual physical experiments or numerical simulations. Rigorous mathematical work has been carried out for both Navier–Stokes and barotropic geophysical flows. In our previous work, we rigorously showed the existence of selective states for 2D MHD flows. In this paper, we present a partial result on the instability of the selective states.

Modeling Fluid Dynamics in the Ocean and Atmosphere
S.V. Prants

This Special Issue collects together works on analytic solutions and numerical simulation of fluid dynamics in the ocean and atmosphere. The contributed papers address a variety of problems in geophysical fluid dynamics, including the formation of coherent structures in random hydrodynamic flows, hyperbolicity in the ocean, mesoscale surface and deep vortices in the ocean, the formation of localized atmospheric vortices and the motion of tropical cyclones, convective instability and nonlinear structures in systems with multi-component convection, instability development in shear stratified flows, and others.

Clustering of a Positive Random Field – What is This?
V.I. Klyatskin

It is shown that, in parametrically excited stochastic dynamic systems described by partial differential equations, spatial structures (clusters) can appear with probability one, i.e., in almost every system realization, due to rare events that happen with probability approaching zero. Problems of this type arise in hydrodynamics, magnetohydrodynamics, plasma physics, astrophysics, and radiophysics.

Equilibrium Distributions for Hydrodynamic Flows

This paper deals with the problem of stochastic structure formation in random hydrodynamic flows.
In particular, starting from an analysis of the steady-state probability density, it considers coherent structures of vortex formation (vortex genesis) in stochastic quasi-geostrophic flows, which are related to rotation and random topography of the bottom.

Hyperbolicity in the Ocean
S.V. Prants, M.V. Budyansky, M.Yu. Uleysky, and J. Zhang

Some manifestations of hyperbolicity in the ocean, an important concept in dynamical systems theory, are discussed. It is shown how to identify hyperbolic points, hyperbolic trajectories, and their stable and unstable manifolds by solving advection equations for passive scalars in a satellite-derived AVISO velocity field and computing finite-time Lyapunov exponents by the singular-value decomposition method. To validate our simulation we use available tracks of oceanic drifters following near-surface currents in some areas of the Northwestern Pacific Ocean. The tracks illustrate how drifters "feel" the presence of hyperbolic points, hyperbolic trajectories, and stable and unstable manifolds, and change their trajectories abruptly when approaching a hyperbolicity region.

Application of the Hydromechanical Model for a Description of Tropical Cyclone Motion
Boris Shmerlin and Mikhail Shmerlin

Within the framework of the hydromechanical model (HMM), proposed by one of the authors, tropical cyclone (TC) motion is defined by a large-scale wind field and the TC intensity. The model contains parameters describing the TC and its interaction with the wind field. Diagnostic, quasi-prognostic, and prognostic calculations of TC movement are carried out. Diagnostic and quasi-prognostic calculations mean that an objective analysis of the large-scale wind field and an objective analysis of the TC intensity are used during the TC's whole lifetime.
In the case of diagnostic calculations, model parameters (constants for each TC) are defined from the best coincidence between the real and calculated tracks of a TC during its whole lifetime; for quasi-prognostic calculations they are defined during a preliminary "pre-prognostic" period. Diagnostic calculations show that the HMM rather correctly describes the peculiarities of TC motion. Quasi-prognostic calculations show that model parameters may be rather correctly defined during a preliminary "pre-prognostic" period. The results of the diagnostic, quasi-prognostic, and prognostic calculations are presented.

Influence of Deep Vortices on the Ocean Surface
Daniele Ciani, Xavier Carton, Igor Bashmachnikov, Bertrand Chapron, and Xavier Perrot

We study the influence of deep vortices on the ocean surface in terms of sea-surface elevation, a quantity related to a fluid stream function. We use several mathematical and numerical models, from the most idealized configurations (point vortices) to realistic ones (finite-volume vortices). We determine analytically the surface influence of vortices at rest (steady signature) and in motion (dynamical signature). Then, using a nonlinear numerical hydrodynamic model for oceanic vortices, we determine the growth with time of the dynamical signature for drifting vortices without a steady signature. We conclude on the possibility of detecting several types of oceanic vortices with surface measurements, using the results from our theory and experiments.

The Formation of Localized Atmospheric Vortices of Different Spatial Scales and Ordered Cloud Structures
Boris Shmerlin, Maxim Kalashnik, and Mikhail Shmerlin

The classical Rayleigh theory of convective instability of a viscous and heat-conductive rotating atmospheric layer is generalized to the case of phase transitions of water vapor, both for precipitation convection (PC) and for non-precipitation convection (NPC).
A principal difference is stated between moist convection and Rayleigh convection, on the one hand, and between PC and NPC, on the other hand. In particular, the instability region in the plane of model parameters turns out to generally consist of two subregions, in one of which localized axisymmetric disturbances with a tropical cyclone (hurricane) structure have the highest growth rate. In the case of PC, ascending motions on the axis of symmetry correspond to such disturbances; in the case of NPC, a spontaneous growth of localized vortices with both ascending and descending motions on the axis is possible. For other parameter values, in the case of PC, spatially periodic cloud structures (convective rolls or closed cloud cells) have the highest growth rate, and in the case of NPC, mesoscale systems of convective rolls or mesoscale cloud clusters with annular cloud structures.

An Approach to the Modeling of Nonlinear Structures in Systems with a Multi-component Convection
Sergey Kozitskiy

We consider 3D multi-component convection in a horizontally infinite layer of an incompressible fluid slowly rotating around a vertical axis. A family of CGLE-type amplitude equations is derived by the multiple-scale method in the neighborhood of Hopf bifurcation points. We numerically simulate a case of three-mode convection at large Rayleigh numbers. It is shown that the convection typically takes the form of hexagonal structures for localized initial conditions. The rotation of the system prevents the spread of the convective structures over the entire area. An approach to the modeling of Saturn's polar hexagon on the basis of amplitude equations is discussed.

Instability Development in Shear Flow with an Inflection-Free Velocity Profile and Thin Pycnocline
S.M. Churilov

Weakly stratified flows of the class under study have a wide 3D spectrum of most unstable waves with very close growth rates and phase velocities, so that their individual critical layers merge into a common one.
An analysis of the evolution equations for those waves has shown that throughout the weakly nonlinear stage of development their amplitudes grow explosively. During the first (three-wave) phase, the most rapidly growing are low-frequency waves, whereas at the next phase, when numerous and diverse higher-order wave interactions come into play, the growth of high-frequency waves is accelerated and they overtake low-frequency waves. The results obtained are illustrated by numerical calculations for some ensembles of waves.

Transient Free Surface Flow Past a Two-dimensional Flat Stern
Osama Ogilat and Yury Stepanyants

A transient free-surface flow past a two-dimensional semi-infinite flat plate in a fluid of finite depth is considered in the linear approximation. It is assumed that the fluid is inviscid and incompressible and the flow is irrotational. The plate is suddenly submerged at a relatively small depth below the free surface into the fluid uniformly moving with a constant velocity. The linearized problem is solved for relatively small Froude numbers F < 1 using the Laplace and Fourier transforms, as well as the Wiener–Hopf technique. It is shown that eventually, at large time, the transient solution approaches the steady-state solution asymptotically. Peculiarities of the solution obtained are discussed and illustrated graphically.

Cavitating Flow between Two Shear Moving Parallel Plates and Its Control
Yan Liu, Sheng Ren, and Jiazhong Zhang

Two parallel plates moving with shear velocities in opposite directions are introduced as an external excitation to induce cavitating flow between them, and a scheme based on the Lattice Boltzmann method is developed to simulate and analyze the evolution of the cavitation, or phase transition. First, the principles and simulation process of the Lattice Boltzmann method and potential models for single-component multiphase flow are introduced, including a special model for the moving boundary conditions.
Then, numerical simulations of the evolution of the phase transition induced by the shear motion of the two parallel plates are carried out in detail, the complicated pattern formation of cavitating flows in such a micro- and multiphase dynamic system is analyzed, and some new results are obtained. In particular, the influence of the main parameters, such as initial density and moving velocity, on the cavitation and flow pattern is studied further. The results show that the shear motion of the two parallel plates can induce cavitation, and that the cavitation and cavitating flow pattern can be controlled effectively and efficiently by the main parameters listed above. Furthermore, the method and analysis can be extended to flowing liquid, and an idea of drag reduction utilizing the cavitation due to phase transition in such a liquid is proposed.

Fractional Dynamics and Systems with Power-Law Memory
M. Edelman and J.A. Tenreiro Machado

The special issue of DNC "Fractional Dynamics and Systems with Power-Law Memory" contains papers related to presentations given at the 5th Conference on Nonlinear Science and Complexity, NSC'14, held on August 4-9, 2014 at Xi'an Jiaotong University, Xi'an, P. R. of China. Within this conference the authors organized two mini-symposia: "Nonlinear Fractional Dynamics and Systems with Memory" and "Fractional Calculus Applications".

Fractional Calculus: Models, Algorithms, Technology
J.A. Tenreiro Machado

In the last three decades Fractional Calculus (FC) became an area of intense research and development. The accompanying poster illustrates the present-day major achievements in the application of FC in physics, engineering, and biology.

Fractional Maps and Fractional Attractors. Part II: Fractional Difference Caputo α-Families of Maps

In this paper we extend the notion of an α-family of maps to discrete systems defined by simple difference equations with the fractional Caputo difference operator.
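As a toy illustration of a map with asymptotically power-law memory (our own sketch: the weights below are plain power laws j^(α-1)/Γ(α), standing in for the falling-factorial weights of the Caputo difference maps, which behave the same asymptotically; all coefficients are illustrative):

```python
import math

def power_law_map(x0, K, alpha, n_steps):
    """Iterate a logistic-type map whose update depends on the whole
    history with power-law weights (n-k)**(alpha-1) / Gamma(alpha).
    At alpha = 1 all weights equal one and the history sum telescopes,
    recovering the ordinary logistic map x -> K*x*(1-x)."""
    xs = [x0]
    for n in range(1, n_steps + 1):
        s = sum((n - k) ** (alpha - 1) * (K * xs[k] * (1 - xs[k]) - xs[k])
                for k in range(n))
        xs.append(x0 + s / math.gamma(alpha))
    return xs

traj = power_law_map(0.3, 2.5, 0.8, 50)  # full-memory trajectory, alpha < 1
```

For α < 1 every past state contributes with a slowly decaying weight, which is the "power-law memory" the abstract refers to; the memory cost grows linearly with the number of steps, unlike a memoryless map.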
The equations considered are equivalent to maps with falling factorial-law memory, which is asymptotically power-law memory. We introduce the fractional difference Universal, Standard, and Logistic α-Families of Maps and propose to use them to study general properties of discrete nonlinear systems with asymptotically power-law memory.

Analysis of Terrorism Data-series by means of Power Law and Pseudo Phase Plane
Antonio M. Lopes and J.A. Tenreiro Machado

Terrorist attacks are catastrophic events often accompanied by a large number of human losses. The statistics of these casualties can be approximated by Power Law (PL) distributions. In this paper we analyze a dataset of terrorist events by means of PL distributions and the Pseudo Phase Plane (PPP) technique. We consider worldwide events grouped into 13 geographical regions. First, for each region, we approximate the empirical data by PL functions and analyze the emerging PL parameters. Second, we model the dataset as time series and interpret the data as the output of a dynamical system. For each region, we compute the correlation coefficient to find the optimal time delay for reconstructing the PPP. Third, we compare the PPP curves using clustering tools in order to unveil relationships among the data.

Adaptive Memory Identification of Fractional Order Systems
Yang Zhao, Yan Li, and Fengyu Zhou

This paper deals with a previously ignored problem: how to find the memory (initialization function) of a fractional order system by using recently sampled input-output data. A novel and practical strategy is proposed to estimate the initialization function, which adapts to all system parameters but the fractional order. To implement this method, a P-type order learning approach is introduced to identify the system order separately and accurately, thanks to the fractional order sensitivity function.
The initialization response is computed through an iterative learning identification strategy that guarantees accuracy and adaptiveness simultaneously. Along with the estimations of the order and the initialization response, a practical piecewise identification criterion for the initialization function is established by using the least squares and instrumental variable methods. The above strategy is available for both Caputo and Riemann-Liouville fractional order systems, where initial values are applied rather than initial conditions. Two illustrative examples are provided to support the conclusions.

The Optimal Control Problem for Linear Systems of Non-integer Order with Lumped and Distributed Parameters
V.A. Kubyshkin and S.S. Postnov

The optimal control problem for linear dynamic systems of fractional order with lumped and distributed parameters is investigated. This problem is reduced to the classical moment problem. The paper validates the conditions that make it possible to formulate and solve the obtained moment problem. Some particular cases of fractional-order systems are discussed. Explicit solutions of the optimal control problems were obtained in the case of systems with lumped parameters. In the case of a system with distributed parameters, an approximate solution of the moment problem is analyzed. In particular, the paper studies the problem of minimizing the norm of the control for an assigned time interval, and the problem of control with minimal time of transition of the object into the desired state under a given limitation on the norm of the control.

Sliding Mode Control of Fractional Lorenz-Stenflo Hyperchaotic System
Jian Yuan and Bao Shi

This paper proposes sliding mode control for the 4-D fractional-order Lorenz-Stenflo hyperchaotic system. Two methods are utilized: one is based on the frequency distributed model of the fractional integral operator; the other is based on the Mittag-Leffler stability theorem and the Caputo operator property.
Both methods involve two steps: first, constructing a fractional order sliding surface; second, designing a single sliding control law for suppression of the nominal plant. Numerical simulations are carried out to verify the efficiency of the theoretical results. Hybrid Projective Synchronization in Mixed Fractional-order Complex Networks with Different Structure Li-xin Yang, Jun Jiang, Xiao-jun Liu In this paper, a fractional-order drive-response complex network model with nodes of different orders is proposed for the first time. To achieve hybrid projective synchronization (HPS) of drive-response complex networks with different orders, a general strategy is proposed and effective controllers for hybrid projective synchronization are designed. Fractional operators are introduced into the controller to transform the problem into a synchronization problem between drive-response complex networks with identical orders. Numerical simulations show that the method is easy to implement and reliable for synchronizing the drive-response fractional-order complex networks. Nonlinear Four-point Impulsive Fractional Differential Equations with p-Laplacian Operator Fatma Tokmak Fen, Ilkay Yaslan Karaca In this paper, we investigate the existence of solutions for a four-point nonlocal boundary value problem of nonlinear impulsive differential equations of fractional order α ∈ (2,3]. By using some well known fixed point theorems, sufficient conditions for the existence of solutions are established. Some illustrative examples are also discussed. About Utility of the Simplified Grünwald-Letnikov Formula Equivalent Horner Form Dariusz W. Brzezinski, Piotr Ostalczyk First we discuss some crucial factors that determine the accuracy of numerical calculations with the Grünwald-Letnikov formula and its equivalent Horner form. Then we introduce simplified variants of both formulas and the concept of the calculation tail.
We analyze how the tail length mitigates the influence of time and memory shortages on accuracy in real-time microprocessor calculations. The conclusions are supported by a comparison of the results obtained on a PC and on a real-time DSP system. The Double Exponential Formula as a Gauss Quadratures Replacement for Numerical Integration We propose to replace the Gauss Quadratures with a numerical integration method known as the Double Exponential (DE) Formula. The numerical quadrature built upon it is at least equivalently accurate and much simpler to customize and apply in situations when tabulated values of the Gauss Quadratures' nodes and weights cannot be applied. The DE Formula was developed for integrals with endpoint singularities. However, we confirm that it can be successfully applied to any integral and interval for which the Gauss Quadratures have usually been selected. To remain compact, the following presentation focuses only on the most difficult integrals, e.g. improper integrals and integrals with endpoint singularities. The main part of the paper consists of a comparison of calculation accuracy between the numerical quadrature based upon the DE Formula and the Gauss-Laguerre, Gauss-Hermite, or Gauss-Chebyshev Quadratures. Introducing Increasing Returns to Scale and Endogenous Technological Progress in the Structural Dynamic Economic Model SDEM-2 Two nonlinear modifications of the Structural Dynamic Economic Model SDEM-2 are developed and studied analytically and numerically. In the first model version described in the present paper, the production function is assumed to be nonlinear, which leads to increasing returns to scale, while the second model version describes endogenous technological progress by treating the technology parameter of the production function as an additional state variable.
Depending on the values of the model parameters and on the initial conditions, both modifications of SDEM demonstrate two different dynamic regimes: either explosive economic growth or the collapse of the economy. The Existence of Optimal Control for Semilinear Distributed Degenerate Systems M. Plekhanova Optimal control problems for a class of semilinear distributed systems unsolved with respect to the time derivative are studied. Two types of initial condition for the system state and various cost functionals are considered in the problems. Abstract results are illustrated by examples of start control problems for the quasistationary system of phase field equations. Spin-transfer Torque and Topological Changes of Magnetic Textures Alberto Verga The electric manipulation of magnetic textures in nanostructures, important for applications in spintronics, can be realized through the spin-transfer torque mechanism: a spin-polarized current can modify the magnetization of skyrmions and magnetic vortices, and eventually change the topology of the magnetization. The spin-transfer torque and the intrinsic space and time scales of the topological changes are essentially quantum mechanical. We model the interaction between itinerant and fixed spins with a simple tight-binding Hamiltonian on a square lattice. The dynamics is described by the Schrödinger equation for the electrons and the Landau-Lifshitz equation for the evolution of the magnetic texture. We investigate the phenomenology of the topological change of a Belavin-Polyakov skyrmion under the action of a spin-polarized current and show that adding an exchange dissipation term regularizes the transition towards a ferromagnetic state. Equilibrium States Under Constraint in a Variational Problem on a Surface Panayotis Vyridis, M.K.
Christophe Ndjatchi, Fernando García Flores, Julio Cesar Flores Urbina We study the equilibrium states for an energy functional with a parametric force field on a region of a surface under a constraint of geometrical character. We use an improved method based on Skrypnik's variational theories [10]. In local coordinates, equilibrium points satisfy an elliptic boundary value problem. This model describes the deformation of elastic media and membranes. Stability of Hopfield Neural Networks with Delay and Piecewise Constant Argument M.U. Akhmet, M. Karacaoren In this paper, by using the concept of differential equations with piecewise constant argument, a model of Hopfield neural networks with constant delay is developed. Sufficient conditions for the existence of an equilibrium as well as its global exponential stability by means of Lyapunov functionals and a linear matrix inequality (LMI) are obtained. An example is given to illustrate our results. Vibrational Resonance in a System with a Signum Nonlinearity K. Abirami, S. Rajasekar, M.A.F. Sanjuan We present our investigation of vibrational resonance in a system with a signum nonlinearity. We construct an exact analytical solution of the system in the presence of an external biharmonic force with two frequencies ω and Ω, Ω ≫ ω, and use it for the computation of the response amplitude Q at the low frequency ω. We analyse the effect of the strength of the signum nonlinearity on vibrational resonance for the cases of a potential with a single well, a double well, and a single well with a double hump. An interesting feature of vibrational resonance in the system is that Q does not decay to zero as g (the amplitude of the high-frequency force) → ∞. We compare the features of the vibrational resonance of these two systems, since the potential of the system with the signum nonlinearity and that of the Duffing oscillator have similar forms.
The strength of the nonlinearity in these two systems is found to give rise to distinct effects on resonance. Controllability of Nonlinear Fractional Delay Integrodifferential Systems R. Joice Nirmala, K. Balachandran In this paper we establish sufficient conditions for the controllability of nonlinear fractional delay integrodifferential systems. The results are obtained by using the solution representation of fractional delay differential equations and an application of Schauder's fixed point theorem. Examples are provided to illustrate the results. Existence of Stationary Solutions for some Systems of Integro-Differential Equations Vitali Vougalter, Vitaly Volpert The article deals with the existence of solutions of a system of nonlocal reaction-diffusion equations which appears in population dynamics. The proof relies on a fixed point technique. Solvability conditions for elliptic operators in unbounded domains which fail to satisfy the Fredholm property are used. Regular and Singular Pulse and Front Solutions and Possible Isochronous Behavior in the Extended-Reduced Ostrovsky Equation: Phase-Plane, Multi-Infinite Series and Variational Formulations U. Tanriver, G. Gambino, S. Roy Choudhury In this paper we employ three recent analytical approaches to investigate several classes of traveling wave solutions of the so-called extended-reduced Ostrovsky Equation (exROE). A recent extension of phase-plane analysis is first employed to show the existence of breaking kink wave solutions and smooth periodic wave (compacton) solutions. Next, smooth traveling waves are derived using a recent technique to derive convergent multi-infinite series solutions for the homoclinic orbits of the traveling-wave equations for the exROE equation. These correspond to pulse solutions of the original PDEs.
We perform many numerical tests in different parameter regimes to pinpoint real saddle equilibrium points of the corresponding traveling-wave equations, as well as to ensure simultaneous convergence and continuity of the multi-infinite series solutions for the homoclinic orbits anchored by these saddle points. Unlike the majority of unaccelerated convergent series, high accuracy is attained with relatively few terms. Finally, variational methods are employed to generate families of both regular and embedded solitary wave solutions for the exROE PDE. The technique for obtaining the embedded solitons incorporates several recent generalizations of the usual variational technique and is thus topical in itself. One unusual feature of the solitary waves derived here is that we are able to obtain them in analytical form (within the assumed ansatz for the trial functions). Thus, a direct error analysis is performed, showing the accuracy of the resulting solitary waves. Given the importance of solitary wave solutions in wave dynamics and information propagation in nonlinear PDEs, as well as the fact that not much is known about solutions of the family of generalized exROE equations considered here, the results obtained are both new and timely. Exogenous Versus Endogenous for Chaotic Business Cycles Marat Akhmet1, Zhanar Akhmetova2, Mehmet Onur Fen1 We propose a novel approach to generate chaotic business cycles in a deterministic setting. Rather than producing chaos endogenously, we consider aggregate economic models with limit cycles and equilibria, subject them to chaotic exogenous shocks, and obtain chaotic cyclical motions. Thus, we emphasize that chaotic cycles, which are inevitable in economics, are not only interior properties of economic models, but can also be considered as a result of the interaction of several economic systems.
This provides a comprehension of chaos (unpredictability, lack of forecasting) and of the control of chaos as a global economic phenomenon from the deterministic point of view. We suggest that the results of our paper are a contribution to the mixed exogenous-endogenous theories of business cycles in the classification by P.A. Samuelson [1]. Moreover, they demonstrate that the irregularity of the extended chaos can be structured, and this distinguishes it from generalized synchronization. The advantage of knowing the structure is that, by applying instruments that have already been developed for deterministic chaos, one can control the chaos, emphasizing a parameter or a type of motion. For the globalization of the cyclic chaos phenomenon we utilize new mechanisms such as entrainment by chaos, attraction of chaotic cycles by equilibria, and bifurcation of chaotic cycles developed in our earlier papers. Non-Abelian Bell Polynomials and Some of Their Applications for Integrable Systems The noncommutative Bell polynomials and their dual Bell polynomials, which are extensively applied in mathematics and physics, are presented. We make use of them to exhibit a method for generating integrable hierarchies of evolution equations. As applications, we obtain the Burgers hierarchy and a convection-diffusion equation which can be applied to fluid mechanics and, in particular, used to represent mass transformations in fluid systems under certain constraints. As a reduced case, the Burgers equation, which has extensive applications in physics, follows. Furthermore, we obtain a set of nonlinear evolution equations with four potential functions which reduces to a new nonlinear equation similar to the Calogero-Degasperis-Fokas equation. Finally, we discretize the convection-diffusion equation and obtain three kinds of finite-difference schemes for it, including the weighted implicit difference scheme and the Lax difference scheme.
Some of their properties, including truncation errors, compatibility, and stability based on the von Neumann condition, are discussed in detail. Review on Finite Difference Method for Reaction-Diffusion Equation Defined on a Circular Domain pp. 133-144 | DOI: 10.5890/DNC.2016.06.003 Walid Abid1, R. Yafia2 †, M.A. Aziz-Alaoui3, H. Bouhafa1, A. Abichou1 In this paper, a finite difference method for a nonlinear reaction-diffusion equation defined on a circular domain is presented. A simple second-order finite difference treatment of the polar coordinate singularity of the Laplacian operator, centered difference approximations, and treatments for Neumann boundary problems are used to discretize this equation. By using this method, numerical solutions can be computed. Finally, we give two applications to reaction-diffusion predator-prey models with modified Leslie-Gower and Holling type II functional responses. Asymptotic Behavior of Solutions of Singular Integro-differential Equations M.K. Dauylbaev, A.E. Mirzakulova We study the asymptotic behavior of the two-point integral boundary value problem for third order integro-differential equations with a small parameter at the two highest derivatives. Asymptotic estimates of the solution of the integral boundary value problem are obtained. The results show that the solution of the integral boundary value problem has initial jumps of different orders at both endpoints of the given segment. On the Solvability of Nonlocal Boundary Value Problem for the Systems of Impulsive Hyperbolic Equations with Mixed Derivatives A.T. Assanova A nonlocal boundary value problem for a system of impulsive hyperbolic equations with impulses at fixed times is considered. The questions of existence, uniqueness, and construction of algorithms for finding the solutions to this problem are studied.
By introducing additional parameters as the values of the solution on specific lines, the considered problem is reduced to a problem consisting of a Goursat problem for a system of hyperbolic equations and a Cauchy problem for ordinary differential equations. Algorithms for finding approximate solutions of the latter problem are obtained, and their convergence to the solution of the original problem is proved. Conditions for the existence of a unique solution to the nonlocal boundary value problem with impulse effects are given in terms of the initial data. Nonlinear Dissipation for Some Systems of Critical NLS Equations in Two Dimensions We prove global well-posedness in H1(R2,CN) for certain systems of critical Nonlinear Schrödinger equations coupled linearly or nonlinearly with nonlinear supercritical dissipation terms, generalizing the previous result of [1] obtained for a single equation of this kind. Dynamical Systems Generated by a Gonosomal Evolution Operator Utkir A. Rozikov, Richard Varro In this paper we consider discrete-time dynamical systems generated by gonosomal evolution operators of sex-linked inheritance. Mainly we study dynamical systems of hemophilia, which biologically is a group of hereditary genetic disorders that impair the body's ability to control blood clotting or coagulation, which is used to stop bleeding when a blood vessel is broken. We give an algebraic model of the biological system corresponding to hemophilia. The evolution of such a system is studied by a nonlinear (quadratic) gonosomal operator. In a general setting, this operator is considered as a mapping from Rn, n ≥ 2, to itself. In particular, for a gonosomal operator at n = 4 we explicitly give all (two) fixed points. Then limit points of the trajectories of the corresponding dynamical system are studied. Moreover, we consider a normalized version of the gonosomal operator.
In the case n = 4, for the normalized gonosomal operator we show the uniqueness of the fixed point and study limit points of the dynamical system. Exact Analytic Solutions of Pochammer-Chree and Boussinesq Equations by Invariant Painlevé Analysis and Generalized Hirota Techniques Matthew Russo, S. Roy Choudhury Combinations of truncated Painlevé expansions, invariant Painlevé analysis, and generalized Hirota series are used to solve ('partially reduce to quadrature') the integrable Boussinesq and the cubic and quintic generalized Pochammer-Chree (GPC) equation families. Although the multisolitons of the Boussinesq equation are very well known, the solutions obtained here for all three NLPDEs are novel and non-trivial. All of the solutions obtained via invariant Painlevé analysis are complicated rational functions, with arguments which are themselves trigonometric functions of various distinct traveling wave variables. This is reminiscent of doubly-periodic elliptic function solutions when nonlinear ODE systems are reduced to quadratures. The solutions obtained using recently generalized Hirota-type expansions are closer in functional form to conventional hyperbolic secant solutions, although with non-trivial traveling-wave arguments which are distinct for the two GPC equations. Modeling of Complex Rheological Fluids with Fractal Structures Rakiz M. Sattarov1†, Ilham R. Sattarzada1, Sayavur I. Bakhtiyarov2, and Ranis N. Ibragimov3 An unsteady flow of a rheologically complex fluid with fractal structure in various pipe geometries was studied when the pressure is a harmonic function of time at a given frequency at the initial cross section of the pipe. An integral-type rheological equation with a fractional derivative is applied for the first time to model thixotropic oils with a high content of wax, resins, and asphaltene.
The obtained results show that the relaxation time and the fractal parameters can significantly influence the process of pressure damping along the pipe. It is shown that, depending on these parameters, the attenuation process may be enhanced or weakened compared to pressure damping in a viscous fluid. The results of simulations are compared to those for resin-asphalt-paraffin oils of two types. Analytical Prediction of Homoclinic Bifurcations Following a Supercritical Hopf Bifurcation Tanushree Roy1, Roy Choudhury1†, and Ugur Tanriver2 An analytical approach to homoclinic bifurcations at a saddle fixed point is developed in this paper based on high-order, high-accuracy approximations of the stable periodic orbit created at a supercritical Hopf bifurcation of a neighboring fixed point. This orbit then expands as the Hopf bifurcation parameter(s) is (are) varied beyond the bifurcation value, with the analytical criterion proposed for homoclinic bifurcation being the merging of the periodic orbit with the neighboring saddle. Thus, our approach is applicable in any situation where the homoclinic bifurcation at any saddle fixed point of a dynamical system is associated with the birth or death of a periodic orbit. We apply our criterion to two systems here. Using approximations of the stable, post-Hopf periodic orbits to first, second, and third orders in a multiple-scales perturbation expansion, we find that, for both systems, our proposed analytical criterion indeed reproduces the numerically obtained parameter values at the onset of homoclinic bifurcation very closely. Analysis of Stochastic Diffusive Predator-Prey Model with Hyperbolic Mortality Rate M. Suvinthra1†, K. Balachandran1, M. Sambath2 In this work, we establish a Freidlin-Wentzell type large deviation principle for a diffusive predator-prey model with hyperbolic mortality rate perturbed by multiplicative Gaussian noise.
We implement the variational representation developed by Budhiraja and Dupuis to establish the large deviation principle for the solution processes. Nonlinear Parametrizations of Outgoing Longwave Radiation in Zero-Dimensional Energy Balance Models Dmitry V. Kovalevsky† One-layer and two-layer zero-dimensional (0D) energy balance models (EBMs) of the global climate system with different approximations for the parametrization of outgoing longwave radiation (OLR) are considered. Three alternative approximations for parametrizing the OLR are explored in detail: (i) the (conventional) linear approximation, (ii) the quadratic approximation, and (iii) the 'exact' (power 4) model. In the case of the one-layer 0D EBM, exact analytical solutions are derived in closed form for all three alternative approximations for parametrizing the OLR. In the numerical examples provided, the deviations of the linear approximation from the 'exact' model are visible, while the quadratic approximation is virtually indistinguishable from the 'exact' model. Exact Analytical Solutions of Selected Behaviourist Economic Growth Models with Exogenous Climate Damages Capital dynamics are calculated for (i) the AK model with output reduced by climate damages, (ii) the AK model with climate-dependent depreciation rate, and (iii) the Solow–Swan model with output given either by the Cobb–Douglas production function or by the constant elasticity of substitution (CES) production function and reduced by climate damages. The climate projections used as model inputs are exogenous. Simple analytical parametrisations for temperature dynamics are assumed (either linear or exponential temperature growth). The quadratic and the Nordhaus climate damage functions are considered. Exact analytical solutions for capital dynamics are derived in closed form (with the exception of the Solow–Swan model with CES production function). Numerical examples are provided for illustrative purposes.
Since unabated climate change with unlimited temperature growth is assumed, the long-run model dynamics are dramatic: the capital converges to zero at infinite time, and the economy collapses. On Symmetric Strictly non-Volterra Quadratic Stochastic Operators U.U. Jamilov For a symmetric strictly non-Volterra quadratic stochastic operator on the three-dimensional simplex it is proved that this operator has a unique fixed point. A sufficient condition for attractiveness of the unique fixed point is found. For such operators we describe the set of ω-limit points. We prove that some classes of such operators have infinitely many periodic points. It is also shown that there are trajectories which are asymptotically cyclic with period two. On the Existence of Stationary Solutions for Some Systems of Integro-Differential Equations with Anomalous Diffusion Vitali Vougalter1†, Vitaly Volpert2 The article is devoted to the proof of the existence of solutions of a system of integro-differential equations appearing in the case of anomalous diffusion, when the negative Laplacian is raised to some fractional power. The argument relies on a fixed point technique. Solvability conditions for elliptic operators without the Fredholm property in unbounded domains, along with the Sobolev inequality for a fractional Laplace operator, are used. Existence of Semi Linear Impulsive Neutral Evolution Inclusions With Infinite Delay in Frechet Spaces Dimplekumar N. Chalishajar†, K. Karthikeyan, A. Anguraj In this paper, sufficient conditions are given to investigate the existence of mild solutions on a semi-infinite interval for first order semilinear impulsive neutral functional differential evolution inclusions with infinite delay, using a recently developed nonlinear alternative for contractive multivalued maps in Frechet spaces due to Frigon combined with semigroup theory. The existence result has been proved without the assumption of compactness of the semigroup.
We study a new phase space for impulsive systems with infinite delay. Perspectives on Multi-Level Dynamics Fatihcan M. Atay1†, Sven Banisch2, Philippe Blanchard3, Bruno Cessac4, Eckehard Olbrich5, Dima Volchenkov6 As Physics did in previous centuries, there is currently a common dream of extracting generic laws of nature in economics, sociology, and neuroscience by reducing the description of phenomena to a minimal set of variables and parameters, linked together by causal equations of evolution whose structure may reveal hidden principles. This requires a huge reduction of dimensionality (number of degrees of freedom) and a change in the level of description. Beyond the mere necessity of developing accurate techniques affording this reduction, there is the question of the correspondence between the initial system and the reduced one. In this paper, we offer a perspective towards a common framework for discussing and understanding multi-level systems exhibiting structures at various spatial and temporal levels. We propose a common foundation and illustrate it with examples from different fields. We also point out the difficulties in constructing such a general setting and its limitations. Further Results on the Stability of Neural Network for Solving Variational Inequalities Mi Zhou1†, Xiaolan Liu2,3‡ This paper analyzes and proves the global Lyapunov stability of the neural network proposed by Yashtini and Malek when the mapping is continuously differentiable and the Jacobian matrix of the mapping is positive semi-definite. Furthermore, the neural network is shown to be exponentially stable under stronger conditions. In particular, the stability results can be applied to the stability analysis of variational inequalities with linear constraints and bounded constraints. Some examples show that the proposed neural network can be used to solve various nonlinear optimization problems. The new results improve the existing ones in the literature.
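The projection-type neural networks discussed in the preceding abstract can be illustrated with a minimal numerical sketch. A common form of such dynamics is dx/dt = P_Ω(x − αF(x)) − x, where P_Ω is the projection onto the constraint set (the specific network of Yashtini and Malek may differ in detail); the monotone mapping F, the box constraint Ω, and the step sizes below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box [lo, hi] (componentwise clipping)."""
    return np.clip(x, lo, hi)

def solve_vi(F, x0, lo, hi, alpha=0.5, dt=0.05, steps=4000):
    """Euler integration of the projection neural network
    dx/dt = P_Omega(x - alpha*F(x)) - x, whose equilibria solve the
    variational inequality: find x* in Omega with
    F(x*)^T (y - x*) >= 0 for all y in Omega."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (project_box(x - alpha * F(x), lo, hi) - x)
    return x

# Example: F(x) = Qx + b with Q positive definite (a monotone VI),
# equivalent to minimizing 0.5 x^T Q x + b^T x over the box [0, 2]^2.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-2.0, 1.0])
F = lambda x: Q @ x + b
x_star = solve_vi(F, x0=[1.0, 1.0], lo=0.0, hi=2.0)  # converges to (1, 0)
```

Here the unconstrained minimizer has a negative second component, so the constraint is active at the solution (1, 0); the KKT condition holds since F((1,0)) = (0, 1.5) points into the box at the active bound.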
How the Minimal Poincaré Return Time Depends on the Size of a Return Region in a Linear Circle Map N. Semenova, E. Rybalova, V. Anishchenko† It is found that the step-function dependence τ_inf(ε) of the minimal Poincaré return time on the size of the return region for the linear circle map with an arbitrary rotation number can be approximated analytically. All analytical results are confirmed by numerical simulation. Reversible Mixed Dynamics: A Concept and Examples S.V. Gonchenko† We review some recent results related to a new type of dynamical chaos, the so-called "mixed dynamics", which can be considered as an intermediate link between "strange attractor" and "conservative chaos". We propose a mathematical concept of mixed dynamics for two-dimensional reversible maps and consider several examples. We Speak Up the Time, and Time Bespeaks Us Dimitri Volchenkov†, Anna Cabigiosu, Massimo Warglien We have presented the first study integrating the analysis of temporal patterns of interaction, interaction preferences, and the local vs. global structure of communication in networks of agents. We analyzed face-to-face interactions in two organizations over a period of three weeks. Data on interactions among ca. 140 individuals have been collected through a wearable-sensor study carried out in two start-up organizations in the North-East of Italy. Our results suggest that simple principles reflecting interaction propensities, time budget, and institutional constraints underlie the distribution of interaction events. Both data on interaction duration and those on intervals between interactions respond to a common logic, based on the propensities of individuals to interact with each other, the cost of interrupting other activities to interact, and the institutional constraints on behavior. These factors affect the decision to interact with someone else.
Our data suggest that there are three regimes of interaction arising from the organizational context of our observations: casual, spontaneous (or deliberate), and institutional interaction. Such regimes can be naturally expressed by different parameterizations of our models. On Quasi-periodic Perturbations of Duffing Equation A.D. Morozov†, T.N. Dragunov Quasi-periodic two-frequency perturbations are studied in a system which is close to a nonlinear two-dimensional Hamiltonian one. The example of the Duffing equation with a saddle and two separatrix loops is considered. Several problems are studied: dynamical behavior in a neighborhood of a resonance level of the unperturbed system, conditions for the existence of resonance quasi-periodic solutions (two-dimensional resonance tori), and global behavior of solutions inside domains separated from the unperturbed separatrix. In a neighborhood of the unperturbed separatrix, the problem of the relative position of the stable and unstable separatrix manifolds is studied, and conditions for the existence of doubly asymptotic solutions are found. A Study of the Dynamics of the Family f_{λ,μ} = λ sin z + μ/(z − kπ), where λ, μ ∈ R\{0} and k ∈ Z\{0} Patricia Domínguez†, Josué Vázquez, Marco A. Montes de Oca In this article we investigate the dynamics of the meromorphic family f_{λ,μ}(z) = λ sin z + μ/(z − kπ), λ, μ ∈ R\{0} and k ∈ Z\{0}. We show that for some parameters λ, μ the Stable set contains an attracting component which is multiply connected and completely invariant. We give a definition of a cut of the space of parameters, with μ and kπ fixed, and show examples of a cut and the Stable and Chaotic sets related to the cut, for some given λ. New Results on Exponential Stability of Fractional Order Nonlinear Dynamic Systems Tianzeng Li1,2†, Yu Wang1,3, Yong Yang In this letter, the stability of fractional order nonlinear systems is studied.
An extension of the Lyapunov direct method for fractional order systems is proposed by using the properties of the Mittag-Leffler function and the Laplace transform. New sufficient conditions which ensure local exponential stability of fractional order nonlinear systems are proposed first. We then apply these conditions to Riemann-Liouville fractional order systems by using the fractional comparison principle. Finally, three examples are provided to illustrate the validity of the proposed approach. Robust Exponential Stability of Impulsive Stochastic Neural Networks with Markovian Switching and Mixed Time-varying Delays Haoru Li1, Yang Fang2, Kelin Li2† This paper is concerned with the robust exponential stability problem for a class of impulsive stochastic neural networks with Markovian switching, mixed time-varying delays, and parametric uncertainties. By constructing a novel Lyapunov-Krasovskii functional and using the linear matrix inequality (LMI) technique, the Jensen integral inequality, and the free-weighting matrix method, several novel sufficient conditions in the form of LMIs are derived to ensure the robust exponential stability in mean square of the trivial solution of the considered system. The results obtained in this paper improve many known results, since the parametric uncertainties have been taken into account, and the derivatives of the discrete and distributed time-varying delays need not be 0 or smaller than 1. Finally, three illustrative examples are given to show the effectiveness of the proposed method. Slowing Down of So-called Chaotic States: "Freezing" the Initial State M. Belger1, S. De Nigris†2, X. Leoncini‡1,3 The so-called chaotic states that emerge in the XY model on regular critical-range networks are analyzed. Typical time scales are extracted from the time-series analysis of the global magnetization.
The broad spectrum confirms the chaotic nature of the observable; nevertheless, different peaks in the spectrum allow typical characteristic time scales to emerge. We find that these time scales τ(N) display a critical slowing down, i.e., they diverge as N → ∞. The scaling law is analyzed for different energy densities and the behavior τ(N) ∼ √N is exhibited. This behavior is furthermore explained analytically using the formalism of thermodynamic equations of motion and analyzing the eigenvalues of the adjacency matrix. Relaxation Oscillations and Chaos in a Duffing Type Equation: A Case Study L. Lerman1†, A. Kazakov2,1, N. Kulagin3 Results of numerical simulations of a Duffing type Hamiltonian system with a slowly periodically varying parameter are presented. Using the theory of adiabatic invariants, reversibility of the system and the theory of symplectic maps, along with thorough numerical experiments, we present many details of the orbit behavior for the system. In particular, we found many symmetric mixed-mode periodic orbits, both hyperbolic and elliptic, regions with a perpetual adiabatic invariant, and chaotic regions. For the latter regions we present details of the chaotic behavior: calculation of homoclinic tangles and Lyapunov exponents. Potential Symmetries, Lie Transformation Groups and Exact Solutions of KdV-Burgers Equation XiaoMin Wang, Sudao Bilige†, YueXing Bai In this paper, the classical symmetries and the potential symmetries of the KdV-Burgers equation are calculated based on the differential characteristic set algorithm, and the corresponding Lie transformation groups and invariant solutions of the potential symmetry are derived. Moreover, a series of new exact solutions for the KdV-Burgers equation are obtained by acting with the Lie transformation groups on the invariant solutions. It is important that these solutions cannot be obtained from the classical symmetries of the KdV-Burgers equation. Conservation Laws in Group Analysis of Gas Filtration Model S.V.
Khabirov† One-dimensional gas filtration is described by a nonlinear parabolic equation in the form of a conservation law. The potential of the conservation law satisfies an equation which is itself in conservation-law form. Introducing a second potential gives a system of equations which admits a 6-dimensional Lie algebra. This extends the group properties of the initial model. With the help of an optimal system of subalgebras, all invariant and partially invariant solutions are classified; these are reduced to invariant solutions of the initial model. In some cases it is possible to find an integral of the invariant submodel. Existence of Semi Linear Impulsive Neutral Evolution Inclusions with Infinite Delay in Frechet Spaces Dimplekumar N. Chalishajar†, K. Karthikeyan‡, A. Anguraj§ Wave Collision for the gKdV-4 equation. Asymptotic Approach Georgy Omel'yanov† We consider an approach which allows one to describe uniformly in time the process of collision of solitary waves. Next we apply it to the KdV-type equation with nonlinearity u^4 for three interacting waves, assuming that all wave trajectories intersect at the same point. The constructed asymptotic solution satisfies the equation in a weak sense, and it can be treated as a classical asymptotics in the sense that it satisfies some conservation and balance laws associated with the gKdV-4 equation. Results of numerical simulation confirm the theoretical conclusion about the elastic type of the wave interaction. P-Moment Exponential Stability of Caputo Fractional Differential Equations with Random Impulses Ravi Agarwal1†, Snezhana Hristova2, Donal O'Regan3 Fractional differential equations with random impulses arise in modeling real world phenomena where the state changes instantaneously at uncertain moments. Using queuing theory and the usual distribution for waiting time, we study the case of exponentially distributed random variables between two consecutive moments of impulses.
The p-moment exponential stability of solutions is defined and studied when the waiting time between two consecutive impulses is exponentially distributed. The argument is based on Lyapunov functions. We discuss both continuous and differentiable Lyapunov functions, and Caputo fractional Dini derivatives as well as Caputo derivatives are applied. Some examples are given to illustrate our results. An Analytic Technique for the Solutions of Nonlinear Oscillators with Damping Using the Abel Equation A Ghose-Choudhury1†, Partha Guha2† Using the Chiellini condition for integrability we derive explicit solutions for a generalized system of Riccati equations $\ddot{x}+\alpha x^{2n+1}\dot{x}+x^{4n+3}=0$ by reduction to the first-order Abel equation, assuming the parameter $\alpha \ge w\sqrt{(2n+1)}$. The technique, which was proposed by Harko et al., involves the use of an auxiliary system of first-order differential equations sharing a common solution with the Abel equation. In the process, analytical proofs of some of the conjectures made earlier on the basis of numerical investigations in [1] are provided. On the Existence of Stationary Solutions for Some Systems of Non-Fredholm Integro-Differential Equations with Superdiffusion Vitali Vougalter1†, Vitaly Volpert2† We establish the existence of stationary solutions for certain systems of reaction-diffusion equations with superdiffusion. The corresponding elliptic problem involves operators with or without the Fredholm property. The fixed point technique in appropriate H^2 spaces of vector functions is employed. Defensive Driving Strategy for Autonomous Ground Vehicle in Mixed Traffic Xiang Li1, Jian-Qiao Sun2† One of the challenges of autonomous ground vehicles (AGVs) is to interact with human-driven vehicles in the traffic. This paper develops defensive driving strategies for AGVs to avoid problematic vehicles in the mixed traffic. A multi-objective optimization algorithm for local trajectory planning is proposed.
Dynamic predictive control is used to derive optimal trajectories in a rolling horizon. The intelligent driver model and lane-changing rules are employed to predict the movement of the vehicles. Multiple performance objectives are optimized simultaneously, including traffic safety, transportation efficiency, driving comfort and path consistency. The multi-objective optimization problem is solved with the cell mapping method. Different and relatively simple scenarios are created to test the effectiveness of the defensive driving strategies. Extensive experimental simulations show that the proposed defensive driving strategy is promising and may provide a new tool for designing the intelligent navigation system that helps autonomous vehicles to drive safely in mixed traffic. An Impact Oscillator with a Grazing Cycle M. U. Akhmet†, A. Kıvılcım An oscillator which impacts against a rigid barrier is considered. A cycle with zero impact velocity is discussed. The main result of this article concerns the stability of the grazing cycle. Significant attention is paid to a model with a variable coefficient of restitution depending on velocity. The mechanical reasons for this are provided, and new theoretical advantages have been discovered for the investigation of dynamics near a grazing cycle. The W-map, which reduces the system with variable moments of impacts to one with fixed moments and simplifies the analysis, is defined. A new type of linearization system with two components is applied to investigate the stability of the grazing cycle, whose existence is easily examined. A new approach to suppress a singularity in the linearization, caused by the tangency, has been developed. Simulations are provided to visualize the stability of the grazing cycle. Partially Integrable ℘T-Symmetric Hierarchies of the KdV and Burgers' Equations in (1+1) and (2+1) Keri Pecora, S.
Roy Choudhury† In this paper, we generalize the work of Bender and co-workers to derive new partially-integrable hierarchies of various ℘T-symmetric, nonlinear partial differential equations. The possible integrable members are identified by employing the Painlevé Test, a necessary but not sufficient integrability condition, and are indexed by the integer n, corresponding to the negative of the order of the dominant pole in the singular part of the Painlevé expansion for the solution. For the ℘T-symmetric Korteweg-de Vries (KdV) equation, as with some other hierarchies, the first or n = 1 equation fails the test, the n = 2 member corresponds to the regular KdV equation, while the remainder form an entirely new, possibly integrable, hierarchy. Bäcklund Transformations and analytic solutions of the n = 3 and n = 4 members are derived. The solutions, or solitary waves, prove to be algebraic in form. The ℘T-symmetric Burgers' equation fails the Painlevé Test for its n = 2 case, but special solutions are nonetheless obtained. Also, a ℘T-symmetric hierarchy of the (2+1) Burgers' equation is analyzed. The Painlevé Test and invariant Painlevé analysis in (2+1) dimensions are utilized, and BTs and special solutions are found for those cases that pass the Painlevé Test. On Quadratic Stochastic Operators Corresponding to Cyclic Groups U.A. Rozikov, U.U. Jamilov† We introduce a new class of quadratic stochastic operators corresponding to cyclic groups. We study the set of fixed points and prove that almost all (w.r.t. Lebesgue measure) trajectories of such operators converge to the center of the simplex. For cyclic groups of order 2n we show that for any subgroup the corresponding quadratic stochastic operator is a regular operator. Group Analysis of the Generalized Hunter-Saxton System Yuri Bozhkov1,†, Valter Aparecido Silva Junior2,3 We find the Lie point symmetries of the generalized two-component Hunter-Saxton system.
Then we show that it is nonlinearly self-adjoint and establish the corresponding conservation laws using a recent theorem of Nail Ibragimov which enables one to determine conservation laws for problems without variational structure. Finally we obtain some invariant solutions. Existence of Mild Solutions of Abstract Fractional Differential Equations with Fractional Non-Instantaneous Impulsive Conditions A. Anguraj1†, S. Kanjanadevi1†, Juan J. Trujillo2† We study the existence and uniqueness of a mild solution of fractional impulsive differential equations with nonlocal conditions. Here we consider the fractional derivative in the non-instantaneous impulsive conditions. We use fixed point techniques and resolvent operators to prove our existence results. Studying the Dynamics of Neuronal Membrane Using a Numerical Model Marina González†, Marco Aurélio Santos Granero, Mariana Pelissari Monteiro Aguiar Baroni The Hodgkin and Huxley model describes the electrophysiology of the membrane of the giant squid's axon. This model was developed from measurements of passive and active electrical behavior of the nerve cell. The four coupled nonlinear ordinary differential equations which describe the model are based on the behavior of sodium and potassium channels. The aim of this work is to study the dynamic behavior of a neuronal physiological system described by the Hodgkin-Huxley (H-H) model through the analysis and interpretation of numerical simulations. Genealogical Tree of Russian Schools on Nonlinear Dynamics S. V. Prants†, M. Yu. Uleysky One of the most prominent features of research in Russia and the former Soviet Union is the so-called scientific schools. A scientific school is a collaboration of researchers with a common scientific background working, as a rule, together in a specific city or even at an institution. The genealogical tree of scientific schools on nonlinear dynamics in Russia and the former Soviet Union is grown.
We use this terminology in a broad sense, including the theory of dynamical systems and chaos and its applications in nonlinear physics. In most cases we connect two persons if one was the advisor of the Doctoral thesis of the other (its analogue in Russia being the Candidate of Science thesis). If a person had no official advisor, or we do not know exactly who the advisor was, we fix the person who was known to be an informal teacher and to have influenced him/her very much. Asymptotic Stability of Nonzero Solutions of Discontinuous Systems of Impulsive Differential Equations K. G. Dishlieva† Discontinuous systems of nonlinear non-autonomous differential equations with impulsive effects are the main object of investigation in the paper. These systems consist of two basic parts: (i) a set of nonlinear non-autonomous systems of ordinary differential equations that define the continuous parts of the solutions; the right-hand sides of the systems are elements of the set of functions f = {f1, f2, ...}; (ii) the conditions which consistently determine "the switching moments". The structural change (discontinuity) of the right-hand side and impulsive perturbations take place at the moments of switching. At these moments, the trajectory meets the "switching sets", which are parts of hyperplanes situated in the phase space of the system considered. Sufficient conditions are found so that the nonzero solutions of the studied discontinuous system with impulsive effects are asymptotically stable. A Dynamic Systems Approach to the Representation of Policy Implementation Processes in a Multi-Actor World Dmitry V. Kovalevsky1,2†, Richard Hewitt3,4, Cheryl de Boer5, Klaus Hasselmann6,7 Implementation of any policy involves negotiation between multiple actors, and is therefore susceptible to a dynamic systems modelling approach.
In this paper, we make an attempt to develop a family of dynamic systems models of policy implementation in such a multi-actor world by translating an existing, semi-quantitative application of two theoretical approaches, Contextual Interaction Theory (CIT) and Participatory Action Research (PAR), into a quantitative dynamic framework. We explore various alternative actor-based dynamic systems for this proposed Participatory Contextual Interaction Theory (PCIT), including linear, piecewise linear, and strongly nonlinear models. Analytical results are supplemented with results of numerical simulations. One of the goals of the modelling exercise is to advance the actor dynamics module in the APoLUS land use cellular automata model; illustrative examples of the incorporation of the actor dynamics models developed in the present paper into the computation of APoLUS transition potentials are provided. Simultaneous Stabilization using Set Stabilizing Gain Generation Venkatesh S. Deshmukh† A set stabilizing gain generator (SSGG) framework is developed along with dynamic controllers and observers to regulate and estimate the state trajectories of an arbitrary set of three or more open-loop unstable linear time invariant dynamic system models of variable dimensions in state space form. The individual systems in the set are assumed to be controllable and observable. An algorithm is suggested for Single Input (SI) simultaneous stabilization and Single Output (SO) simultaneous observation. This is then extended to a set of Multiple Input (MI) and Multiple Output (MO) models. A dynamic observer based dynamic controller is also formulated for SISO and MIMO systems. The algorithm is then extended to accommodate simultaneous stabilization and observation of system models of variable dimensions.
It is finally shown that the dynamic controllers and observers guarantee a certain stability radius against bounded time-varying uncertainty for the entire set, which their static counterparts fail to assign. Suitable SI, SO, SISO and MI, MO, MIMO examples are provided. On Large Deviations of Stochastic Integrodifferential Equations with Brownian Motion A. Haseena1, M. Suvinthra2, N. Annapoorani2† In this paper, a Freidlin-Wentzell type large deviation principle is established for the stochastic integrodifferential equation driven by finite dimensional Brownian motion. Both the additive and multiplicative noise cases are considered here. The large deviation principle for the additive noise case is established via the contraction principle, whilst the weak convergence approach is employed to obtain the same for the multiplicative noise case. Abundant Solutions of Distinct Physical Structures for Three Shallow Water Waves Models Abdul-Majid Wazwaz In this work, we investigate three completely integrable model equations used to describe shallow water waves. A variety of techniques will be used to determine abundant solutions, of distinct physical structures, for each model. The three models give soliton solutions, periodic solutions, rational hyperbolic functions and rational solutions as well. Estimates from Below for Blow Up Time for Two Species Chemotaxis System V. Bhuvaneswari†, K. Balachandran This paper deals with the two species chemotaxis system in a bounded domain of R^3 under different boundary conditions. Explicit lower bounds for the blow up time are obtained by using the differential inequality technique under suitable assumptions on the data. Particle Dynamics in a Viscous Fluid Under the Action of Acoustic Radiation Force Hassan K. Hassan1, Lev A. Ostrovsky2, Yury A. Stepanyants1 Complex dynamics of small particles under the action of acoustic radiation force is considered. This work extends the previous studies of Ostrovsky & Sarvazyan (2009) and Ostrovsky (2015) in two aspects.
Firstly, here the particle material can have an arbitrary compressibility and sound speed. Secondly, in addition to the viscosity effect described by the Stokes drag force, the Boussinesq–Basset drag force and the inertial force, which includes the added-mass effect, are accounted for. Although, as was assumed in the earlier publications, the latter effects are usually small, their influence can nevertheless be noticeable in specific cases considered in this paper. The control of particle motion by switching of acoustic modes is also studied for particles of different properties. Quantitative estimates are given for particles made of different materials. Bifurcation Trees of Period-1 Motions to Chaos of a Nonlinear Cable Galloping Bo Yu1 and Albert C. J. Luo2† In this paper, period-m motions on the bifurcation trees of period-1 motions to chaos for nonlinear cable galloping are studied analytically. The analytical solutions of the period-m motions in the form of the finite Fourier series are obtained through the generalized harmonic balance method, and the corresponding stability and bifurcation analyses of the period-m motions in the galloping system of the nonlinear cable are carried out. The bifurcation trees of period-m motions to chaos are presented through harmonic frequency-amplitudes. Numerical illustrations of trajectories and amplitude spectra are given for periodic motions in nonlinear cables. From such analytical solutions of periodic motions to chaos, the galloping phenomenon in flow-induced vibration can be further understood. Bifurcation and Stability Analysis of a Diffusive Predator-Prey Model with Ratio-Dependent Type III Functional Response M. Sivakumar†, K. Balachandran This paper is concerned with a diffusive Leslie-Gower predator-prey system with ratio-dependent Holling type III functional response subject to Neumann boundary conditions.
By linearizing the system at the positive constant steady-state solution and analyzing the associated characteristic equation in detail, the local stability, the existence of a Hopf bifurcation at the coexistence equilibrium, and the stability of bifurcating periodic solutions of the system in the absence of diffusion are studied. Furthermore, Turing instability and Hopf bifurcation analysis for the system with diffusion are studied. Finally, numerical simulations are provided in order to verify our theoretical results. Controllability of Nonlinear Stochastic Fractional Systems with Lévy Noise R. Mabel Lizzy∗, K. Balachandran† and M. Suvinthra‡ In this paper we study the controllability of linear and nonlinear stochastic fractional systems driven by Lévy noise. Here we use the Lévy-Itô decomposition of an arbitrary Lévy process into Brownian and Poisson parts. The necessary and sufficient conditions for controllability of the linear system are obtained. Also, the nonlinear system is shown to be controllable under the assumption that the corresponding linear system is controllable, using the Banach contraction principle. Modelling, Analysis and Control of Nonlinear Discrete and Continuous Mechanical Structures Dedicated for Mechatronic Applications pp. 421–423 | DOI: 10.5890/DNC.2017.12.001 Jan Awrejcewicz, Dariusz Grzelczyk This is the Special Issue dedicated to the 13th International Conference on "Dynamical Systems - Theory and Applications" (DSTA-2015), which was held on December 7-10, 2015 in Lodz, Poland. The main aim of the conference was to provide a platform for researchers and engineers to present and discuss the current state and contemporary investigations in different disciplines of science, bioscience, and engineering. This issue presents eight selected manuscripts related to the modeling, analysis and control of nonlinear discrete and continuous mechanical structures met in mechatronic applications.
Dynamics of Waves in the Cubically Nonlinear Model for Mutually Penetrating Continua Vjacheslav Danylenko, Sergii Skurativskyi† In this report we study the mathematical model for mutually penetrating continua. This model consists of the wave equation describing the carrying medium and the equation for oscillators forming the oscillating inclusion. In prescribing the constitutive equation of the carrying medium and the kinetics of the oscillators' dynamics for the model in question, cubic nonlinearity is taken into account. We are interested in the structure of wave solutions obeying a dynamical system of Hamiltonian type. This allows us to determine the peculiarities of the phase space of the dynamical system, namely the relation describing the homoclinic trajectories, the division of the phase plane into parts with equivalent orbit behavior, and the conditions for bifurcations. To simulate the wave dynamics, we construct a three-level finite-difference numerical scheme and study the evolution of solitary waves, their pair interactions and stability. Propagation of periodic waves is modeled as well. Design of a Tracking Controller for Object Interception in Space Elzbieta Jarzebowska, Bartlomiej Pilarczyk The paper presents a model-based tracking controller design for a free-floating space robot for a mission scenario of intercepting an object. Such missions are of interest due to the growing number of objects that need to be removed from space. The free-floating mode requires spacecraft thrusters to be off, and linear and angular momentum are then conserved. Momentum conservation generates holonomic and nonholonomic constraint equations, respectively. The free-floating mode implies underactuation, so the robot becomes multi-constrained. Many control algorithms are designed for underactuated robots, but they are specific to the mission and robot.
Motivations for the presented research come from growing space exploration, which results in more space debris and requires sophisticated removal services. Service tasks and debris removal need to be performed by specialized robots. The debris interception scenario presented in the paper consists of estimation of target properties, a controller design to track and intercept the debris, and moving it to the graveyard orbit. Simulation results of the theoretical control development for the robot intercepting a non-tumbling object are provided. Implementation Assessment of a Wave Energy Converter, Based on Fully Enclosed Multi-axis Inertial Reaction Mechanisms Ioannis A. Antoniadis1, Vasilis Georgoutsos1, Andreas Paradeisiotis1, Stratis A. Kanarachos2, Konstantinos Gryllias3 This paper examines the implementation of a standalone 1 MW Wave Energy Converter (WEC), based on a novel concept of a class of WECs consisting in fully enclosed appropriate internal body configurations, which provide inertial reaction against the motion of an external vessel. Acting under the excitation of the waves, the external vessel is subjected to a simultaneous surge and pitch motion in all directions, ensuring maximum wave energy capture. The internal body is suspended from the external vessel body in such a geometrical configuration that a symmetric four-bar mechanism is essentially formed. The first advantage of this suspension geometry is that a linear trajectory results for the center of mass of the suspended body with respect to the external vessel, enabling the introduction of a quite simple form of Power Take-Off (PTO) design. The simplicity and symmetry of the suspension geometry and of the PTO ensure a quite simple and robust technological implementation. Mass and inertia distribution of the internal body are optimized, ensuring maximal conversion and storage of wave energy.
As a result, the internal body assembly is essentially dynamically equivalent to a vertical physical pendulum. However, the resulting equivalent pendulum length and inertia can far exceed those that can be achieved by a simple horizontal or vertical pendulum, suspended or inverted, leading to a significant reduction of the suspended mass. Multi-parametric Dependence of Deformation Work of Zona Pelucida in Fertilization Process Through Quasi-static Continual Shell-like ZP Model Andjelka Hedrih1†, Katica (Stevanovic) Hedrih1,2 Zona Pelucida (ZP) is a highly specialized glycosylated and sulfated polymer gel that surrounds the mammalian oocyte, exhibits elastic or viscoelastic properties, and changes diameter in different maturation stages. In the process of in vitro fertilization, sperm cells act on the external ZP surface and transfer a certain amount of energy to the ZP structure, which goes into the deformation work. ZP thickness and the number of motile sperm cells are important factors that may influence fertility. Using the quasi-static approximate ZP model in the form of a hollow sphere, a numerical analysis was carried out of how the specific deformation work depends on different variables: ZP thickness, the specific point in the ZP, and the external pressure that comes from sperm cells. According to the model, the sperm that exert pressure upon the ZP surface are in the form of a homogeneous discrete continuum distribution in radial directions. For a specific ZP point in the model, analytical expressions for the component stresses and strains were obtained, as well as for the deformations and volume dilatation in the ZP. The results are discussed from the biomechanical aspect of fertilization. Limitations of the model are also discussed. Fatigue Cracks Detection in Rectangular Plates with Circular Hole with the Use of Elastic Waves Marek Barski†, Adam Stawiarski, Piotr Pajak The current work is devoted to the problem of fatigue crack detection and evaluation in the case of isotropic rectangular plates with circular holes.
The cutout is located in the geometrical center of the plate. The structure is subjected to cyclic tension, which causes the formation of fatigue cracks in the vicinity of the hole. This type of damage can be effectively detected by analyzing elastic wave propagation. In the presented work the propagation of elastic waves in the vicinity of a hole is simulated with the use of the finite element method. It is assumed that the elastic waves are excited and picked up by piezoelectric sensors. The most effective position of the piezoelectric actuator is sought while the position of the array of sensors is fixed. Four different possibilities for the location of the actuator are studied. Moreover, an advanced algorithm for crack detection and evaluation is also discussed. Free Vibrations of Cantilever Bars with Linear and Nonlinear Variable Cross-Section Jacek Jaworski, Olga Szlachetka The topic of this study is the first mode of natural transverse vibrations of isotropic, homogeneous and elastic bars (columns or beams) with a clamped bottom and a free head. The columns of the first group are shaped as truncated solid cones or as tubes with linearly variable wall thickness and with different inclinations of the lateral faces, from cylinder to cone. The columns of the second group were shaped in a similar way, but the generatrices of the solids of revolution were curvilinear, in the shape of a parabola. The first frequency of free vibrations was determined using the Rayleigh method. The deflection line of the column axis during vibration was assumed in the form of the bending line of the column axis subjected to a uniform load. The resulting frequencies (or periods) were compared with those obtained with the use of FEM (ANSYS), and good agreement of the results was observed. As the expression for the energy of an elementary slice of material was integrated over the length of the rod, the formula for the frequency was obtained in the form of an integral equation.
In some cases an exact solution of the integral equation was obtained; however, in other cases only a numerical solution was possible. Study of Negative Elements for Discrete Mechatronic Systems Andrzej Buchacz†, Damian Gałęziowski In the paper, the known problem of vibration control has been studied for elements which have defined configurations and connections of a piezostack actuator with an external electric network and can exhibit negative values. Depending on the phase of the synthesis process used for the design of mechatronic systems that have to comply with given requirements, negative elements have been identified and described. Subsequently, with the study on the selection of the corresponding negative stiffness and damping, presented in graph form, optimal values for the systems can be compared. Following examples with selected calculations, the goal of this paper is to present limits and constraints that may support the physical realization, as well as applications of the considered systems. Fibre Spring-damper Computational Models in a Laboratory Mechanical System and Validation with Experimental Measurement Pavel Polach1†, Miroslav Byrtus1, Zbynek Sika2, Michal Hajzman1 The paper deals with searching for the phenomenological model of a fibre. A multibody model of a simple weight-fibre-pulley-drive mechanical system is created. This mechanical system is focused on the investigation of fibre behaviour, and experimental laboratory measurements are performed on it. The carbon fibre, which is driven by one drive, is led over a pulley. At its other end there is a prism-shaped steel weight, which moves in a prismatic linkage on an inclined plane. Drive excitation signals can be of different shapes, with the possibility of varying the signal rate. Time histories of the weight position and of the force acting in the fibre are measured.
In simulations with the multibody model of this system, the influence of the dependence of the fibre spring-damper coefficients on the velocity of the weight motion in the computational model is investigated, and the agreement between the simulation results and the experimental measurement results is evaluated. The aim of the simulations is to create a phenomenological model of the fibre which will be usable for fibre modelling in the case of more complicated mechanical or mechatronic systems. Existence and Uniqueness of Solutions for a Coupled System of Higher Order Fractional Differential Equations with Integral Boundary Conditions P. Duraisamy, T. Nandha Gopal In this article, we study the existence of solutions for a coupled system of higher order nonlinear fractional differential equations with non-local integral boundary conditions by using Schaefer's fixed point theorem, and the uniqueness result is proved by the contraction mapping principle. Finally, examples are provided to illustrate the applicability of our main results. Almost Periodicity in Chaos pp. 15-29 | DOI: 10.5890/DNC.2018.03.002 Marat Akhmet, Mehmet Onur Fen Periodicity has played a significant role in chaos theory from the beginning, since the skeleton of chaos can consist of infinitely many unstable periodic motions. This is true for chaos in the sense of Devaney [1], Li-Yorke [2] and the one obtained through a period-doubling cascade [3]. A countable number of periodic orbits exists in any neighborhood of a structurally stable Poincare homoclinic orbit, which can be considered as a criterion for the presence of complex dynamics [4–6]. It was certified by Shilnikov [7] and Seifert [8] that it is possible to replace periodic solutions by Poisson stable or almost periodic motions in a chaotic attractor. Despite the fact that the idea of replacing periodic solutions by other types of regular motions is attractive, very few results have been obtained on the subject.
The present study contributes to chaos theory in that direction. In this paper, we consider chaos both through a cascade of almost periodic solutions and in the sense of Li-Yorke, such that the original Li-Yorke definition is modified by replacing infinitely many periodic motions with almost periodic ones, which are separated from the motions of the scrambled set. The theoretical results are valid for systems of arbitrarily high dimension. Formation of the chaos is exemplified by means of unidirectionally coupled Duffing oscillators. The controllability of the extended chaos is demonstrated numerically by means of the Ott-Grebogi-Yorke [9] control technique. In particular, the stabilization of tori is illustrated.

Bäcklund Transformation and Quasi-Integrable Deformation of Mixed Fermi-Pasta-Ulam and Frenkel-Kontorova Models
Kumar Abhinav, A. Ghose Choudhury, Partha Guha
In this paper we study a nonlinear partial differential equation (PDE), proposed by Kudryashov [arXiv:1611.06813v1 [nlin.SI]], using a continuum limit approximation of mixed Fermi-Pasta-Ulam and Frenkel-Kontorova models. This generalized semi-discrete equation can be considered a model for the description of nonlinear dislocation waves in a crystal lattice, and the corresponding continuous system can be called the mixed generalized potential KdV and sine-Gordon equation. We obtain the Bäcklund transformation of this equation in Riccati form in the inverse method. We further study the quasi-integrable deformation of this model.

A New Comparison Theorem and Stability Analysis of Fractional Order Cohen-Grossberg Neural Networks
Xiaolei Liu, Jian Yuan, Gang Zhou, Wenfei Zhao
This paper proposes a new comparison theorem and stability analysis of fractional order Cohen-Grossberg neural networks. Firstly, a new comparison theorem for fractional order systems is proved.
Secondly, the stability of a class of fractional order Cohen-Grossberg neural networks with the Caputo derivative is investigated on the basis of the above comparison theorem. Thirdly, sufficient conditions for stability of the neural networks are obtained using the property of Mittag-Leffler functions, the generalized Gronwall-Bellman inequality and the method of integral transform. Furthermore, a numerical simulation example is presented to illustrate the effectiveness of these results.

Existence of Solutions of Stochastic Fractional Integrodifferential Equations
P. Umamaheswari, K. Balachandran, N. Annapoorani
In this paper, a general class of stochastic fractional integrodifferential equations is investigated. The Picard-Lindelöf successive approximation scheme is used to establish the existence of solutions. The uniqueness of the solution is also studied under suitable conditions.

Evolution Towards the Steady State in a Hopf Bifurcation: A Scaling Investigation
Vinícius B. da Silva, Edson D. Leonel
Some scaling properties describing the convergence to the steady state in a Hopf bifurcation are discussed. Two different procedures are considered in the investigation: (i) a phenomenological description obtained from time series coming from the numerical integration of the system, leading to a set of critical exponents and hence to scaling laws; (ii) a direct solution of the differential equations, which is possible only in the normal form. At the bifurcation, the convergence to the stationary state obeys a generalized and homogeneous function. For short times, the dynamics given by the distance from the fixed point is mostly constant until a critical time is reached, after which the dynamics changes to a convergence to the steady state given by a power law. Both the size of the constant plateau and the characteristic crossover time depend on the initial distance from the fixed point.
Near the bifurcation, the convergence is described by an exponential decay with a relaxation time given by a power law.

Integrability of a time dependent coupled harmonic oscillator in higher dimensions
Anand Malik, Ram Mehar Singh, S. B. Bhardwaj, Kushal Sharma, Richa Rani, Fakir Chand
Within the framework of an extended complex phase space characterized by x = x1 + ip4, y = x2 + ip5, z = x3 + ip6, px = p1 + ix4, py = p2 + ix5 and pz = p3 + ix6, we investigate the exact dynamical invariant for a coupled harmonic system in three dimensions. For this purpose the Lie-algebraic method is employed, and the invariant obtained in this work may play an important role in reducing the order of differential equations, in the solution of Cauchy systems, and in checking the accuracy of a numerical simulation.

Approximation of random fixed point theorems
pp. 95-105 | DOI: 10.5890/DNC.2018.03.008
The aim of this paper is to establish and discuss the approximation of Caristi's random fixed point theorems. Our theorem can be used to solve a large number of nonlinear stochastic problems.

Three-point multi-term fractional integral boundary value problems of fractional functional differential equations with delay
K. Sathiyanathan, V. Krishnaveni, M. Sivabalan
In this paper, we study fractional functional differential equations with three-point multi-term boundary conditions. Our method of analysis is based on the reduction of the given system to an equivalent system of integral equations. Existence and uniqueness results are obtained by using the Schauder fixed point theorem and the contraction principle. An illustrative example is also presented.

Dynamics and stability results of fractional integro-differential equations with complex order
D. Vivek, K. Kanagarajan, S. Harikrishnan
In this paper, we study the existence, uniqueness and Ulam stability of solutions of fractional integro-differential equations with complex order.
Based on the Krasnoselskii fixed point theorem and the Banach contraction principle, we obtain existence and Ulam stability results.

Nonlinear Time Series Analysis of Complex Systems Using an e-Science Web Framework
Bruno B. F. Leonor, Walter A. dos Santos, Asiel Bomfin Jr., Reinaldo R. Rosa
The analysis of time series in the era of Big Data has become a major challenge for computational framework research. Furthermore, in the areas of space science, which deal with a large variety of data, practical consistency between workload, workflow and cloud computing is crucial. Here, such consistency is provided by an innovative e-Science framework project named Sentinel, which is based on a NoSQL database (MongoDB) and a containerization platform (Docker). This web framework supports researchers in time series analysis in a cloud environment where they can easily access, parameterize, initialize and monitor their applications. As a case study in the Brazilian Space Weather Program, we consider the intensive analysis of time series from a complex information system for solar activity monitoring and forecasting. As a prototype for implementing the framework, the DFA (detrended fluctuation analysis) technique was used as a nonlinear spectrum analyzer applied to solar irradiance measurements from 1978 to 2012. Moreover, new applications can easily be added and managed by researchers on the portal to complement their data analysis purposes.

Simulating the Interaction of a Comet With the Solar Wind Using a Magnetohydrodynamic Model
Edgard de F. D. Evangelista, Margarete O. Domingues, Odim Mendes, Oswaldo D. Miranda
We present simulations of a comet interacting with the solar wind. Such simulations are treated in the framework of ideal 2D magnetohydrodynamics (MHD), using the FLASH code to solve the equations of this formalism. Besides, the comet is treated as a spherically symmetric source of ions in the MHD equations.
We generate results considering several scenarios, using different values for the physical parameters of the solar wind and of the comet in each case. Our aim is to study the influence of the solar wind on the characteristics of the comet and, given the nonlinear nature of the MHD, to search for the occurrence of phenomena typical of nonlinear systems, such as instabilities and turbulence.

Using RQA for evaluating heart rate patterns under controlled breathing
Guênia Ladeira, Luís Cláudio Oliveira-Lopes, Gabriela V. Lima
In this work, the variation of time and amplitude of the electrical stimulation of the heart muscle is analyzed during a respiratory control exercise, an effect of the autonomic nervous system. The specifics are verified by recurrence plots and recurrence quantification analysis. The analysis revealed distinguished behavior of the electrical stimulation of the heart, and the cross recurrence between the variations of amplitude and time demonstrated the equilibrium relationship in the cardiac cycles, which is accentuated in the concentration exercise.

Using Different Interval Extensions to Increase the Accuracy of the Exact Solution on Recursive Functions
H. M. Rodrigues Junior, M. L. C. Peixoto, M. E. G. Nepomuceno, S. A. M. Martins
The results obtained by numerical computation are not always precise. This happens because a computer has storage limitations and its set of numbers is finite. In this work, interval analysis is used to give bounds around the exact solution of the logistic map. The connection between computer and interval mathematics makes it possible to solve problems that cannot be solved efficiently using floating-point arithmetic. We use the intersection of different pseudo-orbits obtained by interval extensions to reduce the bounds of the exact solution. The method is applied using the Intlab toolbox.
Without any substantial computational effort, we show a reduction of up to 26% in the width of the interval by applying the method proposed in this paper.

Semantic networks of keywords from Brazilian dissertations on physics teaching
J. O. do Nascimento, R. L. S. Monteiro, D. M. Moreira, M. A. M. Simões Gonçalves, H. B. de Barros Pereira
Metrics of social and complex networks can be used in the study of emergent patterns in semantic networks. The goal of this article is to analyze the semantic network formed by the keywords of theses and dissertations on physics teaching defended in Brazil between 1996 and 2006. We describe the method used for constructing the network, including the keywords. We analyze the network's properties in local and global contexts. We study which keywords are cut points, since their removal from the network results in a number of components greater than one. Our results suggest that the network is small-world and scale-free. In addition, using the proposed method, we verified that most of the topics emphasized in the network are related to the education of physics instructors and not to methods of teaching physics at different levels.

Exploiting the rounding mode of floating-point in the simulation of Chua's circuit
M. R. Silva, E. G. Nepomuceno, G. F. V. Amaral, S. A. M. Martins, L. G. Nardo
Chua's circuit has been considered one of the most important paradigms for nonlinear science studies. Its simulation is usually undertaken by means of numerical methods under the rules of the IEEE 754-2008 floating-point arithmetic standard. Although the propagation-error issue is well known, less attention has been given to its consequences for the simulation of Chua's circuit. In this paper we present a simulation technique for Chua's circuit that exhibits qualitative differences from traditional approaches such as RK3, RK4 and RK5.
By means of the largest positive Lyapunov exponent, we show that for the same initial condition and the same set of parameters, we can produce both a periodic and a chaotic solution.

Nonlinear integral inequalities with parameter and bounded solutions of dynamical systems
Taoufik Ghrissi, M. A. Hammami
We present in this paper some new nonlinear integral inequalities with parameter, and we use them to show the boundedness of solutions of perturbed nonlinear time-varying differential equations. Furthermore, some examples are given to illustrate the applicability of the obtained results.

Polar Amplification Projected by Energy Balance Model with Nonlinear Parametrization of Outgoing Longwave Radiation
Dmitry V. Kovalevsky, Genrikh V. Alekseev
A linear energy balance model (EBM) of polar amplification (PA) developed by Langen and Alexeev (2007) is modified by replacing the conventional linear parametrization of outgoing longwave radiation (OLR) with a more accurate nonlinear (quadratic) approximation for OLR. The nonlinear version of the model is studied analytically and numerically. The numerical results presented in the paper convincingly demonstrate the importance of adopting the nonlinear approximation for parametrizing OLR in EBMs. The simulation results yielded by the nonlinear version of the model are visibly different from those provided by the linear model, especially for simulations of climate change in high latitudes and for quantification of PA with dedicated indices.

Global Stability Analysis of a General Scalar Difference Equation
Huseyin Merdan, Ozlem Ak Gumus, Gamzegul Karahisarli
We consider a general first-order scalar difference equation with and without an Allee effect. The model without the Allee effect represents asexual reproduction of a species, while the model including the Allee effect represents sexual reproduction. We analyze the global stability of both models analytically and compare the results obtained.
Numerical simulations are included to support the analytical results. We conclude that the Allee effect has a destabilizing effect on the global stability of the model. This result differs from the local stability behaviour of the same fixed point of the model.

On the Well-posedness of the Magnetic, Semi-relativistic Schrödinger-Poisson System
We prove global existence and uniqueness of strong solutions for the Schrödinger-Poisson system in the repulsive Coulomb case with relativistic, magnetic kinetic energy.

Asymptotic Stability of Caputo Fractional Singular Differential Systems with Multiple Delays
Sivaraj Priyadharsini, Venkatesan Govindaraj
In this work, Lyapunov functions combined with matrix inequalities or matrix equations are developed to analyze the asymptotic stability of Caputo fractional singular systems with multiple time-varying delays. Integer-order derivatives of the Lyapunov functions are used to derive asymptotic stability criteria. The main advantage of applying the stability criteria is that there is no need to solve roots of transcendental equations. Some examples are provided to explain the effectiveness of the proposed criteria.

Stability of Fractional Differential Equations without Singular Kernel
S. Harikrishnan, K. Kanagarajan, D. Vivek
In this paper, we establish four types of Ulam stability, namely Ulam-Hyers stability, generalized Ulam-Hyers stability, Ulam-Hyers-Rassias stability and generalized Ulam-Hyers-Rassias stability, for differential equations of fractional order.

Van der Pol Oscillators Generated from Grazing Dynamics
M. U. Akhmet, A. Kıvılcım
In this paper, we consider two coupled Van der Pol equations with impacts. The main novelty is that the degenerated system for the model admits an oscillation with zero impact velocity. To prove the presence of oscillations, besides the perturbation method, the newly developed linearization for dynamics with grazing has been applied.
Differently from theoretical results such as the Nordmark mapping and the zero-time discontinuity mapping, the grazing is examined through another method of discontinuous dynamics, which diminishes the role of mappings in the analysis. A rich diversity of changes in the dynamics is observed under regular perturbations because of the grazing discontinuity. The analytical studies are visualized by means of simulation results.

A novel controllability analysis of impulsive fractional linear time invariant systems with state delay and distributed delays in control
B. Sundara Vadivoo, R. Raja, R. P. Agarwal, G. Rajchakit
In this paper, we investigate the controllability of impulsive fractional linear time invariant systems with state delay and distributed delays in control. By using the controllability Grammian matrix, which is defined by the Mittag-Leffler matrix function, a new set of sufficient conditions is obtained for the considered system to be controllable. Finally, two numerical examples are given to demonstrate the validity of the obtained theoretical results.

Tikhonov theorem for differential equations with singular impulses
M. Akhmet, S. Cag
The paper considers impulsive systems with singularities. The main novelty of the present research is that the impulses (impulsive functions) are singular, in addition to the singularity of the differential equations. The most general Tikhonov theorem for impulsive systems is obtained. Illustrative examples with simulations are given to support the theoretical results.

Approximate Controllability Results for Impulsive Partial Functional Nonlocal Integro-differential Evolution Systems through Resolvent Operators
Mahalingam Nagaraj, Selvaraj Suganya, Dumitru Baleanu, Mani Mallika Arjunan
This paper investigates the existence and approximate controllability results for a class of impulsive functional integro-differential evolution systems with nonlocal conditions via resolvent operators in Banach spaces.
By making use of the Banach contraction principle and Schaefer's fixed point theorem, along with resolvent operators and semigroup theory, we establish the desired results. As an application, we also consider an impulsive partial functional integro-differential equation.

Nonlinear Modifications of Perturbation Theory with Some Examples of Applications to Complex System Analysis
V. V. Uchaikin, V. A. Litvinov, E. V. Kozhemjakina
The article reviews some nonlinear generalizations of perturbation theory performed by various authors (including some co-authors of this article) over the years, which can be useful in analyzing complex systems and inverse problems. The problem of widening the range of applicability of perturbation theory while simultaneously reducing the complexity of the computational algorithm is considered. The consideration is based on the duality principle, including adjoint (in the Lagrange sense) operators and importance functions. The article contains an exposition of perturbation theory of higher orders, variational methods of higher orders, symmetry transformations and some other questions. Special attention is paid to such a promising approach as multi-reference methods. It is shown that this approach extends the range of applicability of perturbation theory and saves calculation time. To demonstrate details of the method, some examples of transport problems from reactor engineering and astrophysics are considered.

On I-lacunary summability methods of order α in intuitionistic fuzzy n-normed spaces
E. Savas
In this paper we introduce and study the notions of I-statistical convergence of order α and I-lacunary statistical convergence of order α with respect to intuitionistic fuzzy n-normed spaces; we also investigate their relationship, and some inclusion theorems are proved.

Modularity, redundancy and the problem of "sex"
Christopher R. Stephens
Homologous recombination, and the associated notion of sexual reproduction, are ubiquitous in the evolution of biological organisms. However, there is still no clear, generally accepted understanding of why it exists and under what circumstances it is useful. Here, we argue that its utility is strongly linked to specific types of fitness landscape, in particular landscapes that are quasi-modular (weak additive epistasis) or redundant (negative epistasis), two properties that are also ubiquitous in biological systems. We further argue that recombination and modularity together are responsible for the formation of "building block hierarchies" where, to make compatible the two different types of building block (landscape blocks associated with modular fitness landscapes and Building Block schemata defined by the recombination distribution), a meta-evolution is necessary, wherein the recombination distribution itself evolves, leading to recombination hot spots at the boundaries of landscape blocks. We finally argue that recombinative dynamics and modular landscapes are necessary conditions for the formation of building block hierarchies and, ultimately, life itself.

Complex Inference Networks: A new tool for spatial modelling
Christopher R. Stephens, Raúl Sierra Alcocer, Constantino González Salazar
All systems (physical, biological, ecological and social) are composed of hierarchies of building blocks (atoms, molecules, cells, tissues, individuals, species, etc.) with corresponding interactions, wherein the presence of interactions, attractive and repulsive, affects the relative spatio-temporal distribution of the building blocks. In physical systems, in particular, the structure of the building blocks and the nature of their interactions have been deduced via systematic observations of their positions in space and time.
Unfortunately, Complex Adaptive Systems are highly multi-factorial, so that, unlike many physical systems, it is impossible to systematically observe and characterize each and every interaction that exists in such systems. In this paper we discuss a general framework, Complex Inference Networks, wherein interactions, particularly in Complex Adaptive Systems, may be studied and characterized using position data about their building blocks. We compare and contrast physical versus Complex Adaptive Systems and give as an explicit example the identification of disease hosts in the ecology of emerging and neglected diseases, where it has been possible to discover previously unknown ecological interactions from species co-occurrence data.

MS-Stability Analysis of Predictor-Corrector Schemes for Stochastic Differential Equations
R. Zeghdane, A. Tocino
Deterministic predictor-corrector schemes are used mainly because of their numerical stability, which they inherit from the implicit counterparts of their corrector schemes. In principle these advantages carry over to the stochastic case. In this paper a complete study of the linear MS-stability of the one-parameter family of weak order 1.0 predictor-corrector Taylor schemes for scalar stochastic differential equations is given. Figures of the MS-stability regions that confirm the theoretical results are shown. It is also shown that mean-square A-stability is recovered if the parameter is increased.

Investigation into the Regular and Chaotic States of Twitter
Victor Dmitriev, Andrey Dmitriev
The present paper is devoted to the investigation of the nonlinear dynamics of Twitter. A new model of Twitter as a thermodynamic non-equilibrium system is suggested. The dynamic variables of such a system are represented by the variations of tweet/retweet number and the instantaneous diversity between the densities of population on different levels around the equilibrium values. Regular and chaotic states of the networks are described.
It is pointed out that the system is in a condition of asymptotically stable equilibrium when the intensity of external information is small (the number of tweets eventually tends to its equilibrium value). If the intensity of external information exceeds the critical value, then chaotic oscillations of tweets are observed. We have calculated the correlation dimension and embedding dimension for the dynamics of the 10 most popular @ (TOP 100 by data of Twitter Counter). The results show that all observed time series have a clearly defined chaotic dynamical nature.

The Effect of non-Selective Harvesting in Predator-Prey Model With Modified Leslie-Gower and Holling Type II Schemes
I. El Harraki, R. Yafia, A. Boutoulout, M. A. Aziz-Alaoui
In this paper, we study the effect of harvesting on the qualitative properties of a predator-prey model with modified Leslie-Gower and Holling Type II functional responses. The model is given by a system of two ordinary differential equations with non-selective constant harvesting terms. We investigate the impact of the harvesting terms on the boundedness of solutions, on the existence of the attraction set, and on the stability of the different equilibrium points. A Lyapunov function is used to prove the global stability of the interior equilibrium. We also discuss the optimal harvest policy, and we obtain the solution for the interior equilibrium by the Pontryagin maximum principle. Finally, our theoretical results are illustrated by numerical simulations.

Ant Colony Optimization Algorithm for Lesion Border Detection in Dermoscopic Images
Asmae Ennaji, Abdellah Aarab
Medical image segmentation plays a crucial role in computer-aided diagnosis systems, which have a significant potential for early detection of skin cancer. The aim of the segmentation process in this field is to facilitate the characterization and visualization of the lesion in dermoscopic images.
This paper proposes a new method for improving lesion border detection in dermoscopic images, based on the ant colony optimization algorithm. Our experiments show that the proposed method achieves a significant improvement in image segmentation when compared to the deterministic Canny procedure.

Bifurcation diagrams and Fomenko's Surgery on Liouville tori of a rigid body in the Goryachev-Chaplygin case on Sokolov terms
Jaouad Kharbach, Mohammed Benkhali, Walid Chatar, Ahmed Sali, Abdellah Rezzouk, Mohammed Ouazzani-Jamil
In this paper, by taking advantage of the richness of Fomenko's theory of surgery on (bifurcations of) Liouville tori, we give a complete description of the topology and bifurcations of the invariant level sets of a heavy rigid body around a fixed point corresponding to the Goryachev-Chaplygin case on Sokolov terms. In particular, for all values of the parameters of the system, the bifurcation diagrams of the momentum mapping are constructed, bifurcations of the common level sets of the first integrals are described, explicit periodic solutions are determined, and the topology of the invariant manifolds and all generic bifurcations are illustrated numerically.

Approximate Controllability of Impulsive Neutral Functional Integrodifferential Systems with Nonlocal Conditions
A. Yasotha, K. Kanagarajan
In this paper, we study the approximate controllability of impulsive neutral functional integrodifferential systems with finite delay. The fractional power theory and the α-norm are used to discuss the problem so that the obtained results can apply to systems involving derivatives of spatial variables. By methods of functional analysis and semigroup theory, sufficient conditions for approximate controllability of the impulsive integrodifferential equation are formulated and proved. An example is provided to illustrate the theory.
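Several of the controllability abstracts in this issue rest on a Grammian (Gramian) invertibility test. As a minimal finite-dimensional sketch only (the papers' settings involve fractional derivatives, delays, impulses, or semigroups, none of which this toy example captures), the classical criterion for a linear system x' = Ax + Bu can be checked numerically; the matrices A and B below are hypothetical illustrative choices, and the truncated Taylor series for the matrix exponential is adequate only because ||At|| is small here.

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential via truncated Taylor series (fine for small ||M||)."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

def controllability_grammian(A, B, T=1.0, n_steps=400):
    """Approximate W(T) = int_0^T e^{At} B B^T e^{A^T t} dt by a Riemann sum."""
    n = A.shape[0]
    dt = T / n_steps
    W = np.zeros((n, n))
    for k in range(n_steps):
        M = expm_taylor(A * (k * dt)) @ B
        W += (M @ M.T) * dt
    return W

# Hypothetical damped oscillator with scalar input: x'' + 3x' + 2x = u
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])

W = controllability_grammian(A, B)
# (A, B) is a controllable companion-form pair, so W is positive definite
# and the full-rank test succeeds.
print(np.linalg.matrix_rank(W))  # prints 2
```

The same rank/invertibility logic is what the fractional and delayed variants generalize, with the matrix exponential replaced by a Mittag-Leffler matrix function or a semigroup.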
Stable and Unstable Behaviors for Brushless Motor with Harmonic Disturbance via Discrete Implicit Maps
Jianzhe Huang
A dynamic model for a brushless motor in the rotating frame is studied in this paper, where a periodic disturbance is introduced in the quadrature-axis voltage. The methodology of discrete implicit maps is adopted to investigate the dynamical response of such a brushless motor system. Multiple closed semi-analytic bifurcation routes for period-1 motions are found by varying the frequency of the disturbance of the quadrature-axis voltage. The stability conditions for periodic motions are given through eigenvalue analysis. From the semi-analytical prediction of period-1 motions, the corresponding frequency-amplitude characteristics are obtained. Finally, the stable and unstable period-1 motions are presented numerically. With such a discrete implicit maps method, the unstable motion of a brushless motor with strong nonlinearity can be obtained. Compared to the method of generalized harmonic balance, the dimension of the Jacobian matrix can be dramatically decreased, and numerical error can be avoided without using the QR algorithm.

Existence, Uniqueness and Stability Results for Impulsive Neutral Stochastic Functional Differential Equations with Infinite Delay and Poisson Jumps
pp. 1-11 | DOI: 10.5890/DNC.2019.03.001
A. Anguraj, K. Banupriya
In this paper, we study the existence and uniqueness of mild solutions of impulsive neutral stochastic functional differential equations with infinite delay and Poisson jumps under a non-Lipschitz condition, with the Lipschitz condition considered as a special case, by means of successive approximation. Further, we study the continuous dependence of solutions on the initial value by means of a corollary of the Bihari inequality.

Synchronization of Time-delay Chaotic Systems with Uncertainties and External Disturbances
Ajit K. Singh, Vijay K. Yadav, Subir Das
In this article, the authors have studied the problem of synchronization of time-delay chaotic systems with uncertainties and external disturbances. The effectiveness of the problem statement is visualized through synchronization of a time-delay advanced Lorenz system and a double time-delay Rössler system with parametric uncertainties and disturbances using the active control method. Numerical simulations are carried out using the Runge-Kutta algorithm for delay differential equations (DDEs), and the results are depicted through graphs. The physical meaning of a time-delay system is that a signal is transmitted and the data received at a later time, which is found to occur in active sensing problems. The salient feature of the article is the demonstration of the efficiency of the considered method during synchronization of time-delay chaotic systems even in the presence of uncertainties and external disturbances.

Spatiotemporal Dynamics of Meta Population Models with Application in Law
R. O. Walo, A. M. Ndondo, S. B. Mushi
The paper deals with a spatiotemporal dynamics study of criminality city-by-city, in an attempt to model mechanisms which give rise to the synchronised outbreak of crimes in different cities and villages. The study gives sharp relations between social and penal pressures.

Dynamics and Stability Results for Nonlinear Neutral Pantograph Equations via Hilfer-Hadamard Fractional Derivative
A study of the existence and uniqueness of a solution for nonlinear neutral pantograph equations with the Hilfer-Hadamard fractional derivative is the main aim of this paper. The existence result is obtained by using the Schauder fixed point theorem. Also, we discuss Ulam stability for these equations based on the Banach contraction principle.

Superdiffusive Searching Skill in Animal Foraging
Kishore Dutta
Random walks related to diffusion-driven transport are widely used to describe phenomena in the contexts of physics, biology, statistics, and econometrics.
However, contrary to the normal (Gaussian) diffusion process, anomalous diffusion takes place in many natural phenomena ranging from turbulence to animal foraging. In situations where the spreading process is more rapid than normal diffusion, the distribution of steps comprises many short movements broken episodically by less frequent long excursions, commonly known as Lévy flight. Recent quantification of high-resolution foraging data of many free-ranging animals suggests that they display Lévy flight behavior in environments where prey is scarce. This article describes the fundamental properties of Lévy flight and how it encodes the statistical characteristics of animal foraging patterns.

Existence of Solutions for Boundary Value Problem of Non-linear Integro-differential Equations of Fractional Order
J. Kavitha, V. Sadhasivam
In this article, we develop the existence theory for a boundary value problem of fractional integro-differential equations together with fractional integro-differential boundary conditions. By using coincidence degree theory, we obtain new criteria for the existence of solutions of the given boundary value problems. We present an example to illustrate our main results.

Approximate Controllability of Sobolev-Type Fractional Neutral Evolution Inclusions
M. Tamilselvan, R. Murugesu
In this work, we establish a set of sufficient conditions for the approximate controllability of a class of Sobolev-type fractional neutral evolution inclusions in Banach spaces. We use Bohnenblust-Karlin's fixed point theorem to prove our main results. Further, we extend our result to study the approximate controllability of a nonlinear fractional control system with nonlocal conditions. An example is also given to illustrate our main results.

Controllability of Nonlinear Fractional Langevin Systems
P. Suresh Kumar, V. Govindaraj, K. Balachandran, N. Annapoorani
In this paper, we first derive the solution representation of the fractional Langevin differential equation, represented by the fractional differential coefficient in the sense of the Caputo fractional derivative, in terms of the Mittag-Leffler function. Based on this solution representation, the controllability of linear fractional Langevin dynamical systems is studied by using the Grammian matrix. Sufficient conditions for the controllability of the nonlinear system are established by using Schauder's fixed point theorem. An example is given to verify the results.

A New Local Fractional Derivative of q-uniform Type
Juan E. Nápoles Valdés, Jorge A. Castillo Medina, Paulo M. Guzmán, Luciano Miguel Lugo
In this work we present a new derivative of q-uniform type, which contains several definitions of known q-derivatives. Some examples and properties similar to those of the ordinary calculus are also presented.

Dynamics of Modified Leslie-Gower Harvested Predator-Prey Model with Beddington-DeAngelis Functional Response
R. Sivasamy, M. Sivakumar, K. Sathiyanathan, K. Balachandran
This paper considers a modified Leslie-Gower predator-prey model with Beddington-DeAngelis functional response and a nonlinear prey harvesting strategy, subject to zero-flux boundary conditions. To understand the dynamics of the considered system, we derive sufficient conditions for permanence, local stability, global stability and Hopf bifurcation of the interior equilibrium point. Further, we also investigate the existence and non-existence of non-constant positive steady state solutions.

Stability and Bifurcation Analysis of Three Species Predator-Prey Model with Holling Type II
M. Sambath, C. Gokila
This paper considers a prey-predator model with an omnivore population and Holling type II response. First, we study the boundedness of the model system.
The local and global stability of the equilibrium is investigated by analyzing the eigenvalues and constructing appropriate Lyapunov functions, respectively. The persistence of the positive equilibrium is also discussed. The existence of Hopf bifurcation is investigated by analyzing the distribution of eigenvalues at the positive equilibrium point. By using the normal form theory, explicit formulae which determine the direction of the bifurcating periodic solutions are derived. Some numerical simulations are carried out to check our theoretical results.

Dynamical Behaviours of a Lotka-Volterra Model with Katugampola Fractional Derivative
L. Vignesh, K. Kanagarajan, D. Vivek
In this paper, we study the dynamical behaviours of a very special type of delay differential equation known as the Lotka-Volterra model (predator-prey model) via the Katugampola fractional derivative. The existence and uniqueness results are obtained using Krasnoselskii's fixed point theorem. Moreover, we derive a fractional Euler's method for delay differential equations. Illustrative examples are solved to show the applicability of the theoretical results.

Permanence of Stochastic Biological Systems
The area of research related to prey-predator systems is an important topic, and the concept of permanence is an important issue related to biological systems. In general, permanence is considered as a combination of persistence and boundedness. Following this, the paper reviews a few existing definitions of stochastic permanence. Building on the existing definitions, a new definition of permanence for stochastic biological systems is proposed, which modifies the existing ones. The permanence of a general Lotka-Volterra stochastic two-species cooperation, competition and predation system is proved under the new definition.

Exact Solutions and Stability Analysis of a Nonlinear Model of Open-Ocean Deep Convection that Allows Multiple Steady States
Igor L. Bashmachnikov, Dmitry V. Kovalevsky
We present analytical solutions of the two-basin model of open-ocean deep convection. Originally suggested by Whitehead [Whitehead, J.A. (2000), Stratified convection with multiple states, Ocean Modelling, 2(3-4), 109-121], the model allows regimes with multiple steady states (multiple equilibria). We provide the full analytical description of the steady states for the particular case of constant surface heat flux from the ocean to the atmosphere, and explore analytically the stability of the equilibria within the Lyapunov theory. The results show that, for this particular case, the steady state is unique and stable for all dynamic flow regimes. We also present analytical expressions for the dependence of the critical values of sea-surface heat flux, at which transitions between the dynamic regimes occur, on the model parameters.

A Fundamental Approach on Non-integer Order Differential Equation Using Nonlocal Fractional Sub-Strips Boundary Conditions
M. Subramanian, A.R. Vidhya Kumar, T. Nandha Gopal
The intention of this article is to exhibit the validity and applicability of a fractional sub-strips integral boundary condition on a nonlinear differential equation of fractional order. We propose convergent and stable methods to validate the existence and uniqueness of the stated problem. The scheme is also provided with illustrative examples.

On the Way to Generalized Kinetics for Multi-scale Complex System
V. V. Uchaikin
Development of nanotechnology throws a new challenge to statistical physics. Classical statistical physics is underlaid by classical or quantum mechanics of closed systems, the classical version of the Central Limit Theorem, the Markov model of Brownian motion, and the linear approximation of perturbation theory as the main mathematical tools.
Nanotechnology has prompted the construction of new models: dynamics nonlocal in space and time, a generalized Limit Theorem with the Lévy-stable family as limit distributions, non-Markovian (subordinated) Lévy motion instead of the Brownian one, and self-similar (fractal) and non-local (integral and fractional differential) analysis. The article gives a short review of these concepts. The multiscale complex system is considered to execute a self-similar motion obeying the Lévy stable statistics, which is then generalized to the fractionally stable kind. The intermediate asymptotics is introduced on the basis of the truncated Lévy flight. This leads to the correspondence principle connecting classical and nano-statistics. Some numerical results are presented.

Regional Observability with Constraints on the State of Semilinear Parabolic Systems
Hayat Zouiten, Ali Boutoulout, Fatima-Zahrae El Alaoui
The paper is devoted to the investigation of regional observability with constraints on the state of semilinear parabolic systems. The purpose is to reconstruct the initial state between two prescribed functions only on an internal subregion ω of the system evolution domain Ω. The proofs use two approaches, the subdifferential and HUM approaches. Finally, a numerical example is provided to verify the effectiveness of our theoretical results.

Controllability Criteria for Nonlinear Higher Order Fractional Delay Dynamical Systems with Impulses
M. Sivabalan, K. Sathiyanathan
This paper is concerned with the controllability of nonlinear higher order fractional delay systems with impulses, which involve Caputo derivatives of different orders. A necessary and sufficient condition for the controllability of linear fractional delay dynamical systems with impulses is proved, and a sufficient condition for the corresponding nonlinear systems is obtained. Examples are given to verify the results.
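The controllability criteria in the entries above concern fractional delay and neutral systems; as a classical point of reference, the integer-order, delay-free analogue is the Kalman rank condition. The sketch below illustrates only that classical test on invented example matrices; it is not the fractional-delay criterion proved in the paper.

```python
import numpy as np

def kalman_rank_controllable(A, B):
    """Classical Kalman rank test: (A, B) is controllable iff
    rank [B, AB, ..., A^(n-1) B] = n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    C = np.hstack(blocks)  # controllability matrix
    return bool(np.linalg.matrix_rank(C) == n)

# Double integrator x1' = x2, x2' = u: controllable
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(kalman_rank_controllable(A, B))      # True

# Actuating only the first state leaves x2 unreachable
B_bad = np.array([[1.0], [0.0]])
print(kalman_rank_controllable(A, B_bad))  # False
```

For delay or fractional-order systems the rank condition is replaced by Gramian-type criteria, as in the entry on fractional Langevin systems above.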
Relation Between the Autocorrelation Sequence and the Average Shortest-Path Length in a Time Series to Network Mapping
Amanda Leite de Camargo, Marcio Eisencraft
An invertible mapping between time series and networks was recently proposed. It can be used as a tool to figure out properties of the mapped time series. In the present work we use controlled artificial signals to numerically investigate how correlation properties of time series are mapped into the topological measures of the associated networks. More specifically, we employ filtered uniform white noise and analyse how the autocorrelation sequence influences the average shortest-path length.

Approximate Controllability of Second-order Neutral Stochastic Non-autonomous Integrodifferential Inclusions by Resolvent Operators
R. Nirmalkumar, R. Murugesu
In this paper, we formulate a set of sufficient conditions for the approximate controllability for a class of second-order neutral stochastic non-autonomous integrodifferential inclusions in Hilbert space. We establish the results with the help of resolvent operators, and Bohnenblust-Karlin's fixed point theorem is used to prove the main result. An application is given to illustrate the main result.

A Study on Langevin Equations with ψ-Hilfer Fractional Derivative
In this paper, we discuss the existence, uniqueness and four types of Ulam stability results for a general class of Langevin equations. An illustrative example is given to check the applicability of these results.

Fuzzy (η, ϕ)-mixed Vector Equilibrium Problems
In this research document, we study the fuzzy (η, ϕ)-mixed vector equilibrium problems. By using the concept of KKM-mappings, we prove an existence theorem for fuzzy (η, ϕ)-mixed vector equilibrium problems.

Quadratic Operators Defined on a Finite-dimensional Simplex of Idempotent Measures
I. T. Juraev, M. M. Karimov
We describe some quadratic operators which map the (n−1)-dimensional simplex of idempotent measures to itself.
Such operators are divided into two classes: the first class contains all n×n×n cubic matrices with non-positive entries in which each n×n-dimensional k-th matrix contains exactly one non-zero row and exactly one non-zero column; the second class contains all n×n×n cubic matrices with non-positive entries which have at least one quadratic zero-matrix. These matrices play the role of the stochastic matrices in the case of idempotent measures. For both classes of quadratic maps we find fixed points.

A Variational Problem on the Deformation Energy of an Elastic Medium
Panayotis Vyridis
The variational problem under consideration is a generalization to higher dimension (n > 2) of a free and constrained deformation of an elastic medium. In the case of the free body, the existence of minimizers of the corresponding energy functional is studied using standard direct methods. When the elastic medium is subjected to a one-parametric force field of the shell, the problem turns to the existence of bifurcation points, where the necessary conditions are also sufficient.

Breakup of a Closed Curve - Quasiperiodic Route to Chaos in a Vibroimpact System
V. A. Bazhenov, O. S. Pogorelova, T. G. Postnikova
At present, chaotic vibrations are one of the most interesting and explored subjects in nonlinear dynamics. In particular, the routes to chaos in non-smooth dynamical systems are of special interest to scientists. The quasiperiodic route to chaos in a nonlinear non-smooth discontinuous 2-DOF vibroimpact system is studied in this paper. In a narrow frequency range, different oscillatory regimes have succeeded each other many times under very small control parameter variation. There were periodic subharmonic regimes (chatters), quasiperiodic, and chaotic regimes. There were zones of transition from one regime to another, and zones of prechaotic and postchaotic motion. The hysteresis effects (jump phenomena) occurred for increasing and decreasing frequencies.
The chaoticity of the obtained regime has been confirmed by typical views of the Poincaré map and Fourier spectrum, by the positive value of the largest Lyapunov exponent, and by the fractal structure of the Poincaré map.

Stability Analysis of a Stochastic Viral Infection Model with General Infection Rate and General Perturbation Terms
Marouane Mahrouf, Khalid Hattaf, Noura Yousfi
In this paper, we propose a stochastic viral infection model with general incidence rate. In the proposed model, the stochastic perturbations are modeled by general functions. Further, the global existence and positivity of solutions are investigated. In addition, the stochastic stability of the model is established by using the direct Lyapunov method. Finally, an application to the hepatitis B virus (HBV) is given to validate our theoretical results.

Dynamical Behavior of a Delayed Predator-prey Model in Periodically Fluctuating Environments
A. Moussaoui, M. A. Aziz Alaoui
In this paper we develop a non-autonomous predator-prey system with time delay to study the influence of water level fluctuations on the interactions between fish species living in an artificial lake. We derive persistence and extinction conditions for the species. Using coincidence degree theory, we determine conditions under which the system has at least one periodic solution. Numerical simulations are presented to illustrate the theoretical results.

Analytic Solution of the Time Fractional Boussinesq Equation for Groundwater Flow in an Unconfined Aquifer
Ritu Agarwal, Mahaveer Prasad Yadav, Ravi P. Agarwal
An approximate analytical solution to the nonlinear time fractional Boussinesq equation is presented here. The derivative with respect to the time variable is replaced with the Caputo fractional derivative. The Natural transform method and the Adomian decomposition method are employed to obtain the solution. Some test problems are solved to show the accuracy of the proposed method. The behavior of the water table head is depicted graphically for various time values.
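The entry above treats the time-fractional Boussinesq equation analytically. For orientation only, here is a minimal numerical sketch of the classical (integer-order) Boussinesq equation for the water-table head, h_t = κ (h h_x)_x, with an explicit conservative finite-difference scheme; the coefficient, grid, and initial mound are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Classical Boussinesq equation for the water-table head h(x, t):
#   dh/dt = kappa * d/dx ( h * dh/dx )
kappa = 1.0
nx, dx, dt, steps = 101, 0.1, 1e-4, 500

x = np.linspace(0.0, (nx - 1) * dx, nx)
h = 1.0 + np.exp(-((x - x.mean()) ** 2))  # symmetric initial mound on a unit base

for _ in range(steps):
    hf = 0.5 * (h[1:] + h[:-1])               # head averaged to cell faces
    flux = kappa * hf * (h[1:] - h[:-1]) / dx # nonlinear diffusive flux
    h[1:-1] += dt * (flux[1:] - flux[:-1]) / dx  # interior update; ends held fixed

print(h.max())  # the mound spreads, so the peak drops below its initial value
```

The explicit step is stable here because dt * kappa * max(h) / dx**2 is well below 1/2; the time-fractional version in the paper requires the transform/decomposition machinery instead.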
Regional Weak and Strong Stabilization of Semilinear Systems with Decay Estimate
A. El Alami, A. Boutoulout
The aim of this paper is to develop the question of regional stabilization for infinite-dimensional semilinear parabolic systems on a spatial domain Ω, precisely the study of such systems on a subregion ω of Ω. We consider a decomposition of the state space via the spectral properties of the system. Then we apply this approach to the regional strong and weak stabilization problem using bounded feedback. Some applications and simulations are presented.

Monotone Dynamical Systems Theory for Epidemiological Models with Delay: A New Approach with Applications
Jaafar El Karkri, Khadija Niri
Monotone dynamical systems theory is an efficient and powerful tool for the study of the asymptotic behaviour of dynamical systems. However, it is rarely used in mathematical epidemiology. In this paper we present a comparison between two different approaches to convergence and stability for dynamical systems. We prove that convergence in the sense of monotone dynamical systems theory is equivalent to uniform convergence in the classical Lyapunov theory. Then we provide a stability analysis of an SIS epidemiological model based on the monotone approach. Numerical simulations illustrate our theoretical results.

Stability and Hopf Bifurcation of an Epidemic Model with Logistic Growth and Delay
El Mehdi Lotfi, Khalid Hattaf, Noura Yousfi
In this work, we propose and analyze a delayed epidemic model with logistic growth, in which the growth of susceptible individuals is governed by the logistic equation and the delay represents the latent period of the disease. Firstly, we prove that our model is mathematically and biologically well posed. In addition, the stability of equilibria and the existence of Hopf bifurcation are established. Moreover, several epidemic models existing in previous studies are extended and generalized.
Finally, some numerical simulations are given to illustrate our main results.

Stability and Bifurcation Analysis in a Two-Dimensional Neutral Differential Equation
A. Moussaid, Talibi Alaoui Hamad
This paper discusses the asymptotic stability and Hopf bifurcations occurring at the origin in certain two-dimensional neutral delay differential equations. We give necessary and sufficient conditions on the parameters to obtain the asymptotic stability and bifurcations. Global existence of periodic solutions is established using a global Hopf bifurcation result of Krawcewicz et al. Finally, some numerical simulations are carried out to support the analytic results. Our results are a generalization of M. Liu and X. Xu [1].

Interval Criteria for Oscillation of Damped Second-order Nonlinear Impulsive Differential Equation with Variable Delay
V. Muthulakshmi, R. Manjuram
This paper is devoted to the study of the oscillatory behavior of damped second order nonlinear impulsive differential equations with variable delay. The results obtained here extend and complement some known results in the literature.

Boundary Controllability of Fractional Order Nonlocal Semi-linear Neutral Evolution Systems with Impulsive Condition
Kamalendra Kumar, Rakesh Kumar
Sufficient conditions for the boundary controllability of fractional nonlocal semi-linear neutral evolution equations with impulsive conditions are demonstrated. To get the result, we employ a fixed point theorem and strongly continuous semigroup theory. An application is discussed to explain the theory.

Influence of Coupled Nonlocal Slit-Strip Conditions Involving Caputo Derivative in Fractional Boundary Value Problem
M. Subramanian, A.R. Vidhya Kumar, T. Nandha Gopal
By employing the coupled nonlocal slit-strips integral boundary conditions involving the Caputo derivative, we investigate the existence and uniqueness of a boundary value problem of fractional differential equations. The main result is illustrated with examples.
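Several of the entries above work with Caputo fractional derivatives. As a concrete numerical companion, here is a sketch of the widely used L1 discretization of the Caputo derivative of order 0 < α < 1; the choice of scheme is ours for illustration, and none of the listed papers prescribes it. For a linear function the scheme is exact, which gives a quick self-check.

```python
import math

def caputo_l1(f, t, alpha, n=2000):
    """Approximate the Caputo derivative of order 0 < alpha < 1 at time t
    with the standard L1 finite-difference scheme on n uniform steps."""
    dt = t / n
    fv = [f(k * dt) for k in range(n + 1)]
    c = dt ** (-alpha) / math.gamma(2.0 - alpha)
    total = 0.0
    for j in range(n):
        # L1 weights: (j+1)^(1-alpha) - j^(1-alpha)
        w = (j + 1) ** (1 - alpha) - j ** (1 - alpha)
        total += w * (fv[n - j] - fv[n - j - 1])
    return c * total

# For f(t) = t the exact Caputo derivative is t^(1-alpha) / Gamma(2-alpha);
# the L1 scheme reproduces piecewise-linear functions exactly.
alpha, t = 0.5, 1.0
approx = caputo_l1(lambda s: s, t, alpha)
exact = t ** (1 - alpha) / math.gamma(2 - alpha)
print(abs(approx - exact))
```

For smooth non-linear f the L1 scheme is first-order accurate in a sense controlled by α; the papers above instead establish their results analytically via fixed point theorems.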
Using the Parameter Optimization Method for Solving Differential Equations with Contour Conditions: The Nonlinear Euler-Bernoulli Beam
Adélcio C. Oliveira
The Parameter Optimization Method was presented with analytical and numerical examples. It was shown that it is a useful tool for solving differential equations with contour conditions. The numerical procedure is based only on a Runge-Kutta integration routine and on optimization techniques, both frequently used and with many developed routines; thus it was shown that this approach is accessible and practical. The method was used to solve a one-dimensional Nonlinear Schrödinger Equation and a nonlinear Euler-Bernoulli beam.

Delimitation of Hyperchaotic Regions in Parameter Planes of a Four-Dimensional Dynamical System
Juliane C. Ramos, Paulo C. Rech
We report results of a numerical investigation on parameter planes of a set of four autonomous first-order nonlinear ordinary differential equations. More specifically, here are reported ten numerically computed parameter plane diagrams for a five-parameter four-dimensional system, where the dynamical behavior of each point is characterized by using the related Lyapunov exponents spectrum. Each of these parameter plane diagrams indicates parameter values for which hyperchaos, chaos, quasiperiodicity, and periodicity may be found, i.e., each of these diagrams shows delimited regions for each of these dynamical behaviors.

Study on Stochastic Quasi-Linear Partial Differential Equations of Evolution
A. Anguraj, K. Ramkumar
In this article, the existence and uniqueness of a local mild solution of a stochastic counterpart of Tosio Kato's quasi-linear partial differential equation with an additive cylindrical Wiener process in a separable Hilbert space is established using the contraction mapping principle.

Bernoulli Mapping with Hole and a Saddle-Node Scenario of the Birth of the Hyperbolic Smale-Williams Attractor
Olga B. Isaeva, Igor R. Sataev
A one-dimensional Bernoulli mapping with a hole is suggested to describe the regularities of the appearance of a chaotic set under the saddle-node scenario of the birth of the Smale-Williams hyperbolic attractor. In such a mapping, a non-trivial chaotic set (with non-zero Hausdorff dimension) arises in the general case as a result of a cascade of period-adding bifurcations characterized by geometric scaling both in the phase space and in the parameter space. Numerical analysis of the behavior of models demonstrating the saddle-node scenario of the birth of a hyperbolic chaotic Smale-Williams attractor shows that these regularities are preserved in the case of multi-dimensional systems. Limits of applicability of the approximate 1D model are discussed.

The Thomas Attractor with and without Delay: Complex Dynamics to Amplitude Death
Brayden McDonald, S. Roy Choudhury
Bifurcations in the Thomas cyclic system leading from simple dynamics into chaotic regimes are considered. In particular, the existence of only one non-trivial fixed point of the system in parameter space implies that this point attractor may only be destabilized via a Hopf bifurcation as the single system parameter is varied. Saddle-node, transcritical and pitchfork bifurcations are precluded. The periodic orbit immediately following the Hopf bifurcation is constructed analytically by the method of multiple scales, and its stability is determined from the resulting normal form and verified by numerical simulations. The dynamically rich range of parameters past the Hopf bifurcation is next systematically explored. In particular, the period-doubling sequences there are found to be more complex than noted previously, and include period-three-like windows, for instance. As the system parameter is decreased below these period-incrementing bifurcations, various additional features of the subsequent crises are also carefully tracked.
Finally, we consider the effect of delay on the system, leading to the suppression of both the Hopf bifurcation as well as all of the subsequent complex dynamics. In modern terminology, this is an example of Amplitude Death, rather than Oscillation Death, as the complex system dynamics is quenched, with all of the variables settling to a fixed point of the original system.

Existence of Nonoscillatory Solutions for Mixed Neutral Fractional Differential Equation
Velu Muthulakshmi, Subramani Pavithra
In this paper, we establish some sufficient conditions for the existence of a nonoscillatory solution for a class of mixed neutral fractional differential equations with the Liouville fractional derivative of order α ≥ 0 on the half-axis. Our results generalize some of the existing results in the literature. Some examples are given to illustrate our results.

Universal Behavior of the Convergence to the Stationary State for a Tangent Bifurcation in the Logistic Map
Joelson D. V. Hermes, Flávio Heleno Graciano, Edson D. Leonel
The scaling formalism is applied to understand and describe the evolution towards the equilibrium at and near a tangent bifurcation in the logistic map. At the bifurcation, the convergence to the steady state is described by a homogeneous function leading to a set of critical exponents. Near the bifurcation the convergence is rather exponential, with a relaxation time given by a power law. We use two different approaches to obtain the critical exponents: (1) a phenomenological investigation based on three scaling hypotheses leading to a scaling law relating the three critical exponents, and (2) an approximation that transforms the recurrence equation into a differential equation which is solved under appropriate conditions, giving the scaling exponents analytically. The numerical results give support for the theoretical approach.
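The power-law slowing down near a tangent (saddle-node) bifurcation described in the last abstract can be illustrated with the standard normal-form map x → x + x² + ε rather than the logistic map itself; this is a stand-in sketch, not the authors' scaling analysis. The number of iterations needed to traverse the narrow channel grows like ε^(-1/2):

```python
def passage_time(eps, x0=-0.5, x_exit=0.5):
    """Iterations needed to traverse the narrow channel of the
    saddle-node normal-form map x -> x + x**2 + eps (eps > 0)."""
    x, n = x0, 0
    while x < x_exit:
        x = x + x * x + eps
        n += 1
    return n

# Passage time scales like pi / sqrt(eps) as eps -> 0+
n1 = passage_time(1e-4)
n2 = passage_time(1e-6)
print(n1, n2, n2 / n1)  # ratio close to 10 = sqrt(1e-4 / 1e-6)
```

The same -1/2 exponent governs the laminar phases of type-I intermittency just below the tangent bifurcation of the period-3 window of the logistic map.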
Steady-State and Dynamic Characteristics of Water-lubricated Rubber Bearings under Two Sets of Reynolds Boundary Conditions
Gang Liu, Ming Li
The lubrication characteristics of bearings are significantly influenced by boundary conditions. The double Reynolds boundary conditions (DRBCs) are introduced to analyze the hydrodynamic lubrication characteristics of water-lubricated rubber bearings for turbulent flows, considering the elasticity of the rubber liner. Differences in the steady-state and dynamic characteristics arising from the DRBCs and the Reynolds boundary conditions (RBCs) are discussed based on the finite difference method. The results show that the water-film reformation boundary is significantly different between the two sets of conditions; however, the load capacity of the bearing is only slightly different. The attitude angle and friction are greater for DRBCs than for RBCs. In the horizontal direction, the direct stiffness and direct damping coefficients are larger for DRBCs than for RBCs, but in the vertical direction the opposite result holds.

Effect of Fractional-Order on the Dynamics of Two Mutually Coupled van der Pol Oscillators: Hubs, Multistability and its Control
Ngo Mouelas Adèle, Kammogne Soup Tewa Alain, Kengne Romanic, Fotsin Hilaire Bertrand, Essimbi Zobo Bernard
This paper presents a novel approach to analyze the dynamic effect of the fractional-order derivative on the two mutually coupled van der Pol oscillators. The stability analysis is presented by two complementary phase diagrams: the isospike diagrams and the two Lyapunov exponent spectra. These diagrams reveal precisely the hubs, spiral bifurcations and chaos when the derivative order is fixed at q = 0.95. In addition, when the fractional order is set as a control parameter, various methods for detecting chaos, including bifurcation diagrams and the spectrum of the largest Lyapunov exponent, are exploited to establish the connection between the system parameters and the various complicated dynamics.
A transition was also observed between a desynchronized state and a multistability situation. These diagrams displayed the coexistence of four disconnected attractors (two symmetric). We study the basins of attraction of the system in the multistability regime, which thereby reveal the coexistence of attractors in the system when the fractional-order derivative is taken as a function of initial conditions. Based on parametric control, we have controlled this striking phenomenon in the system. Finally, the hardware circuit is implemented and the results are found to be in good agreement with the numerical investigations.

Complex Dynamics of an Exploited Prey-Predator Model with Nonlinear Prey Refuge
Md. Manarul Haque, Sahabuddin Sarwardi
In this paper we study the renewable resources of a prey-predator system with prey refuge and nonlinear harvesting. Taxation is imposed to prevent over-exploitation and to maintain ecological balance. The steady states of the system are determined and various dynamical behaviors are discussed in its steady states under certain parametric conditions. The boundedness, feasibility of interior equilibria, and bionomic equilibrium have been studied. The main observation is that the taxation plays an important role in regulating the dynamics of the present system. Moreover, variation of the taxation changes the system from periodic behaviors to chaos. Some numerical illustrations are given in order to support our analytical and theoretical findings.

The Dynamical Behavior of a Two Degrees of Freedom Oblique Impact System
Xiaowei Tang, Xilin Fu, Xiaohui Sun
The oblique impact phenomenon is quite common in practical engineering. In this paper, by using the theory of discontinuous dynamical systems, we study the complex dynamical behaviors of a two degrees of freedom oblique impact system. We can see that the dynamics of oblique impact is more complex than that of direct impact.
The occurrence or disappearance conditions of sticking motion and grazing motion on the separation boundaries are given in Section 3. The conditions here are necessary and sufficient, which generates better results than those obtained with only sufficient conditions. The results appropriately interpret the physical phenomenon of this oblique impact system, hence validating our conclusions. As a supplement, we also give the analytic conditions for the existence of periodic motions. Numerical simulations for sticking motion and grazing motion are presented at last.

Unsteady Magnetohydrodynamic Boundary Layer Flow towards a Heated Porous Stretching Surface with Thermal Radiation and Heat Source/Sink Effects
Santosh Chaudhary, Susheela Chaudhary, Sawai Singh
A mathematical model of unsteady boundary layer flow and heat transfer is explored for analyzing the influence of thermal radiation on an incompressible viscous electrically conducting fluid over a continuous stretching surface embedded in a porous medium in the presence of a heat source/sink. The scope of the influencing parameters describing the phenomenon is determined, and the governing time-dependent boundary layer equations are transformed to ordinary differential equations by using an appropriate similarity transformation. Numerical computation of the problem was carried out by a shooting iteration technique together with the Runge-Kutta fourth order integration scheme. Effects of the unsteadiness parameter, permeability parameter, magnetic parameter, thermal radiation parameter, Prandtl number and heat source/sink parameter on the velocity and temperature profiles are computed and illustrated graphically, whereas the local skin friction coefficient and local Nusselt number are represented numerically through tables. In the non-magnetic flow condition the result is found in concordance with earlier investigations.

Fluid Flow and Solute Transfer in a Permeable Tube with Influence of Slip Velocity
M. Varunkumar, P. Muthu
In this paper, the influence of slip velocity on the fluid flow and solute transfer in a tube with a permeable boundary is studied as a mathematical model for blood flow in glomerular capillaries. The viscous incompressible fluid flow across the permeable tube wall, as a result of differences in both hydrostatic and osmotic pressure, is considered. The solutions of the differential equations governing the fluid flow and solute transfer are obtained using analytical and Crank-Nicolson type numerical methods. It is observed that the effect of slip on the hydrostatic and osmotic pressures, velocity profiles, concentration profile, solute mass flux and total solute clearance is significant, and the results are presented graphically.

Noise-induced Intermittent Oscillation Death in a Synergetic Model
R. Jaimes-Reátegui, D. A. Magallón-García, A. Gallegos, G. Huerta-Cuellar, J. H. García-López, A. N. Pisarchik
We study noise-induced intermittency in a synergetic model of two coupled oscillators with asymmetric nonlinear coupling. This model was previously used to simulate visual perception of ambiguous images. We show that additive noise induces preference for one of the coexisting unstable steady states. When the noise intensity exceeds a certain threshold value, the oscillations of one of the coupled subsystems are interrupted during some time intervals, resulting in intermittent oscillation death, while the other subsystem exhibits noisy oscillations in the vicinity of an unstable fixed point.

A Mathematical Model for Vineyard Replacement with Nonlinear Binary Control Optimization
Aníbal Galindro, Adelaide Cerveira, Delfim F. M. Torres, João Matias, Ana Marta-Costa
Vineyard replacement is a common practice in every wine-growing farm, since grapevine production decays over time and requires a new vine to ensure the business's sustainability. In this paper, we formulate a simple discrete model that captures the vineyard's main dynamics, such as production values and grape quality.
Then, by applying binary non-linear programming methods to find the vineyard replacement trigger, we seek the optimal solution concerning different governmental subsidies to the target producer.

Preventing Computer Virus Prevalence using Epidemiological Modeling and Optimal Control
João N. C. Gonçalves, Helena Sofia Rodrigues, M. Teresa T. Monteiro
Computer viruses are a serious threat to the general society, due to their implications for private life and corporate systems. This paper begins by briefly illustrating the dynamics of computer viruses within a network system, by taking advantage of the EpiModel R package and using a SIR (Susceptible-Infected-Recovered) epidemic model. However, since devices are not constantly immune to cyberattacks, a SIRS model with an optimal control application is proposed to minimize the levels of infections caused by malicious objects. Additionally, real numerical data related to the number of reported cybercrimes in Japan from 2012 to 2017 are considered. The existence and uniqueness of an optimal control for the proposed control problem are proved. Under proper investment costs, numerical simulations in Matlab show the effectiveness of the proposed control strategy in increasing the rate of immunity and decreasing the chances of re-susceptibility to cyberattacks.

An Improved Approach for Image Segmentation and Three-Dimensional Reconstruction
K. Bellaj, S. Boujena, E. EL Guarmah
The main contribution of this paper consists of introducing a novel model of three-dimensional reconstruction from multiple two-dimensional images. Our proposed model presents the following main advantages: first, we improve the original region fitting energy in the general region-based level set method by an anisotropic diffusion to evolve the contour. Second, we use the Dijkstra algorithm to improve and allow simple and free initializations.
Third, we adopt the domain decomposition method to reduce the computational cost for high-resolution images. Finally, in order to prove the efficiency and accuracy of our proposed method, experiments were performed on synthetic and real images. A Generalized Reaction-Diffusion Epidemic Model with Time Delay El Mehdi Lotfi, Radoune Yafia, Noura Yousfi In this paper, we propose a generalized reaction-diffusion epidemic model with time delay. The proposed model is governed by partial differential equations (PDEs). The global existence, positivity and boundedness of solutions are obtained. By analyzing the corresponding characteristic equation and constructing appropriate Lyapunov functionals, the local and global stability of homogeneous steady states is investigated. Finally, an application of our analytical results is given. Study of a Predator-Prey System with Monod-Haldane Functional Response and Harvesting N.H. Gazi, M.R. Mandal, S. Sarwardi In the present work we study a predator-prey harvesting model with Monod-Haldane functional response. The mathematical features of the model have been analyzed in terms of stability, bifurcations and harvesting. Threshold values for some parameters indicating the feasibility and stability conditions of all equilibria are determined. The ranges of significant parameters under which the system admits different types of bifurcations are investigated. Optimal harvesting criteria and the net economic revenue are analyzed. Numerical illustrations are finally performed in order to confirm the analytical findings. Existence and Stability Results of Impulsive Stochastic Partial Neutral Functional Differential Equations with Infinite Delays and Poisson Jumps A. Anguraj, K. Ravikumar In this paper, we focus on results on the existence, uniqueness and stability of mild solutions of impulsive stochastic partial neutral functional differential equations with infinite delays and Poisson jumps.
The results are obtained by using the method of successive approximation and Bihari's inequality. Characterizations of Non-Associative Ordered Semigroups by Their Intuitionistic Fuzzy Bi-Ideals Nasreen Kausar, Meshari Alesemi, Salahuddin, Mohammad Munir The aim of this paper is to investigate the characterizations of different classes of non-associative and non-commutative ordered semigroups in terms of intuitionistic fuzzy left (right, bi-, generalized bi-, (1,2)-) ideals. Existence Results of Fractional Neutral Integrodifferential Equations with Deviating Arguments B. Kamalapriya, K. Balachandran, N. Annapoorani In this paper we prove the existence of solutions of fractional neutral integrodifferential equations with deviating arguments by using resolvent operators and a fixed point theorem. Examples are discussed to illustrate the theory. Mathematical Contribution of Analytical Solution in Biological Control using Asymptotic Approach A. Eswari The theoretical model of populations uncolonised by the antagonist, colonised by the antagonist, and infected by the pathogen in biological control has been discussed. In this article, we present the non-steady state solutions for biological control. Simple analytical expressions for the dimensionless concentrations are derived for the non-linear boundary value problem using the variational iteration method. Numerical solutions are obtained and the results are discussed graphically. We also compared our analytical results with numerical simulations, and a satisfactory agreement is noted. On Different Approaches for Integrals of Physical Dynamical Systems Jasvinder Singh Virdi For the driven time-dependent harmonic oscillator, a detailed and systematic study of several methods for the construction of integrals is carried out. The central feature of the present discussion is the establishment of integrals for the dynamical system.
In spite of their different procedural details, all the approaches lead to the same invariant for the given classical system. Advantages and limitations of the different methods are briefly highlighted. A Mathematical Study of a Two Species Eco-Epidemiological Model with Different Predation Principles Aktar Saikh, Nurul Huda Gazi This paper formulates and analyzes a predator-prey model with disease in the prey. Mathematical analysis of the model system concerns the existence, uniqueness and uniform boundedness of solutions in the positive octant. The threshold condition for an epidemic and the conditions for persistence are obtained. Moreover, the system is analyzed for local and global stability around several equilibria. Hopf bifurcation, its nature and the stability of the bifurcating limit cycle are studied around the disease-free equilibrium point. Numerical simulations are performed to justify the analytical findings. Eco-epidemiological significance and implications of the concluded results are discussed as well. Existence, Uniqueness and Stability of Impulsive Stochastic Partial Neutral Functional Differential Equations with Infinite Delays Driven by a Fractional Brownian Motion A. Anguraj, K. Ramkumar, E. M. Elsayed This article presents results on the existence, uniqueness and stability of mild solutions of impulsive stochastic partial neutral functional differential equations driven by a fractional Brownian motion. The results are obtained by using the method of successive approximation and Bihari's inequality. Homoclinic Solutions in Bazykin's Predator-Prey Model Bashir Al-Hdaibat In this paper we derive an explicit second-order approximation of the homoclinic solutions in Bazykin's predator-prey model. The analytic solutions are compared with those obtained by numerical continuation.
Attractiveness and Exponential p-Stability of Neutral Stochastic Functional Integrodifferential Equations Driven by Wiener Process and fBm with Impulsive Effects Mahamat Hassan Mahamat Hamit, Fulbert Kuessi Allognissode, Mohamed Salem Mohamed, Louk-Man Issaka, Mamadou Abdoul Diop In this work, we consider a class of neutral stochastic integro-differential equations driven by a Wiener process and fractional Brownian motion with impulsive effects. This paper deals with the global attractiveness and quasi-invariant sets for neutral stochastic integro-differential equations driven by a Wiener process and fractional Brownian motion with impulsive effects in Hilbert spaces. We use new integral inequalities combined with theories of resolvent operators to establish a set of sufficient conditions for the exponential p-stability of the mild solution of the considered equations. An example is presented to demonstrate the obtained theory. Stability Approach of a Fractional-Delayed Duffing Oscillator Yusry O. El-Dib In this work, a formulation for the approximate analytical solution of a fractional-delayed damped Duffing oscillator is developed. The fractional derivative is established using the Riemann-Liouville definition. In this scheme, the solution uses a homotopy perturbation. A transcendental frequency equation is established. Finally, an analytic solution to the complicated algebraic frequency equation is obtained. Stability conditions are formulated to maintain the structure of the oscillatory solution. The case of the un-delayed damped Duffing equation is investigated through the modified homotopy technique, which is assumed to be the successor to obtain the solution. Spatial Patterns of an SIS Epidemic Model with Diffusion Muhammad A.
Yau, Nurul Huda Gazi We consider an SIS spatial epidemic model with nonlinear incidence rate and diffusion, which consists of susceptible S and infective I individual populations that interact randomly in their physical environment. We use pattern formation to explain the spread and control of the epidemic over a course of time. Turing instability conditions are established and analysed for the model to exhibit spatial patterns. We find the exact Turing space in the parameter regime for these conditions to hold. An implicit pseudo-spectral method is used to numerically approximate the system, and the patterns formed reveal that the susceptible and infected populations behave in the same way. In some examples these populations are in isolation from each other. This happens because the susceptible individuals diffuse or move away from the infected individuals to avoid contact and the possibility of getting infected with the disease. Rigorous numerical experimentation reveals that the model has rich dynamics. We find that whenever the transmission rate β is less than the treatment rate r, there is no outbreak, but for β ≥ r there is a possibility of an outbreak. The results obtained extend well the findings of pattern formation in epidemic models and may have direct implications for the study of disease spread and control, and perhaps the mechanistic impact of public health interventions on epidemics. Incorporating Prey Refuge in a Prey-Predator Model with Beddington-DeAngelis Type Functional Response: A Comparative Study on Intra-Specific Competition Hafizul Molla, Md. Sabiar Rahman, Sahabuddin Sarwardi The present study deals with a prey-predator system with prey refuge depending on both species and with the Beddington-DeAngelis response function. We propose a mathematical model for predator-prey interactions, allowing prey refuge in the absence of intra-specific competition and in the presence of intra-specific competition among the predators.
We have analyzed the models in terms of boundedness, persistence, existence of equilibria and their stability, and Hopf bifurcation. The existence of the paradox of enrichment is examined well in both cases. The analytical findings of this study are substantially validated by sufficient numerical simulations. The ecological implications of the obtained results are discussed as well. Fractional Differential Equations Involving Hadamard Fractional Derivatives with Nonlocal Multi-point Boundary Conditions Muthaiah Subramanian, Murugesan Manigandan, Thangaraj Nandha Gopal In this paper, we investigate the existence and uniqueness of solutions for Hadamard fractional boundary value problems with nonlocal multi-point boundary conditions. By using the Leray-Schauder nonlinear alternative, Leray-Schauder degree theory, the Krasnoselskii fixed point theorem, the Schaefer fixed point theorem, the Banach fixed point theorem and nonlinear contractions, the existence and uniqueness of solutions are obtained. As an application, two examples are given to demonstrate our results. Evolutionary Dynamics of a Single-Species Population Model with Multiple Delays in a Polluted Environment Ashok Mondal, A. K. Pal, G. P. Samanta In this work, the evolutionary dynamical behaviour of a single-species population model in a polluted environment has been analyzed. This model system describes the effect of a toxicant on a single-species population. Two discrete time delays have been incorporated for proper description. Important mathematical characteristics of the proposed model, such as positivity, boundedness, stability and Hopf bifurcation for all possible combinations of both delays at the interior equilibrium point of the model system, have been discussed. It is observed that an increased amount of delay may lead to a change of the stable behaviour of stationary points through the creation of limit cycles and higher periodic oscillations.
Furthermore, it is reported that Hopf bifurcations may also occur around stationary points for the corresponding non-delayed system. Various numerical simulations are performed to validate the analytical findings. Qualitative Analysis of a Modified Leslie-Gower Model with Additive Allee Effect and Gestation Delay S. Vinoth, R. Sivasamy, K. Sathiyanathan This paper carries out a qualitative analysis of a modified Leslie-Gower prey-predator model where the consumption rate of prey follows a Beddington-DeAngelis functional response, with an Allee effect on the prey population. Moreover, a time lag (τ) is introduced to account for the gestation period of predation. The permanence analysis of the proposed system is investigated. Then we study the local stability of the non-delayed model at all possible equilibrium points, and it is demonstrated that the given model experiences Hopf bifurcation about the interior equilibrium point with respect to the delay τ. Thereafter, the stability and direction of the Hopf bifurcation are formulated through normal form and center manifold theorems. The derived criteria are justified with the help of numerical simulations. Modeling Response Time Distributions with Generalized Beta Prime M. Dashti Moghaddam, Jiong Liu, John G. Holden, R. A. Serota We use the Generalized Beta Prime distribution, also known as GB2, for fitting response time distributions. This distribution, characterized by one scale and three shape parameters, is incredibly flexible in that it can mimic the behavior of many other distributions. GB2 exhibits power-law behavior at both the front and tail ends and is a steady-state distribution of a simple stochastic differential equation. We apply GB2 in contrast studies between two distinct groups - in this case children with dyslexia and a control group - and show that it provides superior fitting.
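The GB2 density mentioned above, with one scale parameter b and three shape parameters a, p, q, can be written down directly from the standard definition. The sketch below is a generic illustration; the parameter values are arbitrary, not the fitted values from the response-time study:

```python
import math

# Standard GB2 (Generalized Beta Prime) density for x > 0:
#   f(x) = a * x^(a*p - 1) / ( b^(a*p) * B(p, q) * (1 + (x/b)^a)^(p + q) )
# Parameter values below are illustrative only.
def gb2_pdf(x, a, b, p, q):
    if x <= 0:
        return 0.0
    log_beta = math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)
    log_f = (math.log(a) + (a * p - 1) * math.log(x)
             - a * p * math.log(b) - log_beta
             - (p + q) * math.log(1.0 + (x / b) ** a))
    return math.exp(log_f)

# Sanity check: a Riemann sum of the density over a wide grid should be ~1.
step = 0.01
total = sum(gb2_pdf(k * step, a=2.0, b=1.0, p=1.5, q=2.5)
            for k in range(1, 20000)) * step
```

Working in log space with `lgamma` avoids overflow in the Beta function for large shape parameters; the front and tail power laws (exponents a*p - 1 and -(a*q + 1)) are visible directly in the log-density terms.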
We compare aggregate response time distributions of the two groups for scale and shape differences (including several scale-independent measures of variability, such as the Hoover index), which may in turn reflect differences in cognitive dynamics. In this approach, the response time distribution of an individual can be considered as a random variate of that individual's group distribution. Uniqueness and Non-Uniqueness of Signed Measure-Valued Solutions to the Continuity Equation Paolo Bonicatto We consider the continuity equation $\partial_t m_t + \mathrm{div}(b\, m_t) = 0$, where $\{m_t\}_{t\in\mathbb{R}}$ is a measurable family of (possibly signed) Borel measures on $\mathbb{R}^d$ and $b\colon \mathbb{R}\times\mathbb{R}^d \to \mathbb{R}^d$ is a bounded Borel vector field (and the equation is understood in the sense of distributions). We discuss some uniqueness and non-uniqueness results for this equation: in particular, we report on some counterexamples in which uniqueness of the flow of the vector field holds but one can construct non-trivial signed measure-valued solutions to the continuity equation with zero initial data. This is based on a joint work with N.A. Gusev [1]. Integrability and Jacobi Last Multipliers of Cubic Liénard Differential Equations with Quadratic Damping Maria V. Demina We solve completely the problem of Liouvillian integrability for cubic Liénard differential equations with quadratic damping. Our main tool is the method of Puiseux series. We find necessary and sufficient conditions for the equations under consideration to have Jacobi last multipliers of a special form. It turns out that some particular sub-families, while Liouvillian non-integrable, possess Jacobi last multipliers. The Jacobi last multipliers give rise to non-standard Lagrangians, which is an interesting property of these dynamical systems. In addition, we prove that cubic Liénard differential equations with quadratic damping do not have algebraic limit cycles. Lax Equation on the Uhlenbeck Manifold Ya.
Dymarskii We give an analytic and topological description of the Uhlenbeck manifold, that is, a manifold of triples (a symmetric operator, an eigenvector, an eigenvalue), for the finite-dimensional symmetric matrices and the family of stationary periodic Schrödinger operators. Then, we describe a lifting of Lax vector fields to these manifolds. Periodic Behavior of Maps Obtained by Small Perturbations of Smooth Skew Products L.S. Efremova We study $C^1$-smooth maps obtained by small perturbations of $C^1$-smooth skew products of maps of an interval with $\Omega$-stable quotients, and present results on the coexistence of periods of periodic orbits for the maps under consideration. In particular, $C^1$-smooth $\Omega$-stable maps of an interval do not contain maps of type $2^\infty$, i.e. maps that have the unbounded set of (the least) periods of periodic orbits $\tau$ for $\tau = \{2^i\}_{i\geq 0}$. We prove here that, analogously to $C^1$-smooth skew products of maps of an interval with $\Omega$-stable quotients, there exist maps under consideration with $\tau = \{2^i\}_{i\geq 0}$. On the Extreme Points of the Unit Ball in the Space of Solenoidal Vector Measures on the Plane Nikolay A. Gusev We consider the space of finite divergence-free Borel vector measures on $\mathbb{R}^d$, endowed with the total variation norm. For $d = 2$ we present a characterization of the extreme points of the unit ball in this space. This allows one to decompose (for $d = 2$) any finite divergence-free vector measure into measures induced by closed Lipschitz curves. The results are based on a joint work with P. Bonicatto. Dynamical System Model with the use of Liouville Equation for Empirical Distribution Function Densities Alexey A. Kislitsyn, Yurii N. Orlov The difference approximation of the one-dimensional Liouville equation for the sample distribution function density of a non-stationary time series estimated by the histogram is considered. A scheme with semi-group property conservation is constructed for the evolution model of this sample distribution function density.
We investigate the problem of constructing an appropriate Liouville equation for given initial and final distributions. We prove the necessary and sufficient condition for such a representation, which is strong positivity of the initial density distribution in the inner class intervals. An algorithm for determining the corresponding Liouville velocity is constructed and its mechanical-statistical meaning is shown. The dynamical system associated with this Liouville equation is considered. We interpret the Liouville statistical velocity as a corresponding velocity of the dynamical system, according to the representation of statistical mechanics. We show that this interpretation leads to a monotonic discrete dynamical system with a stationary point corresponding to equality of the initial and final distribution functions. Monotone Maps on Dendrites E. N. Makhrova Let X be a dendrite and f : X → X be a monotone map. In this article the relationship between the structure of a dendrite X and the dynamical properties of f is studied. Namely, the relation between the structure of X and the structure of the sets of periodic points of f, non-wandering points of f, and ω-limit sets of trajectories is established. The structure of dendrites on which there exist monotone pointwise chain recurrent maps is characterized. The structure of dendrites on which there exist monotone maps with homoclinic points is described. Ekeland's Variational Principle for Functions Unbounded from Below R. Sengupta, S. Zhukovskiy A modification of the Ekeland variational principle for functions unbounded from below is obtained. For a wide class of differentiable functions, not necessarily bounded below, it is shown that there exists a minimizing sequence satisfying the first-order necessary conditions up to any desired approximation. Complex Geometry of Universal Teichmüller Space Armen Sergeev We discuss complex geometric properties of the universal Teichmüller space T.
It is a complex Banach manifold whose name is motivated by the fact that all classical Teichmüller spaces T(G), associated with compact Riemann surfaces, are contained in T as complex subvarieties. Another important subset of T is the space S of orientation-preserving diffeomorphisms of S1 considered modulo Möbius transforms. It is a Kähler Fréchet manifold. Our interest in T was initially motivated by its relation to string theory, which we have studied earlier in a series of papers. One-Particle Approximation as a Simple Playground for Irreversible Quantum Evolution A.E. Teretenkov Both quantum information features and irreversible quantum evolution of the models arising in physical systems in the one-particle approximation are discussed. It is shown that the calculation of the reduced density matrix and entanglement analysis are considerably simplified in this case. The irreversible quantum evolution described by Gorini-Kossakowski-Sudarshan-Lindblad equations in the one-particle approximation can be defined by a solution of a Schrödinger equation with a dissipative generator. This simplifies the solution of the initial equation on the one hand and gives a physical interpretation of such a Schrödinger equation with a non-Hermitian Hamiltonian on the other. Non-Autonomous Dynamics and Product Formula Approximation of the Solution Operator Valentin A. Zagrebnov The paper is devoted to non-autonomous dynamics generated by a positive self-adjoint operator A and a family of non-negative self-adjoint operators $\{B(t)\}_{t\geq 0}$ defined in a separable Hilbert space. It is shown that the solution operator $\{U(t, s)\}_{0\leq s\leq t}$ of the evolution equation can be approximated in the operator norm topology by a product formula that involves A and B(t). We also establish the rate of convergence of the product formula to the solution operator. These results are proved using the evolution semigroup approach to non-autonomous dynamics.
Weak Compactness Problem for Sets of Bounded Radon Measures on Various Topological Spaces Valeriy K. Zakharov, Timofey V. Rodionov The paper presents a weak compactness criterion for a subset M of the set RM_b(T, G) of all positive bounded Radon measures on a Hausdorff topological space (T, G), similar to the Prokhorov criterion for a complete separable metric space. Since for a general topological space the classical space C_b(T, G) of all bounded continuous functions on T can be trivial, and so does not separate points and closed sets, we consider instead of C_b(T, G)-weak compactness the S(T, G)-weak compactness with respect to the new uniformly closed linear space S(T, G) of all (symmetrizable) metasemicontinuous functions. Evaluation of Chaotic Properties of the CBC Mode of Encryption Embedded with the RC5 Block Cipher Algorithm Abdessalem Abidi, Christophe Guyeux, Mohsen Machhout It has recently been proven that, under ad hoc conditions, the Cipher Block Chaining (CBC) mode of operation can behave chaotically according to the mathematical definition of Devaney, on the infinite discrete product set of finite memory coupled with (finite) media of unbounded size. This occurs when the chosen block cipher function satisfies some properties related to a well-defined associated graph. Rudimentary examples taken from so-called transposition cipher methods have formerly been proposed as illustrative examples. In this paper, the same canvas is followed in examining the conditions under which the CBC mode behaves chaotically, but the encryption function is now the Rivest Cipher 5 (RC5), a very famous symmetric key block cipher algorithm. Therefore, our goal is to prove the chaotic behavior of the RC5-CBC encryption algorithm according to Devaney's definition.
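The CBC chaining that the abstract above studies feeds each ciphertext block back into the next encryption: c_i = E(p_i XOR c_{i-1}), seeded by an IV. The sketch below illustrates only this chaining structure; a toy 4-byte XOR "block cipher" stands in for RC5 (an assumption for brevity, with none of RC5's cryptographic properties):

```python
# CBC chaining sketch. The toy block cipher (XOR with a fixed key) is a
# placeholder for RC5; it only serves to show the chaining c_i = E(p_i ^ c_{i-1}).
KEY = bytes([0x13, 0x37, 0xC0, 0xDE])   # arbitrary toy key

def toy_encrypt_block(block):
    return bytes(b ^ k for b, k in zip(block, KEY))

toy_decrypt_block = toy_encrypt_block   # XOR is its own inverse

def cbc_encrypt(plaintext, iv):
    prev, out = iv, b""
    for i in range(0, len(plaintext), 4):
        xored = bytes(p ^ c for p, c in zip(plaintext[i:i + 4], prev))
        prev = toy_encrypt_block(xored)  # this block also chains forward
        out += prev
    return out

def cbc_decrypt(ciphertext, iv):
    prev, out = iv, b""
    for i in range(0, len(ciphertext), 4):
        block = ciphertext[i:i + 4]
        out += bytes(x ^ c for x, c in zip(toy_decrypt_block(block), prev))
        prev = block
    return out
```

Because every block depends on all preceding ciphertext, flipping one plaintext byte perturbs the remainder of the ciphertext, which is the sensitivity property the paper examines for RC5-CBC.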
Then, this unpredictability is checked in hardware through sensitivity tests, which allowed us to validate that RC5-CBC exhibits a high degree of randomness, key and plaintext sensitivity, in addition to the so-called avalanche effect. Vibration and Stability Analysis Comparison for a Nanoshell and a Piezoelectric Nanoshell Subjected to Electrostatic Excitation Sayyid H. Hashemi Kachapi In the current study, a comparison of the vibration and stability analysis of two nanostructures, i.e. a nanoshell (NS) and a piezoelectric nanoshell (PENS), subjected to electrostatic excitation and a Visco-Pasternak medium is carried out using the Gurtin-Murdoch surface/interface (S/I) theory. For this analysis, Hamilton's principle, the assumed mode method combined with Lagrange-Euler equations, and also the complex averaging method combined with arclength continuation are used. It can be seen that by changing the surface/interface densities and, as a result, increasing or decreasing the system stiffness, the natural frequency can be less or greater than in the state without taking into account the S/I effects. In both nanostructures (NS and PENS), considering the surface/interface effects increases the nonlinear behaviour compared with the case without S/I effects. Iterative Method for Non-Stationary Mixed Variational Inequalities We consider a non-stationary mixed variational inequality problem involving an integrable mapping and a convex function, where only approximation sequences are known instead of exact values of the cost mapping and function, and the feasible set. We apply a descent method and partial penalization and prove that convergence is attained without concordance of penalty, accuracy, and approximation parameters under coercivity type conditions. DNC Volume 10, Issue 1 Qualitative Analysis for a Phytoplankton-Zooplankton Model with Allee Effect and Holling Type II Response M.S. Surendar, M.
Sambath This paper investigates a predator-prey system with Allee effect and Holling type-II functional response, consisting of phytoplankton as prey and zooplankton as predator. For the non-spatial system, we analyze the stability and the existence of Hopf bifurcation at the coexistence equilibrium, and the stability of the bifurcating periodic solutions is obtained. Moreover, the diffusion-driven instability and Hopf bifurcation of the corresponding spatial system with zero-flux boundary conditions are studied, and the Turing instability region with respect to the parameters is established. Numerical simulations are provided to illustrate the theoretical results. Dynamics of an HIV Pathogenesis Model with Absorption and Saturation Incidence Vinoth Sivakumar, Jayakumar Thippan, Prasantha Bharathi Dhandapani In this paper, the dynamics of an HIV infection model with absorption and saturation incidence are proposed and analyzed. Further, we introduce a time delay into the model, which describes the time between the infection of cells and the excretion of viral particles. This model is used to study the existence, characteristic equations, and stability of the infected and disease-free steady states. Numerical simulations are provided to illustrate the theoretical results. Uniqueness and Decay of Weak Solutions to Phase-Lock Equations Jishan Fan, Gen Nakamura, and Mei-Qin Zhan In this paper, we prove the uniqueness of weak solutions $(f, Q)$ to the phase-lock equations with $f_0 \in L^2$ and $Q_0 \in L^3$ when the space dimension $d = 3$. We also prove the uniqueness of weak solutions $(f, a)$ to the Ginzburg-Landau equations with $(f_0, a_0) \in L^p \times L^p$ and $1 < p < 2$ when $d = 1$. We will also present a result on the decay of $Q$ as time $t \to \infty$. Evolutionary Dynamics of Zero-Sum Games with Degenerate Payoff Matrix and Bisexual Population N.N. Ganikhodjaev, U.U. Jamilov, M. Ladra In this paper we consider quadratic stochastic operators describing the evolution of a bisexual population.
We establish a correspondence between such operators and evolutionary games; namely, we demonstrate that a Volterra quadratic stochastic operator with degenerate payoff matrix is non-ergodic and the corresponding evolutionary game is the rock-paper-scissors game. To prove these statements we study the asymptotic behavior of trajectories of the Volterra quadratic stochastic operators with non-hyperbolic fixed points. Dynamics of a Stage-Structured Prey and Predator Model with Linear Harvesting of Mature Prey and Predator In this paper we have studied the dynamical behavior of a stage-structured prey and predator model with linear harvesting of both mature prey and predator. Optimal harvesting policy, positivity, boundedness and Hopf bifurcation have been studied. Finally, some graphical and numerical simulations are given in order to validate our analytical and theoretical findings. Numerical Solution of Hybrid Fuzzy Mixed Delay Differential Equations by the Fourth Order Runge-Kutta Method D. Prasantha Bharathi, T. Jayakumar, S. Vinoth Hybrid systems have been used to study many engineering problems over a period of time. Here, we add another parameter, a delay, so that hybrid systems can model biological problems in the future. In this research article we extend the system of hybrid fuzzy differential equations to hybrid fuzzy mixed delay differential equations. The Runge-Kutta method of order four has been used to solve the problem by converting the mixed delay differential equation into multiple retarded delay differential equations. Finally, a numerical example is presented and the results are analyzed. Distributed Delay Effects on Coupled van der Pol Oscillators, and a Chaotic van der Pol-Rayleigh System Ryan Roopnarain, S. Roy Choudhury Distributed delays modeled by 'weak generic kernels' are introduced in two well-known coupled van der Pol systems, as well as a chaotic van der Pol-Rayleigh system with parametric forcing.
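The fourth-order Runge-Kutta step that underlies schemes like the one in the hybrid fuzzy delay abstract can be sketched generically. This is the classical RK4 core applied to a plain scalar ODE for illustration, not the authors' hybrid fuzzy mixed-delay implementation:

```python
# Classical fourth-order Runge-Kutta step for y' = f(t, y).
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Demonstration on y' = y, y(0) = 1, integrated to t = 1 (exact answer: e).
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
```

In a retarded delay equation the right-hand side also needs y at time t - τ, so such schemes store the computed history and interpolate into it when evaluating f; the single step above is otherwise unchanged.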
The systems are closed via the 'linear chain trick'. Linear stability analysis of the systems and conditions for the Hopf bifurcation that initiates oscillations are investigated, including deriving the normal form at bifurcation and deducing the stability of the resulting limit cycle attractor. The value of the delay parameter $a = a_{Hopf}$ at Hopf bifurcation picks out the onset of Amplitude Death (AD) in all three systems, with oscillations at larger values (corresponding to weaker delay). In both van der Pol systems, the Hopf-generated limit cycles for $a > a_{Hopf}$ turn out to be remarkably stable under very large variations of all other system parameters beyond the Hopf bifurcation point, and do not undergo further symmetry-breaking, cyclic-fold, flip, transcritical or Neimark-Sacker bifurcations. Numerical simulations reveal strong distortion and rotation of the limit cycles in phase space as the parameters are pushed far into the post-Hopf regime, and also reveal other features, such as how the oscillation amplitudes and time periods of the physical variables on the limit cycle attractor change as the delay and other parameters are varied. For the chaotic system, very strong delays may still lead to the cessation of oscillations and the onset of AD. Varying the other important system parameter, the parametric excitation, leads to a rich sequence of dynamical behaviors, with the bifurcations leading from one regime (or type of attractor) into the next being carefully tracked. Watermarking of Electronic Patient Records in Parkinson Disease Affected Speech: A Robust and Secure Audio Hiding Technique for Smart e-Healthcare Applications Aniruddha Kanhe, Aghila Gnanasekaran A cloud-based framework for the security of electronic patient records in the telediagnosis of Parkinson's disease (PD) from speech signals is proposed in this paper.
The security and authenticity of the patient's personal information is achieved using the proposed Discrete Cosine Transform-Singular Value Decomposition (DCT-SVD) based audio watermarking technique. The automatic classification of PD-affected speech signals from healthy persons' speech signals requires high computational accuracy, and in this work it is addressed by extracting various time-frequency features for a Support Vector Machine (SVM) classifier. In the proposed framework, the speech signal of a person suspected of PD is recorded by a mobile application. The personal information is securely embedded in this speech signal using the proposed DCT-SVD based audio watermarking technique before transmission to the cloud for automatic classification. To ensure the security of the electronic patient record (EPR), the proposed watermarking technique is tested against various signal processing attacks, and the performance of the classifier has been evaluated by computing classification sensitivity, specificity, accuracy and the area under the receiver operating characteristic (ROC) curve. The theoretical proofs and experimental results show that the proposed framework provides telediagnosis of PD-affected patients without compromising the security and privacy of patients' records. A classification accuracy of 89% with a high data payload of 6 kbps is reported, leading to a smart and secure e-healthcare application. Abstract Fractals Marat Akhmet, Ejaily Milad Alejaily We develop a new definition of fractals which can be considered as an abstraction of the fractals determined through self-similarity. The definition is formulated by imposing conditions which govern a relation between subsets of a metric space so as to build a porous self-similar structure. Examples are provided to confirm that the definition covers a large class of self-similar fractals. The new concepts create new frontiers for fractal and chaos investigations.
The Global Attractiveness of the Fixed Point of a Gonosomal Evolution Operator Akmal T. Absalamov In the paper we prove a conjecture of U.A. Rozikov and R. Varro about globally attractiveness of a unique nonhyperbolic fixed point of the normalized gonosomal evolution operator of a sex linked inheritance. Holling-Tanner Predator-Prey Model with Type-IV Functional Response and Harvesting Nurul Huda Gazi, Subrata Kumar Biswas In this paper we modify Holling-Tanner predator-preymodel by using type-IV functional response in prey species in lieu of type-II functional response. Harvesting is used in predator as well as prey species. This model is compared with a special type of Kolmogorov model. In the case of quadratic harvesting, the fixed points are computed after nondimensionalization. For the non-existence of periodic orbits in the first quadrant we apply a condition of the general Kolmogorov model to exist a Dulac function. We show that this system does not have periodic orbits with the help of numerical simulation and graphical representation. Bifurcation Analysis of Wilson-Cowan Model with Singular Impulses Marat Akhmet, Sabahattin C¸ a˘g The paper concernswithWilson-Cowan neuralmodel. Themain novelty of the study is that besides the traditional singularity of themodel, we consider singular impulses. A new technique of analysis of the phenomenon is suggested. This allows to consider the existence of solutions of the model and bifurcation in ultimate neural behavior observed through numerical simulations. The bifurcations are reasoned by impulses and singularity in the model and they concern the structure of attractors, which consist of newly introduced sets in the phase space such that medusas and rings. Copyright © 2011-2020 L & H Scientific Publishing. All rights reserved.
CommonCrawl
\begin{document} \begin{abstract} We give sharp upper bounds on the anticanonical degree of fake weighted projective spaces, only depending on the dimension and the Gorenstein index. \end{abstract} \maketitle \section{Introduction} A $d$-dimensional \emph{fake weighted projective space} is a quotient $X = ({\mathbb C}^{d+1} \backslash \{0\}) / G$ by a diagonal action of $G := {\mathbb C}^*\!\times\!\Gamma$, where $\Gamma$ is a finite abelian group and the factor~${\mathbb C}^*$ acts via positive weights. Any fake weighted projective space~$X$ is normal,~${\mathbb Q}$-factorial, of Picard number one and is a Fano variety, i.e. its anticanonical divisor~$-\mathcal{K}$ is ample. Apart from the classical projective spaces, all fake weighted projective spaces are singular, but have at most abelian quotient singularities. Fake weighted projective spaces form an interesting example class for the general question of effectively bounding geometric data of a Fano variety in terms of its singularities. For instance, Kasprzyk~\cite{Ka} bounds the order of the torsion part of the divisor class group of a fake weighted projective space $X$ provided that it has at most canonical singularities. Another invariant of the singularities is the \emph{Gorenstein index}, i.e., the minimal positive integer $\iota$ such that $\iota \mathcal{K}$ is Cartier. In the case of Gorenstein index $\iota = 1$, Nill~\cite{Ni} provides a bound for the \emph{degree} of a $d$-dimensional fake weighted projective space $X$, i.e., the self intersection number $(-\mathcal{K})^d$ of its anticanonical divisor. In the present paper, we extend Nill's bound to higher Gorenstein indices. 
For any $d \ge 2$ define a $d+1$ tuple of positive integers by $$ Q_{\iota,d} := \left( \frac{2 t_{\iota,d}}{s_{\iota,1}}, \ldots , \frac{2 t_{\iota,d}}{s_{\iota,d-1}}, 1, 1 \right), \ \ s_{\iota,k} := \iota\, s_{\iota,1} \cdots s_{\iota,k-1}+1, \ \ t_{\iota,k} := \iota\, s_{\iota,1} \cdots s_{\iota,k\!-\!1}, $$ where $s_{\iota,1} := \iota + 1$. Our main result provides sharp upper bounds on the degree $(-\mathcal{K})^d$ in terms of the Gorenstein index and lists the cases attaining these bounds: \begin{introthm}\label{thm:main} The anticanonical degree of any $d$-dimensional fake weighted projective space $X$ of Gorenstein index $\iota$ is bounded according to the following table. \begin{longtable}{c|c|cc|cc|c} $d$ & $1$ & $2$ & $2$ & $3$ & $3$ & $\ge 4$ \\[2pt]\hline &&&&&\\[-9pt] $\iota$ & $\ge 1$ & $1$ & $\ge\!2$ & $1$ & $\ge\!2$ & $\ge 1$ \\[2pt]\hline &&&&&\\[-10pt] $\begin{array}{c} \text{bound on}\\ (-\mathcal{K})^d \end{array}$ & $2$ & $9$ & $\frac{2(\iota+1)^2}{\iota}$ & $72$ & $\frac{2\,t_{\iota,3}^2}{\iota^4}$ & $\frac{2\,t_{\iota,d}^2}{\iota^{d+1}}$ \\[8pt]\hline &&&&&\\[-10pt] $\begin{array}{c} \text{attained}\\ \text{exactly by} \end{array}$ & ${\mathbb P}^1$ & ${\mathbb P}^2$ & ${\mathbb P}(2\iota,1,1)$ & $\begin{array}{l} {\mathbb P}(3,1,1,1), \\ {\mathbb P}(6,4,1,1) \end{array}$ & ${\mathbb P}(Q_{\iota,3})$ & ${\mathbb P}(Q_{\iota,d})$ \end{longtable} \noindent Equality on the degree holds if and only if $X$ is isomorphic to one of the weighted projective spaces in the last row of the table. \end{introthm} The article is organized as follows. Section \ref{section:fpws-ws} provides basic properties of fake weighted projective spaces. In Section \ref{section:gi-uf} we assign to any $d$-dimensional fake weighted projective space of Gorenstein index $\iota$ a certain partition of $1/\iota$ into $d+1$ unit fractions and give a formula to compute the anticanonical degree in terms of the denominators of these unit fractions. 
Section \ref{section:syl-seq} contains the number theoretic part of the proof of Theorem \ref{thm:main}. In Section \ref{section:proof} we complete the proof of the main result. This amounts to constructing a weighted projective space of given dimension $d$ and Gorenstein index $\iota$ whose unit fraction partition of $1/\iota$ meets a maximality condition. \section{Fake weighted projective spaces}\label{section:fpws-ws} We recall basic properties of fake weighted projective spaces and fix our notation, see also \cite[Sect.~3]{Ni}. The reader is assumed to be familiar with the very basics of toric geometry \cite{CoLiSch, Fu}. Throughout the article $N$ is a rank $d$ lattice for some $d \in {\mathbb Z}_{\ge 2}$. Its dual lattice is denoted by $M = {\rm Hom}(N,{\mathbb Z})$ with pairing $\bangle{\cdot\, , \cdot } \colon M \times N \rightarrow {\mathbb Z}$. We write $N_{\mathbb R} := N \otimes_{\mathbb Z} {\mathbb R}$ and $M_{\mathbb R} := M \otimes_{\mathbb Z} {\mathbb R}$. Polytopes $P \subseteq N_{\mathbb R}$ are assumed to be full dimensional with $\mathbf{0} \in N_{\mathbb R}$ in their interior. The \emph{normalized volume} of a $d$-dimensional polytope $P$ is ${\rm Vol}(P) = d! {\rm vol}(P)$, where ${\rm vol}(P)$ denotes its euclidean volume. The \emph{dual} of a polytope $P \subseteq N_{\mathbb R}$ is the polytope \begin{equation*} P^* \ := \ \{ u \in M_{\mathbb R} ; \, \bangle{u,v} \ge -1 \text{ for all } v \in P\} \ \subseteq \ M_{\mathbb R}. \end{equation*} For a facet $F$ of $P$ we denote by $u_F \in M_{{\mathbb R}}$ the unique linear form with $\bangle{u_F,v} = -1$ for all $v \in F$. We have \begin{equation*} P^* \ = \ {\rm conv}(\, u_F; \, F \text{ facet of } P\, ), \qquad P \ = \ \{v \in N_{\mathbb R} ; \, \bangle{u_F,v} \ge -1,\, F \text{ facet of } P\}. \end{equation*} A \emph{lattice polytope} $P \subseteq N_{\mathbb R}$ is a polytope whose vertices are lattice points in $N$. 
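For readers who wish to experiment, the sequences $s_{\iota,k}$, $t_{\iota,k}$ and the bounds of Theorem \ref{thm:main} are easy to tabulate. The following Python sketch is purely illustrative and not part of the formal text; all function names are ours.

```python
from fractions import Fraction

def syl_seq(iota, kmax):
    """s_{iota,1} = iota + 1 and s_{iota,k+1} = s_{iota,k}(s_{iota,k} - 1) + 1."""
    s = [iota + 1]
    while len(s) < kmax:
        s.append(s[-1] * (s[-1] - 1) + 1)
    return s

def t_val(iota, k):
    """t_{iota,k} = s_{iota,k} - 1."""
    return syl_seq(iota, k)[-1] - 1

def weights(iota, d):
    """The weight system Q_{iota,d} = (2t_d/s_1, ..., 2t_d/s_{d-1}, 1, 1) for d >= 2."""
    s = syl_seq(iota, d)
    td = s[-1] - 1
    return tuple(2 * td // sk for sk in s[:-1]) + (1, 1)

def degree_bound(iota, d):
    """Sharp bound on (-K)^d from the table: generically 2 t_{iota,d}^2 / iota^(d+1);
    the single exceptional entry is (d, iota) = (2, 1), where the bound is 9."""
    if (d, iota) == (2, 1):
        return Fraction(9)
    return Fraction(2 * t_val(iota, d) ** 2, iota ** (d + 1))
```

For instance, \texttt{weights(1, 3)} returns $(6,4,1,1)$ and \texttt{weights(2, 2)} returns $(4,1,1)$, matching ${\mathbb P}(6,4,1,1)$ and ${\mathbb P}(2\iota,1,1)$ from the table, while \texttt{degree\_bound(1, 4)} evaluates to $2\,t_{1,4}^2 = 3528$.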
We regard two lattice polytopes $P \subseteq N_{{\mathbb R}}$ and $P' \subseteq N'_{{\mathbb R}}$ as isomorphic if there is a lattice isomorphism $\varphi \colon N \rightarrow N'$ mapping $P$ bijectively to $P'$. \begin{proposition} The fake weighted projective spaces are precisely the toric varieties $X = X(P)$ associated to the face fan of a lattice simplex $P \subseteq N_{\mathbb R}$ with primitive vertices. \end{proposition} \begin{proof} This follows from \cite[Sect.~5.1]{CoLiSch}. \end{proof} Two fake weighted projective spaces are isomorphic if and only if their corresponding lattice simplices are isomorphic. The (true) weighted projective spaces among them correspond to lattice simplices whose vertices generate the lattice. Many geometric properties of a fake weighted projective space can be read off the corresponding lattice simplex. Here we focus our attention on the Gorenstein index and the anticanonical degree. \begin{definition} The \emph{index} of a lattice polytope $P \subseteq N_{\mathbb R}$ is the positive integer \begin{equation*} \iota_P \ := \ \min ( \, k \in {\mathbb Z}_{\ge 1} ; \, k P^* \text{ is a lattice polytope}\, ). \end{equation*} \end{definition} \begin{lemma}\label{lemma:gorind} The Gorenstein index of any fake weighted projective space $X = X(P)$ equals the index $\iota_P$ of the corresponding lattice simplex $P \subseteq N_{\mathbb R}$. \end{lemma} \begin{proof} The dual polytope $P^*$ is the polytope associated to $-\mathcal{K}_X$, see \cite[Sect.~4.3]{CoLiSch}. The assertion follows from \cite[Thm.~4.2.8]{CoLiSch}. \end{proof} \begin{lemma}{\rm{(cf.~\cite[p.~111]{Fu})}}\label{lemma:degvol} Let $X = X(P)$ be a $d$-dimensional fake weighted projective space. Then we have $(-\mathcal{K}_X)^d \ = \ {\rm Vol}(P^*)$. \end{lemma} Any weighted projective space ${\mathbb P}(q_0,\dots,q_d)$ is up to an isomorphism uniquely determined by its weights $(q_0,\dots,q_d)$.
More generally we assign weights to any lattice simplex $P \subseteq N_{\mathbb R}$. \begin{definition}{(cf.~\cite{Co,Ni})}\label{def:weightsys} A \emph{weight system $Q$ (of length $d$)} is a $(d+1)$-tuple $Q = (q_0, \dots, q_d)$ of positive integers. We call \begin{equation*} |Q| \ := \ q_0 + \dots + q_d, \qquad \lambda_Q \ := \ \gcd(Q), \qquad Q_{{\rm red}} := Q/\lambda_Q \end{equation*} the \emph{total weight}, the \emph{factor} and the \emph{reduction} of $Q$. A weight system $Q$ is called \emph{reduced} if it coincides with its reduction and it is called \emph{well-formed} if we have $\gcd(q_j \, ; \, j = 0,\dots,d,\ j \ne i ) = 1$ for all $i = 0,\dots, d$. \end{definition} \begin{definition}{(cf.~\cite{Co,Ni})} To any lattice simplex $P = {\rm conv}(v_0,\dots,v_d) \subseteq N_{\mathbb R}$ we associate a weight system by \begin{equation*} Q_P \ := \ (q_0,\dots,q_d), \qquad q_i \ := \ |\det(\, v_j ; \, j = 0,\dots, d, \ j \ne i\,)|. \end{equation*} \end{definition} The weight systems of isomorphic lattice simplices coincide up to order. Denote by $v_0,\dots,v_d \in N$ the vertices of the lattice simplex $P \subseteq N_{\mathbb R}$. The reduction $(Q_P)_{{\rm red}}$ is the unique reduced weight system satisfying \begin{equation*} \sum\limits_{i=0}^d q_i v_i \ = \ 0. \end{equation*} Moreover, if the vertices of $P$ are primitive, then $(Q_P)_{\rm red}$ is well-formed. Following the naming convention in \cite{Ni} we call $\lambda_P := [N:N_P]$ the \emph{factor} of the lattice simplex $P \subseteq N_{\mathbb R}$, where $N_P \subseteq N$ is the sublattice generated by the vertices of $P$. In \cite{Ka} it is called the \emph{multiplicity} of $P$. \begin{lemma}{\rm{(cf. \cite[Lemma~2.4]{Co})}} For any lattice simplex $P$ we have $\lambda_P = \lambda_{Q_P}$. \end{lemma} If $P$ has primitive vertices then its factor $\lambda_P$ coincides with the order of the torsion part of ${\rm Cl}(X(P))$. 
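As an aside, the weight system $Q_P$ of the preceding definition, its reduction and the well-formedness condition can be computed directly from the vertices of a simplex. The following Python sketch (helper names are ours, for illustration only) does this for small examples, using exact rational arithmetic for the determinants.

```python
from fractions import Fraction
from math import gcd
from functools import reduce

def det(rows):
    """Exact determinant of a square integer matrix via Gaussian elimination over Fractions."""
    m = [[Fraction(x) for x in row] for row in rows]
    n = len(m)
    sign, prod = 1, Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if m[r][c] != 0), None)
        if p is None:
            return 0
        if p != c:
            m[c], m[p] = m[p], m[c]
            sign = -sign
        prod *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return int(sign * prod)

def weight_system(vertices):
    """Q_P = (q_0,...,q_d) with q_i = |det of the vertices with v_i omitted|."""
    return tuple(abs(det([v for j, v in enumerate(vertices) if j != i]))
                 for i in range(len(vertices)))

def reduction(Q):
    """Q_red = Q divided by its factor lambda_Q = gcd(Q)."""
    lam = reduce(gcd, Q)
    return tuple(q // lam for q in Q)

def is_well_formed(Q):
    """gcd of the weights with q_i omitted equals 1, for every i."""
    return all(reduce(gcd, [q for j, q in enumerate(Q) if j != i]) == 1
               for i in range(len(Q)))
```

For example, the simplex with vertices $(1,0),(0,1),(-1,-1)$ yields $(1,1,1)$, i.e.\ ${\mathbb P}^2$, while the vertices $(-1,0),(1,1),(1,-1)$ yield the weights $(2,1,1)$ of ${\mathbb P}(2,1,1)$.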
In particular, $X(P)$ is a weighted projective space if and only if $Q_P$ is reduced. The following Theorem is a reformulation of~\cite[4.5--4.7]{Co}. Compare also~\cite[Thm.~5.4.5]{Ba94} and~\cite[Prop.~2]{BoBo}. \begin{theorem}\label{thm:wpsfwps} To any well-formed weight system $Q$ of length $d$ there exists a $d$-dimensional lattice simplex $P_Q \subseteq N_{\mathbb R}$, unique up to an isomorphism, with $Q_{P_Q} = Q$. Any fake weighted projective space $X = X(P)$ with $(Q_P)_{\rm red} = Q$ is isomorphic to the quotient of ${\mathbb P}(Q)$ by the action of the finite group $N/N_P$ corresponding to the inclusion $N_P \subseteq N$. \end{theorem} As an immediate consequence of Theorem \ref{thm:wpsfwps} we can relate the Gorenstein index and the anticanonical degree of a fake weighted projective space $X(P)$ to those of the weighted projective space ${\mathbb P}((Q_P)_{\rm red})$. \begin{corollary}\label{cor:gidegwpsfwps} Let $X = X(P)$ be a $d$-dimensional fake weighted projective space and let $X' = {\mathbb P}((Q_P)_{\rm red})$ be the corresponding weighted projective space. Then the Gorenstein index of $X$ is a multiple of the Gorenstein index of $X'$. Moreover we have $\lambda_P (-\mathcal{K}_X)^d = (-\mathcal{K}_{X'})^d$. In particular, $(-\mathcal{K}_X)^d = (-\mathcal{K}_{X'})^d$ holds if and only if $X$ is isomorphic to $X'$. \end{corollary} \begin{proof} By Theorem~\ref{thm:wpsfwps} there is a square integer matrix $H$, given with respect to a lattice basis of $N$, with $|\det(H)| = \lambda_P$ such that $P = H P_Q$ holds. Dualizing yields $P_Q^* = H^* P^*$. Now apply Lemma \ref{lemma:gorind} and Lemma \ref{lemma:degvol}. \end{proof} \section{Unit fraction partitions}\label{section:gi-uf} To any $d$-dimensional lattice simplex $P \subseteq N_{\mathbb R}$ of index $\iota$ we assign a partition of $1/\iota$ into a sum of $d+1$ unit fractions.
The main result of this section is Proposition \ref{prop:uf-vol}, where we give a formula to compute the normalized volume of the dual polytope $P^*$ in terms of the denominators of these unit fractions. \begin{definition} Let $\iota \in {\mathbb Z}_{\ge 1}$. A tuple $A = (a_1,\dots,a_n) \in {\mathbb Z}^n_{\ge 1}$ is called a \emph{uf-partition of $\iota$} (\emph{of length $n$}) if the following holds: \begin{equation*} \frac{1}{\iota} \ = \ \sum\limits_{k=1}^n \frac{1}{a_k}. \end{equation*} \end{definition} \begin{proposition}\label{prop:P-uf} Let $P \subseteq N_{\mathbb R}$ be a $d$-dimensional lattice simplex of index $\iota$ with weight system $Q_P = (q_0, \dots,q_d)$. Then \begin{equation*} A(P) \ := \ \left( \frac{\iota |Q_P|}{q_0}, \dots, \frac{\iota |Q_P|}{q_d} \right) \end{equation*} is a uf-partition of $\iota$ of length $d+1$. We call it the uf-partition of $\iota$ associated to $P$. \end{proposition} \begin{proof} We show that $A(P)$ consists of positive integers. Let $v_0,\dots,v_d \in N$ be the vertices of $P$. For $0 \le i \le d$ let $F_i = {\rm conv}(v_0,\dots,\hat{v}_i,\dots,v_d)$ be the $i$-th facet of $P$, where $\hat{v}_i$ means that $v_i$ is omitted. We have \begin{equation*} 0 \ = \ \sum\limits_{j=0}^d q_j \bangle{\iota u_{F_i},v_j} \ = \ \bangle{\iota u_{F_i},v_i}q_i\, - \iota\! \sum\limits_{\tiny\begin{array}{l} j\! =\! 0, \\ j\! \ne\! i \end{array}}^d q_j \ = \ (\bangle{\iota u_{F_i},v_i} + \iota) q_i - \iota |Q_P|. \end{equation*} By definition of $\iota$ we have $\iota u_{F_i} \in M$. Thus $q_i$ divides $\iota |Q_P|$, so $A(P)$ consists of positive integers. Now summing over the reciprocals of $A(P)$ we see that it is in fact a uf-partition of $\iota$. \end{proof} \begin{proposition}\label{prop:uf-vol} For any $d$-dimensional lattice simplex $P \subseteq N_{\mathbb R}$ with associated uf-partition $A(P) = (a_0,\dots,a_d)$ of $\iota_P$ we have \begin{equation*} \lambda_P {\rm Vol}(\iota_P P^*) \ = \ \frac{a_0\cdots a_d}{{\rm lcm}(a_0,\dots,a_d)}.
\end{equation*} \end{proposition} Proposition \ref{prop:uf-vol} generalizes \cite[Prop.~4.5.5]{Ni} to the case $\iota \ge 2$. For the proof of Proposition \ref{prop:uf-vol} and in preparation for the proof of Theorem \ref{thm:main} we extend Batyrev's correspondence between weight systems of reflexive polyhedra and uf-partitions of $1$ given in \cite[Thm.~5.4.3]{Ba94} to the case of higher indices. \begin{definition} The \emph{index of a weight system} $Q = (q_0,\dots,q_d)$ is the positive integer \begin{equation*} \iota_Q \ := \ \min ( \, k \in {\mathbb Z}_{\ge 1} ; \, q_i \mid k |Q| \text{ for all } i = 0,\dots,d \, ). \end{equation*} \end{definition} \begin{definition} A tuple $A = (a_1,\dots,a_n) \in {\mathbb Z}^n_{\ge 1}$ is called a \emph{uf-partition} if it is a uf-partition of $\iota$ for some $\iota \in {\mathbb Z}_{\ge 1}$. For a uf-partition $A = (a_1,\dots,a_n)$ of $\iota$ we call \begin{equation*} t_A \ := \ {\rm lcm}(a_1,\dots,a_n), \qquad \lambda_A \ := \ \gcd(\iota,a_1,\dots,a_n), \qquad A_{{\rm red}} \ := \ A/\lambda_A \end{equation*} the \emph{total weight}, the \emph{factor} and the \emph{reduction} of $A$. A uf-partition $A$ is called \emph{reduced} if it coincides with its reduction and it is called \emph{well-formed} if $a_i \mid {\rm lcm}(a_j \, ; \, j \ne i)$ holds for all $i = 1,\dots,n$. \end{definition} \begin{proposition}\label{prop:ws-ufp} Let $Q = (q_0,\dots,q_d)$ be any weight system of length $d$ and index $\iota$ and let $A = (a_0,\dots,a_d)$ be any uf-partition of length $d+1$. Then the following hold: \begin{enumerate} \item $A(Q) := ( \iota |Q|/q_0, \dots, \iota |Q|/q_d )$ is a reduced uf-partition of $\iota$ of length $d+1$. \item $Q(A) := ( t_A/a_0, \dots, t_A/a_d )$ is a reduced weight system of length $d$. \item $Q(A(Q)) = Q_{\rm red}$ and $A(Q(A)) = A_{\rm red}$ and this correspondence respects well-formedness. \end{enumerate} \end{proposition} For the proof of Proposition \ref{prop:ws-ufp} we need the following Lemma.
\begin{lemma}\label{lemma:detG} For $\iota, a_1,\dots,a_n \in {\mathbb Z}$ set \begin{equation*} G(\iota;a_1,\dots,a_n) \ := \ \left[\begin{array}{ccccccc} (a_1 - \iota ) && -\iota && \dots && -\iota \\ &&&&&&\\ -\iota && (a_2 - \iota) && \ddots && \vdots \\ &&&\ddots&&&\\ \vdots && \ddots && (a_{n-1} - \iota) && -\iota \\ &&&&&&\\ -\iota && \dots && -\iota && (a_n - \iota) \end{array}\right]. \end{equation*} Then \begin{equation*} \det(G(\iota;a_1,\dots,a_n)) \ = \ a_1\cdots a_n - \iota \sum\limits_{i=1}^n \prod\limits_{j \ne i} a_j. \end{equation*} \end{lemma} \begin{proof} We prove the Lemma by induction on $n$. The cases $n=1$ and $n=2$ are verified by direct computation. Let $n \ge 3$. Subtracting the second to last row of $G := G(\iota;a_1,\dots,a_n)$ from the last row, we obtain \begin{equation*} \det(G) \ = \ a_n \det(G') + a_{n-1} \det(G''), \end{equation*} where $G' = G(\iota; a_1,\dots,a_{n-1})$ and $G'' = G(\iota; a_1,\dots,a_{n-2},0)$. By the induction hypothesis we have \begin{equation*} \det(G') \ = \ a_1\cdots a_{n-1} - \iota \sum\limits_{i=1}^{n-1} \prod\limits_{j \ne i} a_j, \qquad \det(G'') \ = \ - \iota a_1\cdots a_{n-2}. \end{equation*} Inserting these two expressions into the formula for $\det(G)$ above yields the assertion. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:ws-ufp}] We prove (i). The weight system $Q$ is of index $\iota$, so $q_i$ divides $\iota |Q|$. Hence $A(Q)$ consists of positive integers. Summing over the reciprocals of $A(Q)$ shows that it is a uf-partition of $\iota$. Assume $A(Q)$ is not reduced and let $A' = A(Q)_{\rm red}$. Then $A'$ is a uf-partition of some $\iota' < \iota$. Thus $q_i \mid \iota'|Q|$ holds for all $i = 0,\dots,d$, contradicting the minimality of $\iota$. So $A(Q)$ is reduced. Item (ii) follows from the fact that $t_A$ is the least common multiple of $a_0,\dots,a_d$. We prove (iii). Let $Q = (q_0,\dots,q_d)$ be a weight system of length $d$ and index $\iota$ and write $ A(Q) = (a_0,\dots,a_d)$. Consider the matrix $G = G(\iota;a_0,\dots,a_d)$ as defined in Lemma \ref{lemma:detG}.
Both $Q$ and $Q(A(Q))$ are contained in the kernel of $G$ and the latter weight system is reduced. Thus it suffices to show that $G$ has rank $d$. Its kernel is non-trivial, so it has at most rank $d$. The inequality \begin{equation*} \frac{1}{\iota} \ > \ \frac{1}{a_0} + \dots + \frac{1}{a_{d-1}} \end{equation*} yields $\det(G(\iota;a_0,\dots,a_{d-1})) > 0$. Hence the minor of $G$, obtained by deleting the last column and row, does not vanish, which yields ${\rm rk}(G) = d$. The kernel of $G$ is thus one-dimensional, so $Q(A(Q))$ is a rational multiple of $Q$ and, being reduced, coincides with $Q_{\rm red}$. Now let $A = (a_0,\dots,a_d)$ be a uf-partition of $\iota$ of length $d+1$. Write $Q := Q(A) = (q_0,\dots,q_d)$ and $A(Q) = (a_0', \dots, a_d')$. This is a uf-partition of $\iota_Q$ and we have $a_i' q_i = \iota_Q |Q|$ for all $i=0,\dots,d$. Note that $\iota_Q$ divides $\iota$. Write $\iota = \lambda \iota_Q$. Using $q_i = t_A/a_i$ and $\iota |Q| = \iota \sum_{j=0}^d t_A/a_j = t_A$ we obtain \begin{equation*} \lambda a_i' \ = \ \frac{\iota}{\iota_Q}\, \frac{\iota_Q|Q|}{q_i} \ = \ \frac{\iota |Q|\, a_i}{t_A} \ = \ a_i. \end{equation*} Hence $A = \lambda\, A(Q)$ holds. As $A(Q)$ is reduced, this yields $A(Q(A)) = A_{\rm red}$. For the last assertion in (iii) let $Q = (q_0,\dots,q_d)$ be a reduced weight system of length $d$ and write $A := A(Q) = (a_0,\dots,a_d)$. We have $q_i = t_{A(Q)}/a_i$. The well-formedness of $Q$ is equivalent to saying that \begin{equation*} \prod\limits_{j \ne i} a_j \ = \ t_{A(Q)} \gcd\left( \prod\limits_{k \ne i, j} a_k; \, j \ne i \right) \end{equation*} holds for all $i = 0,\dots,d$. This in turn is equivalent to the well-formedness of $A(Q)$. \end{proof} \begin{corollary}\label{cor:ws-ufp} For any $d$-dimensional lattice simplex $P\subseteq N_{\mathbb R}$ we have $A(P)_{\rm red} = A(Q_P)$ and $\iota_P |Q_P| = \lambda_P t_{A(P)}$. \end{corollary} \begin{proof} The first assertion follows from the definitions of $A(P),\, A(Q_P)$ and Proposition \ref{prop:ws-ufp} (iii). For the second assertion note that $|(Q_P)_{\rm red}| = t_{A(Q_P)}/\iota_{Q_P} = t_{A(P)}/\iota_P$ holds.
\end{proof} \begin{proof}[Proof of Proposition \ref{prop:uf-vol}] For any weight system $Q = (q_0,\dots, q_d)$ of length $d$ we define (cf. \cite{Co,Ni}) \begin{equation*} m_Q \ := \ \frac{|Q|^{d-1}}{q_0 \cdots q_d} \ \in \ {\mathbb Q}_{>0}. \end{equation*} By \cite[Proposition~3.6]{Ni} the weight systems $Q_P$ and $Q_{\iota_P P^*}$ are related by $Q_{\iota_P P^*} = \iota_P^d m_{Q_P} Q_P$. Note that for the normalized volume of $P \subseteq N_{\mathbb R}$ we have ${\rm Vol}(\iota_P P^*) = |Q_{\iota_P P^*}|$. Moreover by Corollary \ref{cor:ws-ufp} we have $\iota_P |Q_P| = \lambda_P t_{A(P)}$. We obtain \begin{equation*} \lambda_P {\rm Vol}(\iota_P P^*) \ = \ \lambda_P \iota_P^d m_{Q_P} |Q_P| \ = \ \lambda_P \iota_P^d \frac{|Q_P|^d}{q_0 \cdots q_d} \ = \ \frac{a_0 \cdots a_d}{{\rm lcm}(a_0,\dots,a_d)}. \end{equation*} \end{proof} \section{Sharp bounds for uf-partitions}\label{section:syl-seq} Proposition \ref{prop:uf_ineq} gives an upper bound on the expression obtained in Proposition~\ref{prop:uf-vol}. This constitutes the number theoretic part of the proof of Theorem~\ref{thm:main}. The Lemmas thereafter are preparation for the proof of Proposition \ref{prop:uf_ineq}. \begin{definition}\label{def:sylseq} For any $\iota \in {\mathbb Z}_{\ge 1}$ we define a sequence $S_\iota = (s_{\iota,1}, s_{\iota,2}, \dots)$ of positive integers by \begin{equation*} s_{\iota,1} \ := \ \iota+1, \qquad s_{\iota,k+1} \ := \ s_{\iota,k} (s_{\iota,k} - 1) + 1. \end{equation*} Moreover, for any $k \in {\mathbb Z}_{\ge 1}$ we set $t_{\iota,k} := s_{\iota,k} - 1$. We denote by ${\rm syl}_{\iota,n}$ the uf-partition of $\iota$ of length $n$ given by \begin{equation*} {\rm syl}_{\iota,n} \ := \ (s_{\iota,1}, \dots, s_{\iota,n-2}, 2\, t_{\iota,n-1}, 2\, t_{\iota,n-1}). \end{equation*} Following the naming convention in \cite{Ni} we call ${\rm syl}_{\iota,n}$ the \emph{enlarged sylvester partition} (\emph{of $\iota$ of length $n$}). 
\end{definition} \begin{proposition}\label{prop:uf_ineq} Let $\iota \in {\mathbb Z}_{\ge 1}$ and $n \ge 3$. Assume $(\iota,n) \ne (1,3)$. For any uf-partition $A = (a_1,\dots,a_n)$ of $\iota$ with $a_1 \le \dots \le a_n$ we have \begin{equation*} \frac{a_1 \cdots a_n}{{\rm lcm}(a_1,\dots,a_n)} \ \le \ a_1 \cdots a_{n-1} \ \le \ \frac{2\, t_{\iota,n-1}^2}{\iota}. \end{equation*} Equality in the second case holds if and only if one of the following holds: \begin{enumerate} \item[-] $(\iota,n)=(2,3)$ and $A = (6,6,6)$. \item[-] $(\iota,n)=(1,4)$ and $A = (2,6,6,6)$. \item[-] $A$ is the enlarged sylvester partition ${\rm syl}_{\iota,n}$. \end{enumerate} \end{proposition} This Proposition is a generalization of \cite[Thm.~5.1.3]{Ni}. There Nill utilizes and expands the techniques of Izhboldin and Kurliandchik presented in \cite{IzKu}. Here we modify Nill's arguments to incorporate the cases for $\iota \ge 2$. Let $\iota, n \in {\mathbb Z}_{\ge 1}$. We denote by $A_\iota^n \subseteq {\mathbb R}^n$ the compact set of all tuples $x \in {\mathbb R}^n$ with \begin{enumerate} \item[(A1)] $x_1 \ge \dots \ge x_n \ge 0$, \item[(A2)] $x_1 + \dots + x_n = 1/\iota$, \item[(A3)] $x_1\cdots x_k \le \iota (x_{k+1} + \dots + x_n)$ for all $k = 1, \dots, n-1$. \end{enumerate} \begin{lemma}\label{lemma:ufp_in_A} For any uf-partition $A = (a_1,\dots,a_n)$ of $\iota$ with $a_1 \le \dots \le a_n$ the tuple $(1/a_1,\dots,1/a_n)$ is contained in $A_{\iota}^n$. \end{lemma} \begin{proof} The tuple $(1/a_1,\dots,1/a_n)$ fulfills conditions (A1) and (A2). For the third condition let $1 \le k \le n-1$. Then we have \begin{equation*} \iota \left( \frac{1}{a_{k+1}} + \dots + \frac{1}{a_{n}} \right) \ = \ 1 - \iota \left( \frac{1}{a_1} + \dots + \frac{1}{a_k} \right) \ = \ \frac{a_1 \cdots a_k - \iota\left(\sum_{j=1}^k \prod_{i \ne j} a_i\right)}{a_1 \cdots a_k}. \end{equation*} The numerator on the right hand side is a positive integer. In particular, it is at least one. 
\end{proof} The main part of the proof of Proposition \ref{prop:uf_ineq} is incorporated in the following Lemma, which extends \cite[Lemma~5.6]{Ni}. \begin{lemma}\label{lemma:izku} Let $n \ge 3$, $\iota \in {\mathbb Z}_{\ge 1}$ and let $x \in A_\iota^n$. Then, except for the case $(\iota,n) = (1,3)$, we have \begin{equation*} x_1\cdots x_{n-1} \ge \frac{\iota}{2\,t_{\iota,n-1}^2}. \end{equation*} Equality holds if and only if one of the following holds: \begin{enumerate} \item[-] $(\iota,n) = (2,3)$ and $(x_1,x_2,x_3) = (1/6,1/6,1/6)$. \item[-] $(\iota,n) = (1,4)$ and $(x_1,x_2,x_3,x_4) = (1/2,1/6,1/6,1/6)$. \item[-] $(1/x_1,\dots,1/x_n)$ is the enlarged sylvester partition ${\rm syl}_{\iota,n}$. \end{enumerate} \end{lemma} We will need the following result, which is an extension of \cite[Lemma~5.4]{Ni} to higher indices. \begin{lemma}\label{lemma:t_n-ineq} Let $\iota \in {\mathbb Z}_{\ge 1}$, $n \in {\mathbb Z}_{\ge 1}$ and $1 \le r \le n$. Then, except for the case $(\iota,n,r) = (1,2,2)$, we have \begin{equation*} (r+1)^r\, t_{\iota,n-r+1}^{r+1} \ \le \ 2\, t_{\iota,n}^2. \end{equation*} Equality holds if and only if either $r = 1$ or $(\iota,n,r) = (1,3,2)$ or $(\iota,n,r) = (2,2,2)$. \end{lemma} \begin{proof} We prove the Lemma by induction on $n$ and $r$. The case $r = 1$ is clear. Let $r \ge 2$. The cases $n = 2$ and $n = 3$ are verified by direct computation. Let $n \ge 4$. Then for any $2 \le r \le n$ we have $s_{\iota,n-1} > (r+1)^2/r$. Furthermore, for any $k\in {\mathbb Z}_{\ge 1}$ we have $s_{\iota,k} > (r+1)/r$. Combining these two inequalities, we obtain: \begin{equation*} r \left(\frac{r+1}{r}\right)^r \ < \ s_{\iota,n-r+1}\cdots s_{\iota,n-1}. \end{equation*} Moreover, $t_{\iota,n-1}^2 < t_{\iota,n}$ always holds. Now by the induction hypothesis the statement of the Lemma is true for $(\iota, n-1, r-1)$, i.e.\ $r^{r-1}\, t_{\iota,n-r+1}^{r} \le 2\, t_{\iota,n-1}^2$ holds.
Together with the previous considerations, we obtain: \begin{equation*} (r+1)^r\, t_{\iota,n-r+1}^{r+1} \ \le \ 2\, t_{\iota,n-1}^2\, r\, \left( \frac{r+1}{r} \right)^r t_{\iota,n-r+1} \ < \ 2\, t_{\iota,n-1}^2\, t_{\iota,n} \ < \ 2\, t_{\iota,n}^2. \end{equation*} \end{proof} \begin{lemma}\label{lemma:yn-1=yn} Let $n \ge 3$ and let $y \in A_\iota^n$ be a tuple minimizing the product $y_1\cdots y_{n-1}$. Denote by $i_0 \in \{1,\dots,n\}$ the least index with $y_{i_0} = y_n$. Then the following hold: \begin{enumerate} \item $i_0 \le n-1$. \item For any $1 \le k \le i_0-2$ we have $y_k = 1/s_{\iota,k}$. \end{enumerate} \end{lemma} \begin{proof} We prove (i). Assume $y_{n-1} > y_n$. Choose $0 < \epsilon < (y_{n-1}-y_n)/2$. Then the tuple \begin{equation*} (\tilde{y}_1,\dots,\tilde{y}_n) \ = \ (y_1,\dots,y_{n-2},y_{n-1}-\epsilon,y_n+\epsilon) \end{equation*} is contained in $A_\iota^n$. We have $\tilde{y}_1 \cdots \tilde{y}_{n-1} < y_1 \cdots y_{n-1}$, contradicting the minimality of $y$. Thus $y_{n-1} = y_n$ holds. We prove (ii). For this we first show that $y_k > y_{k+1}$ and $y_1\cdots y_k = \iota(y_{k+1} + \dots + y_{n})$ hold for any $1 \le k \le i_0 - 2$. Assume $y_k = y_{k+1}$. Then we can find $1 \le i \le k < j < i_0$ with $y_{i-1} > y_i = \dots = y_k = \dots = y_j > y_{j+1}$. Here $y_1\cdots y_k < \iota(y_{k+1} + \dots + y_n)$ holds, since otherwise we would have $0 = y_k(\iota - y_1\cdots y_{k-1}) + \iota(y_{k+2} + \dots + y_n)$, where the right hand side is positive. We can thus find $\epsilon > 0$ such that the tuple \begin{equation*} (\tilde{y}_1,\dots,\tilde{y}_n) \ = \ (y_1,\dots,y_{i-1},y_i+\epsilon,y_{i+1},\dots,y_{j-1},y_j-\epsilon,y_{j+1},\dots,y_n) \end{equation*} is contained in $A_\iota^n$. For the product of the first $n-1$ entries we have \begin{equation*} \tilde{y}_1 \cdots \tilde{y}_{n-1} \ = \ y_1\cdots y_{n-1} \left( 1 - \frac{\epsilon^2}{y_i y_j} \right) \ < \ y_1\cdots y_{n-1}, \end{equation*} contradicting the minimality of $y$.
Hence $y_k > y_{k+1}$ holds for $k = 1,\dots,i_0-2$. Now assume that $y_1\cdots y_k < \iota(y_{k+1} + \dots + y_n)$ holds. Again, we find $\epsilon > 0$ such that the tuple \begin{equation*} (\tilde{y}_1,\dots,\tilde{y}_n) \ = \ (y_1,\dots,y_{k-1},y_k+\epsilon,y_{k+1}-\epsilon,y_{k+2},\dots,y_n) \end{equation*} is contained in $A_\iota^n$, leading to the same contradiction as before. Hence $y_1\cdots y_k = \iota(y_{k+1} + \dots + y_n)$ holds for $k = 1,\dots,i_0-2$. Using these identities we can compute $y_k$. We have $y_1 = \iota(y_2 + \dots + y_n) = 1 - \iota y_1$. Solving this for $y_1$ we obtain $y_1 = 1/(\iota+1) = 1/s_{\iota,1}$. Proceeding in this way we obtain $y_k = 1/s_{\iota,k}$ for all $1 \le k \le i_0-2$. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:izku}] Let $y \in A_\iota^n$ be a tuple minimizing the product $y_1\cdots y_{n-1}$. By Lemma \ref{lemma:ufp_in_A} the tuple of reciprocals of the enlarged sylvester partition ${\rm syl}_{\iota,n}$ is contained in $A_\iota^n$. Hence \begin{equation*} y_1\cdots y_{n-1} \ \le \ \frac{1}{s_{\iota,1}} \cdots \frac{1}{s_{\iota,n-2}} \cdot \frac{1}{2t_{\iota,n-1}} \ = \ \frac{\iota}{2\,t_{\iota,n-1}^2} \end{equation*} holds. Let $i_0 \in \{1,\dots,n\}$ be the least index with $y_{i_0} = y_{n}$. By Lemma \ref{lemma:yn-1=yn} we have $i_0 \le n-1$. Set $r := n - i_0$. We distinguish three cases. \noindent\emph{Case 1.} Assume $i_0 = 1$. Then $r = n-1$ and $y_k = 1/(\iota n)$ holds for all $k=1,\dots,n$. We obtain \begin{equation*} \frac{\iota}{2\,t_{\iota,n-1}^2} \ \ge \ y_1 \cdots y_{n-1} \ = \ \frac{1}{(\iota n)^{n-1}} \ = \ \frac{1}{(r+1)^r t_{\iota,n-r}^r}. \end{equation*} Comparing this to Lemma \ref{lemma:t_n-ineq} for the case $r = n-1$, we see that this is only possible for $(\iota,n,r) = (2,3,2)$ and $(y_1, y_2, y_3) = (1/6,1/6,1/6)$ and in this case equality holds. \noindent\emph{Case 2.} Assume $i_0 = 2$. Then $r = n-2$ and $y_1 > y_2 = \dots = y_n$ holds. By (A2) we obtain $y_1 = 1/\iota - (n-1)y_n$.
Using this identity, together with (A3), we obtain an interval of possible values for $y_n$. On this interval we define a function $f$ by \begin{equation*} f(y_n) \ := \ y_1 \cdots y_{n-1} \ = \ \left(\frac{1}{\iota} - (n-1)y_n\right) y_n^{n-2}, \qquad y_n \ \in \ \left[ \frac{1}{(r+1)t_{\iota,n-r}}, \frac{1}{\iota n} \right). \end{equation*} The function $f$ is monotone increasing up to some point and then monotone decreasing, so it attains its minimum at the boundary of the interval. We obtain \begin{equation*} \qquad \frac{\iota}{2\,t_{\iota,n-1}^2} \ \ge \ y_1 \cdots y_{n-1} \ = \ f(y_n) \ \ge \ \min\left( \frac{\iota}{(r+1)^r t_{\iota,n-r}^{r+1}}, \frac{1}{(\iota n)^{n-1}} \right). \end{equation*} Comparing this to Lemma \ref{lemma:t_n-ineq} for the cases $r = n-2$ and $r = n-1$, where the second value belongs to the excluded right endpoint of the interval, this is only possible for $(\iota,n) = (1,4)$ and $(y_1,y_2,y_3,y_4) = (1/2,1/6,1/6,1/6)$, or $n = 3$ and $(1/y_1,1/y_2,1/y_3) = {\rm syl}_{\iota,3}$ and in these cases equality holds. \noindent\emph{Case 3.} Assume $i_0 \ge 3$. Since $y_{n-1} = y_{n}$ holds, this case only appears for $n \ge 4$. We have $1 \le r \le n-3$. By Lemma \ref{lemma:yn-1=yn} we have $y_k = 1/s_{\iota,k}$ for all $1 \le k \le i_0-2$. Similar to the second case we use (A2) and (A3) to express $y_{i_0-1}$ in terms of $y_n$ and determine an interval of possible values for $y_n$: \begin{equation*} \qquad\qquad y_{i_0-1} \ = \ \frac{1}{t_{\iota,n-r-1}} - (r+1)\,y_n, \qquad y_n \ \in \ \left[ \frac{1}{(r+1)t_{\iota,n-r}}, \frac{1}{(r+2) t_{\iota,n-r-1}} \right). \end{equation*} Again, we define the function $f(y_n) := y_1 \cdots y_{n-1}$ on that interval. It is monotone increasing up to some point and then it is monotone decreasing, so it attains its minimum at the boundary. We obtain: \begin{equation*} \frac{\iota}{2\,t_{\iota,n-1}^2} \ \ge \ y_1 \cdots y_{n-1} \ \ge \ \min\left( \frac{\iota}{(r+1)^r t_{\iota,n-r}^{r+1}}, \frac{\iota}{(r+2)^{r+1} t_{\iota,n-r-1}^{r+2}} \right).
\end{equation*} Comparing this to Lemma \ref{lemma:t_n-ineq} for $1 \le r \le n-3$, we see that this is only possible for $r = 1$ and $y_n = 1/(2 t_{\iota,n-1})$. Hence $(1/y_1,\dots,1/y_n) = {\rm syl}_{\iota,n}$ and in this case equality holds. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:uf_ineq}] Let $A = (a_1,\dots,a_n)$ be a uf-partition of $\iota$ with $a_1 \le \dots \le a_n$. The first inequality is due to the fact that $a_n$ divides ${\rm lcm}(a_1,\dots,a_n)$. By Lemma \ref{lemma:ufp_in_A} the tuple $x = (1/a_1,\dots,1/a_n)$ is contained in $A_\iota^n$. The second inequality and the assertions thereafter now follow immediately from Lemma \ref{lemma:izku}. \end{proof} \section{Proof of the main result}\label{section:proof} We state and prove the main result of the article. \begin{definition} For any $d \ge 2$ and any $\iota \in {\mathbb Z}_{\ge 1}$ we denote by $Q_{\iota,d}$ the well-formed weight system \begin{equation*} Q_{\iota,d} \ := \ Q({\rm syl}_{\iota,d+1}) \ = \ \left( \frac{2 t_{\iota,d}}{s_{\iota,1}}, \dots , \frac{2 t_{\iota,d}}{s_{\iota,d-1}}, 1, 1\right), \end{equation*} where $t_{\iota,d}$ and $s_{\iota,k}$, $k = 1,\dots,d-1$, are as in Definition \ref{def:sylseq}. \end{definition} \begin{theorem}\label{thm:main2} The anticanonical degree of any $d$-dimensional fake weighted projective space $X$ of Gorenstein index $\iota$ is bounded according to the following table.
\begin{longtable}{c|c|cc|cc|c} $d$ & $1$ & $2$ & $2$ & $3$ & $3$ & $\ge 4$ \\[2pt]\hline &&&&&\\[-9pt] $\iota$ & $\ge 1$ & $1$ & $\ge\!2$ & $1$ & $\ge\!2$ & $\ge 1$ \\[2pt]\hline &&&&&\\[-10pt] $\begin{array}{c} \text{bound on}\\ (-\mathcal{K}_X)^d \end{array}$ & $2$ & $9$ & $\frac{2(\iota+1)^2}{\iota}$ & $72$ & $\frac{2t_{\iota,3}^2}{\iota^4}$ & $\frac{2t_{\iota,d}^2}{\iota^{d+1}}$ \\[8pt]\hline &&&&&\\[-10pt] $\begin{array}{c} \text{attained}\\ \text{exactly by} \end{array}$ & ${\mathbb P}^1$ & ${\mathbb P}^2$ & ${\mathbb P}(2\iota,1,1)$ & $\begin{array}{l} {\mathbb P}(3,1,1,1), \\ {\mathbb P}(6,4,1,1) \end{array}$ & ${\mathbb P}(Q_{\iota,3})$ & ${\mathbb P}(Q_{\iota,d})$ \end{longtable} \noindent Equality on the degree holds if and only if $X$ is isomorphic to one of the weighted projective spaces in the last row of the table. \end{theorem} \begin{proof} Let $X$ be a $d$-dimensional fake weighted projective space of Gorenstein index $\iota$. Let $P \subseteq N_{\mathbb R}$ be a $d$-dimensional lattice simplex with $X(P) \cong X$. Then $P$ has index $\iota$. Let $A := A(P) = (a_0,\dots,a_d)$ be the uf-partition of $\iota$ associated to $P$. We may assume $a_0 \le \dots \le a_d$. By Lemma \ref{lemma:degvol} and Proposition \ref{prop:uf-vol} we have \begin{equation*} (-\mathcal{K}_X)^d \ = \ {\rm Vol}(P^*) \ = \ \frac{1}{\iota^d} {\rm Vol}(\iota P^*) \ \le \ \frac{1}{\iota^d} \frac{a_0\cdots a_d}{{\rm lcm}(a_0,\dots,a_d)}. \end{equation*} For $d = 1$ there is only one fake weighted projective space, namely ${\mathbb P}^1$, which has anticanonical degree $-\mathcal{K} = 2$. Let $d \ge 2$. In the case $\iota = 1$ and $d = 2$ the right hand side of the inequality is bounded from above by $9$, and ${\mathbb P}^2$ is the only Gorenstein fake weighted projective plane whose degree attains that value, see~\cite[Ex.~4.7]{Ni}.
If $(\iota,d) \ne (1,2)$, then Proposition \ref{prop:uf_ineq} provides the upper bound \begin{equation*} (-\mathcal{K}_X)^d \ \le \ \frac{1}{\iota^d} \frac{a_0\cdots a_d}{{\rm lcm}(a_0,\dots,a_d)} \ \le \ \frac{2 t_{\iota,d}^2}{\iota^{d+1}}. \end{equation*} Equality in the first case holds if and only if $X$ is a weighted projective space, see Corollary \ref{cor:gidegwpsfwps}. By Proposition \ref{prop:uf_ineq} equality in the second case holds if and only if one of the following holds: \begin{enumerate} \item $(\iota,d) = (2,2)$ and $A = (6,6,6)$. \item $(\iota,d) = (1,3)$ and $A = (2,6,6,6)$. \item $A = {\rm syl}_{\iota,d+1}$. \end{enumerate} Note that the uf-partition in (i) is not reduced. In particular, there is no weighted projective plane $X(P)$ of Gorenstein index $2$ with $A(P) = (6,6,6)$. The uf-partitions in (ii) and (iii) are reduced and well-formed. By Theorem \ref{thm:wpsfwps} and Proposition \ref{prop:ws-ufp} the uf-partition $A = (2,6,6,6)$ corresponds to the three-dimensional Gorenstein weighted projective space $X = {\mathbb P}(3,1,1,1)$ and the uf-partition $A = {\rm syl}_{\iota,d+1}$ corresponds to the $d$-dimensional weighted projective space $X = {\mathbb P}(Q_{\iota,d})$. \end{proof} \begin{bibdiv} \begin{biblist} \bib{Ba94}{article}{ title={Dual polyhedra and mirror symmetry for Calabi-Yau hypersurfaces in toric varieties}, author={Batyrev, V. V.}, journal={J. Algebraic Geom.}, volume={3}, pages={493--535}, year={1994} } \bib{BoBo}{article}{ title={Singular toric Fano varieties}, author={Borisov, A. A.}, author={Borisov, L. A.}, journal={Mat. Sb.}, volume={183}, number={2}, pages={134--141}, year={1992} } \bib{Co}{article}{ author={Conrads, H.}, title={Weighted projective spaces and reflexive simplices}, journal={Manuscripta Math.}, volume={107}, number={2}, pages={215--227}, year={2002}, } \bib{CoLiSch}{book}{ author={Cox, D. A.}, author={Little, J. B.}, author={Schenck, H. 
K.}, title={Toric varieties}, series={Graduate Studies in Mathematics}, volume={124}, publisher={American Mathematical Society, Providence, RI}, date={2011}, pages={xxiv+841}, } \bib{Fu}{book}{ author={Fulton, W.}, title={Introduction to toric varieties}, series={Annals of Mathematics Studies}, volume={131}, note={The William H. Roever Lectures in Geometry}, publisher={Princeton University Press, Princeton, NJ}, date={1993}, pages={xii+157}, isbn={0-691-00049-2}, } \bib{IzKu}{article}{ author={Izhboldin, O.}, author={Kurliandchik, L.}, title={Unit fractions}, journal={Translations of the American Mathematical Society, Series 2}, volume={166}, pages={193--200}, year={1995}, publisher={American Mathematical Society, Providence, RI} } \bib{Ka}{article}{ author={Kasprzyk, A. M.}, title={Bounds on fake weighted projective space}, journal={Kodai Mathematical Journal}, volume={32}, number={2}, pages={197--208}, year={2009}, publisher={Department of Mathematics, Tokyo Institute of Technology} } \bib{Ni}{article}{ author={Nill, B.}, title={Volume and lattice points of reflexive simplices}, journal={Discrete \& Computational Geometry}, volume={37}, number={2}, pages={301--320}, year={2007}, publisher={Springer} } \end{biblist} \end{bibdiv} \end{document}
In Situ Synthesis and Photocatalytic Properties of Titanium Dioxide Nanoparticles on Cotton Fabrics Zhanxia Bao1, Shuhua Wang1, Xiaoying Yu1, Yindong Gao1 & Zeling Wen1 Water, Air, & Soil Pollution volume 230, Article number: 169 (2019) In this paper, nanosized titanium dioxide catalysts for degrading dye wastewater were synthesized in situ on the surface of cotton fabrics, using tetrabutyl titanate as the precursor. The morphology and structure of the prepared catalysts were characterized by scanning electron microscopy, energy-dispersive spectrometry, and X-ray diffraction. The characterization results showed that anatase nanosized titanium dioxide was successfully synthesized in situ on the cotton fabrics and had excellent dispersibility. Subsequently, the effects of irradiation time, catalyst dosage, dye concentration, initial pH value of the dye, hydrogen peroxide dosage, and dye type on the dye degradation rate were investigated through photocatalytic performance tests. The test results indicated that the degradation rates of methylene blue, methyl orange, and rhodamine B were 90.4%, 81.4%, and 58.3%, respectively, at a catalyst dosage of 4.8 g/L, an initial dye concentration of 10 mg/L, a pH of 7, and a hydrogen peroxide dosage of 0.24 mol/L, after 4 h of UV irradiation. As a semiconductor material with efficient photocatalytic activity, nanosized titanium dioxide (nano-TiO2) possesses nontoxicity, high chemical stability, and excellent photocatalytic activity (Zhou et al. 2017; Goudarzi and Shahabi-Ghahfarrokhi 2018). Due to these attractive properties, nano-TiO2 can be widely applied in a variety of fields such as sterilization, air purification, and water treatment (Wang et al. 2015; Zhang et al. 2017). However, nano-TiO2 also has some disadvantages. First of all, its tendency to agglomerate greatly limits its performance.
Because pure nano-TiO2 particles have a large specific surface area and high surface energy, they form a thermodynamically unstable system: the particles cluster together under van der Waals and Coulomb forces, so that the nano-TiO2 agglomerates and loses its characteristic physical properties and functions in application. Therefore, it is imperative to prepare highly dispersed nano-TiO2 to maintain its advantages in photocatalytic performance. Secondly, nano-TiO2 particles in aqueous systems are easily lost, difficult to recover, and costly to recycle (Liu et al. 2018). Therefore, it is necessary to find a suitable carrier to immobilize nano-TiO2 for recycling. In order to solve the above problems, many scholars have extensively studied supported nano-TiO2 photocatalysts. At present, the carrier materials commonly used to load nano-TiO2 include silicas (Sosnov et al. 2017), polymers (Vukoje et al. 2016), glasses (Yusoff et al. 2016), and carbon materials (Das et al. 2018; Xu et al. 2016a, b; Liang et al. 2018). All these materials are characterized by good stability, high strength, large specific surface area, and firm loading. Because of their large surface area, interwoven yarn network structure, inherent strength, and flexibility, cotton fabrics (CFs) can provide a good bearing platform for nano-TiO2, which is conducive to the formation of highly dispersed TiO2 and thus maximizes its dye degradation performance. At the same time, TiO2 also offers a way to impart multifunctional properties to the CFs (Mishra and Butola 2017; Giesz et al. 2016). The annual output of waste CFs is huge, and if they are not used reasonably, they will not only pollute the environment but also greatly waste resources (Pensupa et al. 2017). Therefore, using CFs as the carrier of TiO2 not only draws on a widely available, low-cost material but also protects the environment and makes effective use of resources (Ma et al. 2018).
As reported by Giesz et al., anatase TiO2 films were successfully prepared on cotton and viscose fabric surfaces by a sol-gel process combined with microwave treatment (Giesz et al. 2016). In this work, nano-TiO2 synthesized in situ and loaded on CFs was prepared by a hydrothermal method and low-temperature calcination, with cheap and readily available CFs as the carrier and tetrabutyl titanate (TBT) as the raw material. The loaded nano-TiO2 had the anatase crystal form, high dispersion, and superior catalytic activity. Raw Materials and Reagents CFs were used as the substrate for loading nano-TiO2. Nano-TiO2 was prepared from TBT as the raw material. The dyes methylene blue (MB), methyl orange (MO), and rhodamine B (RhB) are all of analytical reagent (AR) grade and are commercially available. Preparation of Loaded-TiO2 Cotton Fabrics (TiO2/CFs) According to the analysis of this experiment, many factors affect the photocatalytic degradation rate, such as hydrolysis temperature, hydrolysis time, hydrothermal temperature, hydrothermal time, calcination temperature, calcination time, dye type, dye concentration, solution pH value, illumination time, illumination intensity, and hydrogen peroxide (H2O2) dosage. After comprehensive consideration, the hydrolysis temperature, hydrothermal temperature, hydrothermal time, and calcination time were chosen as the experimental factors for the preparation experiments and were marked as A, B, C, and D, respectively. Three-level, four-factor orthogonal experiments were carried out. The factors and levels are shown in Table 1. Table 1 Factors and levels of orthogonal design The specific preparation process was as follows: 1.6 g of CFs was immersed in 8 mL of TBT for 24 h at room temperature; the CFs were then taken out and as much liquid as possible was squeezed from their surface. Then, the TBT-loaded CFs were quickly immersed into 20 mL of deionized water to hydrolyze for 30 min.
Subsequently, the fabrics were transferred to a polytetrafluoroethylene autoclave for hydrothermal treatment. After a certain period of time, the products were poured into a beaker, washed three times with deionized water (to remove unloaded or loosely loaded TiO2 particles from the CFs surface), and dried at 60 °C. Finally, the products were calcined at 200 °C and cooled to obtain TiO2/CFs. A schematic of the preparation process is shown in Fig. 1. The preparation of TiO2/CFs Photocatalytic Degradation of MB, MO, and RhB under UV Irradiation MB, MO, and RhB were selected as model chemicals to evaluate the photocatalytic performance of TiO2/CFs. About 0.30 g of TiO2/CFs was placed separately in 60 mL of MB, MO, or RhB solution (10 mg/L). Before irradiation, these solutions were stirred (500 r/min) in the dark for 30 min to achieve adsorption equilibrium (Wu and Long 2011). Then, they were exposed to UV radiation at a wavelength of 365 nm (UV-365) under continued vigorous stirring (500 r/min). The MB, MO, and RhB solutions were sampled at regular intervals, and the absorbances were measured with a 721 visible spectrophotometer at the maximum absorption wavelengths of 664 nm (Phaltane et al. 2017), 464 nm (John et al. 2017), and 552 nm (Cai et al. 2018), respectively. Before measurement, the suspensions were filtered through an organic micropore filter (pore size 0.45 μm). Taking the MB solution as an example, since the MB concentration C (mg/L) was proportional to the absorbance within a certain range, as shown in Fig. 2, the relation could be approximately expressed as (Eq. (1)): $$ A=0.12057\mathrm{C},{R}^2=0.99883. $$ Standard working curve of MB solution Therefore, the degradation rate D (%) could be calculated by replacing the concentration C with the absorbance A. Eventually, the degradation rate was calculated from the measured absorbance as follows (Eq.
(2)) (Zhang and Yang 2012): $$ D=\left[\left({A}_0-{A}_t\right)/{A}_0\right]\times 100\% $$ where A0 is the absorbance of the MB solution after the dark reaction for 30 min and At is the absorbance of the MB solution irradiated for time t. Orthogonal Test This experiment was a three-level, four-factor test in which only the effects of the four factors on the degradation rate were considered, ignoring interactions among the factors. Therefore, the L9 (3^4) orthogonal table was selected. The specific test conditions and results are shown in Table 2. The degradation rate of MB, recorded as DMB, served as the final evaluation index. Table 2 Specific test conditions and results As shown in Table 2, the mean values (K) of each factor were ordered as follows: KA2 > KA1 > KA3, KB2 > KB1 > KB3, KC3 > KC1 > KC2, and KD1 > KD3 > KD2. Therefore, A2, B2, C3, and D1 could be taken as the optimal levels of factors A, B, C, and D, respectively. The combination A2B2C3D1 was the optimal level combination in this experiment. This meant that the optimal preparation conditions were a hydrolysis temperature of 80 °C, a hydrothermal temperature of 150 °C, a hydrothermal time of 10 h, and a calcination time of 2 h. The range (R) calculation demonstrated that RB > RD > RA > RC. Hence, for MB, the greatest influence came from the hydrothermal temperature, followed by the calcination time and hydrolysis temperature, while the hydrothermal time had the least influence. Characterization of Morphology and Structure Scanning Electron Microscopy and Energy-Dispersive Spectrometer Scanning electron microscopy (SEM) was used to characterize the surface morphology of the samples and the distribution of TiO2 particles on the CFs, as shown in Fig. 3. The chemical composition of TiO2/CFs was characterized by an energy-dispersive spectrometer (EDS), as shown in Fig. 4. SEM images of the two samples. a Pristine CFs.
b TiO2/CFs EDS energy spectrum of TiO2/CFs Figure 3a is a SEM image of the original CFs, from which it can be observed that the surface carried a small amount of natural impurities and warp (Yang et al. 2019); it therefore had a certain roughness, which was beneficial to the loading of TiO2. Figure 3b is a SEM image of TiO2/CFs, from which it can be observed that the CFs still maintained their original shape and that TiO2 was uniformly and firmly distributed on the surface of the CFs. This was because there were many free hydroxyl groups on the surface of the CFs, which could react with the hydroxyl groups on the surface of TiO2, so the bond between them was strong. The EDS energy spectrum analysis showed that the prepared TiO2/CFs contained the elements C, Ti, and O, which further confirmed the in situ synthesis of TiO2 on the CFs. X-ray diffraction (XRD) was carried out to determine the crystal structure (brookite, anatase, or rutile) of the TiO2 particles on the CFs at 2θ from 10° to 70° with Cu Kα radiation, as shown in Fig. 5. Figure 5a shows the XRD pattern of the pristine CFs as a blank sample, in which only the characteristic peaks of the CFs are observed. Figure 5b shows the XRD pattern of the prepared TiO2/CFs, in which not only the characteristic diffraction peaks of the CFs at 15.6°, 16.8°, and 23.1° but also those of typical anatase TiO2 at 25.6°, 38.2°, 48.2°, 54.3°, and 62.2° are observed, corresponding to the (101), (112), (200), (211), and (204) crystal faces (Sun et al. 2000; Zhou et al. 2017). In addition, using the XRD pattern of Fig. 5b, the crystallite diameter (D) of the anatase TiO2 crystal was calculated from the (101) reflection according to the Scherrer formula (Eq. (3)) (Kinsinger et al. 2011; Sun et al. 2000): $$ D= k\lambda /\left(\upbeta \cos \theta \right) $$ where k is the shape constant (typically around 0.9) (Tang et al.
2009), λ is the wavelength of the Cu Kα radiation (λ = 0.154 nm), β is the full width at half maximum (FWHM), and θ is the diffraction angle. The calculated crystallite size was about 9.4 nm; the TiO2 particles were therefore nanosized. The XRD patterns of the samples. a Pristine CFs. b TiO2/CFs From the above, it can be concluded that the in situ synthesis of anatase nano-TiO2 particles on CFs can be achieved by the hydrothermal method and low-temperature (200 °C) calcination. Therefore, in order to exploit the photocatalytic properties of nano-TiO2, the prepared TiO2/CFs were used in the subsequent research on the treatment of dye-simulated wastewater. Photocatalytic Degradation Performance Effects of UV Irradiation Time on Photocatalytic Activity Figure 6 shows the effects of irradiation time on the photocatalytic performance of TiO2/CFs under the conditions of UV-365 irradiation, an MB solution concentration of 10 mg/L, and a TiO2/CFs dosage of 4.8 g/L. As can be seen from Fig. 6, the degradation rate of the MB solution by pristine CFs in 9 h under UV-365 irradiation was about 15%, which is essentially negligible in comparison with the much higher degradation rate of TiO2/CFs. For TiO2/CFs, the incremental degradation efficiency of the MB solution gradually decreased with increasing irradiation time, while the cumulative degradation rate gradually increased. This was because the intermediate products generated during MB degradation occupied the active sites of TiO2, and competition for degradation may have occurred between MB and the intermediate products (Rauf et al. 2011), resulting in a decrease in degradation efficiency. In particular, after 5 h, the degradation efficiency of the MB solution was only 1.6%. However, with the gradual reaction of the intermediate products and the continuous adsorption and degradation of the MB solution, the degradation rate showed an increasing trend. After 5 h, however, the degradation rate also tended to level off.
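The crystallite-size estimate from the Scherrer formula (Eq. (3)) is straightforward to reproduce numerically. Below is a minimal sketch; the FWHM of 0.87° for the (101) reflection is a hypothetical illustration, since the measured β is not quoted in the text.

```python
import math

def scherrer_size(k, wavelength_nm, fwhm_deg, two_theta_deg):
    """Scherrer crystallite size D = k*lambda / (beta*cos(theta))."""
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle theta = 2theta/2
    return k * wavelength_nm / (beta * math.cos(theta))

# Anatase (101) reflection at 2-theta = 25.6 deg, Cu K-alpha (0.154 nm),
# shape constant k = 0.9; the FWHM below is an assumed value.
d = scherrer_size(k=0.9, wavelength_nm=0.154, fwhm_deg=0.87, two_theta_deg=25.6)
```

With this assumed FWHM, the formula yields a size on the order of the reported ~9.4 nm, illustrating how sensitive the estimate is to the measured peak width.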
The effects of illumination time on degradation rate of MB solution under UV-365 Effects of TiO2/CFs Dosage on Photocatalytic Activity Figure 7 reveals the effects of TiO2/CFs dosage on the photocatalytic performance of TiO2/CFs under the conditions of UV-365 irradiation for 5 h and an MB solution concentration of 10 mg/L. When the dosage of TiO2/CFs was less than 4.8 g/L, the degradation rate of the MB solution increased with increasing TiO2/CFs. This was because more TiO2/CFs provided more active sites, so more dye could be adsorbed and degraded. However, when the dosage of TiO2/CFs exceeded 4.8 g/L, the degradation rate of the MB solution did not increase significantly with further addition of TiO2/CFs. This was because the presence of excess TiO2/CFs reduced the light transmittance, thus limiting the irradiated area of the TiO2/CFs. Therefore, for this experiment, the optimum dosage of TiO2/CFs was 4.8 g/L. The effects of TiO2/CFs dosage on degradation rate of MB solution Effects of Dye Concentration on Photocatalytic Activity Figure 8 displays the effects of MB concentration on the photocatalytic performance of TiO2/CFs under the conditions of UV-365 irradiation for 5 h and a TiO2/CFs dosage of 4.8 g/L. As can be seen from Fig. 8, the degradation rate of the MB solution decreased with increasing initial MB concentration. The reason was that when the MB concentration was relatively low, the number of dye molecules in the solution was relatively small, so relatively few dye molecules migrated to and accumulated at the TiO2/CFs surface. Meanwhile, the active sites on the surface of TiO2/CFs were relatively numerous, and the small amount of dye was not enough to engage the catalytic capacity of all active sites. When the MB concentration was 5 mg/L, the degradation rate of the MB solution by TiO2/CFs was the highest, reaching 98%.
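The degradation rates quoted throughout follow from Eqs. (1) and (2): the calibration A = 0.12057C lets absorbance stand in for concentration, so D = (A0 − At)/A0 × 100%. A minimal sketch with hypothetical absorbance readings (not the measured values):

```python
def concentration(absorbance, slope=0.12057):
    """Invert the standard working curve A = slope * C (C in mg/L), Eq. (1)."""
    return absorbance / slope

def degradation_rate(a0, at):
    """Degradation rate D (%) from initial and current absorbance, Eq. (2)."""
    return (a0 - at) / a0 * 100.0

# Hypothetical readings: A0 after 30 min dark adsorption (~10 mg/L MB),
# At after a period of UV-365 irradiation.
a0 = 1.206
at = 0.116
d = degradation_rate(a0, at)  # roughly 90%
c0 = concentration(a0)        # roughly 10 mg/L
```

Because D is a ratio of absorbances, the calibration slope cancels out; Eq. (1) is only needed when an absolute concentration is required.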
With increasing MB concentration, the amount of MB adsorbed on the TiO2/CFs surface gradually reached saturation, and the catalytic capacity also reached its maximum. In addition, intermediate products that were difficult to decompose accumulated around the TiO2/CFs for lack of time to react, hindering the photocatalytic reaction. Therefore, the prepared TiO2/CFs were suitable for the degradation of low-concentration dyes (5 mg/L, 10 mg/L, and 15 mg/L). The effects of dye concentration on degradation rate of MB solution Effects of pH Value on Photocatalytic Activity Figure 9 shows the effects of the initial pH value on the photocatalytic performance of TiO2/CFs under the conditions of UV-365 irradiation for 5 h, a TiO2/CFs dosage of 4.8 g/L, and an MB concentration of 10 mg/L. It can be seen that the degradation of MB by TiO2/CFs first increased and then decreased with increasing pH in the range of 4.0–11.0. The isoelectric point of TiO2 in water lies at about pH = 6.0 (Devipriya and Yesodharan 2010), meaning that the TiO2 surface is neutral there. At pH < 6.0, the TiO2 surface carried a net positive charge, as shown in Eq. (4) (Kaur et al. 2018). Because MB is a cationic dye, the presence of TiOH2+ was not conducive to the adsorption of MB by TiO2/CFs, so the degradation of MB by TiO2/CFs was markedly inhibited under acidic conditions. At pH > 6.0, the TiO2 surface carried a net negative charge, as shown in Eq. (5) (Kaur et al. 2018). This was also conducive to the migration of holes to the TiO2 surface and to the formation of strongly oxidizing ·OH by reaction with OH−, H2O, and other electron donors on the TiO2 surface. In addition, TiO2 was negatively charged in neutral and weakly alkaline solutions and electrostatically adsorbed MB cations (Azeez et al. 2018), thus greatly increasing the degradation of MB.
In particular, at pH = 7.0 the degradation rate of MB was the highest, up to 80.4%. $$ \mathrm{TiOH}+{\mathrm{H}}^{+}\to {{\mathrm{TiOH}}_2}^{+}\ \left(\mathrm{pH}<6.0\right) $$ $$ \mathrm{TiOH}+{\mathrm{OH}}^{-}\to {\mathrm{TiO}}^{-}+{\mathrm{H}}_2\mathrm{O}\ \left(\mathrm{pH}>6.0\right) $$ The effects of the initial pH value on degradation rate of MB solution However, when the solution pH exceeded 9.0, the degradation rate of the MB solution decreased significantly. To sum up, TiO2/CFs degraded MB more effectively under neutral or weakly alkaline conditions. Since the pH value of the MB solution itself was about 7, there was no need to adjust the pH value further. Effects of H2O2 Dosage on Photocatalytic Activity As a strong oxidizing reagent and electron scavenger, H2O2 is considered an important parameter in the photocatalytic degradation process, because its dosage is directly related to the amount of ·OH generated in the photocatalytic reaction. First, in order to demonstrate that H2O2 could promote the degradation of dyes by TiO2, three experiments were designed, as shown in Fig. 10. Curve (1) shows the degradation rate of the MB solution in the presence of TiO2/CFs alone. Curve (2) shows the degradation rate of the MB solution in the presence of H2O2 alone. The dashed line (1) + (2) is the sum of the degradation rates of the MB solution under conditions (1) and (2) alone. Curve (3) shows the degradation of the MB solution in the presence of both TiO2/CFs and H2O2. As can be seen from Fig. 10, the degradation rate clearly satisfied (3) > (1) + (2). It can therefore be concluded that H2O2 promoted the degradation of dyes by TiO2.
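The comparison underlying the three-experiment design reduces to an additivity test: synergy is indicated when the combined degradation rate (3) exceeds the sum of the separate contributions (1) + (2). A minimal sketch with hypothetical rates (the measured curves are not tabulated in the text):

```python
def is_synergistic(rate_catalyst_only, rate_h2o2_only, rate_combined):
    """True when the combined rate exceeds the sum of the individual rates,
    i.e. condition (3) > (1) + (2)."""
    return rate_combined > rate_catalyst_only + rate_h2o2_only

# Hypothetical degradation rates (%) at a fixed irradiation time.
synergy = is_synergistic(rate_catalyst_only=35.0,
                         rate_h2o2_only=20.0,
                         rate_combined=68.0)
```

A merely additive system would sit on the dashed (1) + (2) line; points above it, as in the reported data, indicate that H2O2 enhances the TiO2 pathway itself.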
The degradation rate of MB solution under different conditions The effects of H2O2 dosage on degradation rate of MB solution Subsequently, the effects of H2O2 dosage on the photocatalytic performance of TiO2/CFs were examined under the conditions of UV-365 irradiation for 2 h and 4 h, a TiO2/CFs dosage of 4.8 g/L, an MB concentration of 10 mg/L, and an initial pH value of 7, as shown in Fig. 11. According to the curve for 2 h of illumination (sealed), in the presence of H2O2 the degradation rate of the MB solution first increased and then leveled off with increasing H2O2 dosage; the degradation efficiency of the MB solution was highest when 0.24 mol/L H2O2 was added to 60 mL of MB solution. This was because when the dosage of added H2O2 was below 0.24 mol/L, H2O2 could capture electrons on the surface of TiO2 (Eq. (6)), thus reducing the probability of electron-hole recombination and improving hole utilization (Eq. (7) and Eq. (8)) (Akpan and Hameed 2009). In addition, H2O2 may be decomposed directly to generate ·OH under UV irradiation (Eq. (9)) (Navarro et al. 2005); ·OH is the strongest oxidant in aqueous-phase systems and can degrade dyes into harmless products (Eq. (10) and Eq. (11)) such as CO2 and H2O. When the dosage of added H2O2 exceeded 0.24 mol/L, the excess H2O2 underwent quenching reactions, i.e., H2O2 reacted with ·OH to form ·OOH (Eq. (12)), followed by the reaction in Eq. (13) (Fayazi et al. 2016), thus reducing the degradation efficiency.
$$ {\mathrm{H}}_2{\mathrm{O}}_2+{\mathrm{e}}^{-}\to {\mathrm{O}\mathrm{H}}^{-}+\cdotp \mathrm{OH} $$ $$ {\mathrm{h}}^{+}+{\mathrm{OH}}^{-}\to \cdotp \mathrm{OH} $$ $$ {\mathrm{h}}^{+}+{\mathrm{H}}_2\mathrm{O}\to \cdotp \mathrm{OH}+{\mathrm{H}}^{+} $$ $$ {\mathrm{H}}_2{\mathrm{O}}_2+\mathrm{h}\upnu \to 2\cdotp \mathrm{OH} $$ $$ \cdotp \mathrm{OH}+\mathrm{dye}\to \mathrm{dye}\ \mathrm{intermediate}\ \mathrm{products} $$ $$ \cdotp \mathrm{OH}+\mathrm{dye}\ \mathrm{intermediate}\ \mathrm{products}\to \mathrm{harmless}\ \mathrm{products} $$ $$ {\mathrm{H}}_2{\mathrm{O}}_2+\cdotp \mathrm{OH}\to {\mathrm{H}}_2\mathrm{O}+\cdotp \mathrm{OOH} $$ $$ \cdotp \mathrm{OOH}+\cdotp \mathrm{OH}\to {\mathrm{H}}_2\mathrm{O}+{\mathrm{O}}_2 $$ According to the curve for 4 h of illumination (unsealed), the degradation rate of the MB solution kept increasing markedly with increasing H2O2 dosage. This may be because removing the sealing film after 2 h of illumination admitted air and promoted the photocatalytic degradation. Therefore, in the subsequent work of this experiment, when studying the effect of UV-365 and H2O2 on other dyes, the H2O2 dosage was fixed at the optimal value of 0.24 mol/L. Degradation Effect of TiO2/CFs on Different Dyes Figure 12 presents the effects of dye type on the photocatalytic performance of TiO2/CFs under the conditions of UV-365 irradiation for 2 h and 4 h, a TiO2/CFs dosage of 4.8 g/L, a dye concentration of 10 mg/L, an initial pH value of 7, and an H2O2 dosage of 0.24 mol/L. The degradation rates of MB, MO, and RhB were 68.5%, 52.0%, and 23.5%, respectively, when the irradiation time was 2 h (sealed). The degradation rates increased by 20–30%, to 90.4%, 81.4%, and 58.3%, when the illumination time was extended to 4 h (unsealed). Therefore, the prepared TiO2/CFs not only had an excellent degradation effect on MB but also had a definite degradation effect on MO and RhB.
The effects of dye type on degradation rate Anatase TiO2 with uniform distribution and firm loading was synthesized in situ, with CFs as the carrier and TBT as the raw material, by the hydrothermal method and low-temperature calcination. The CFs were thereby durably endowed with the excellent photocatalytic performance of nano-TiO2. The photocatalytic properties of the prepared TiO2/CFs were tested by visible spectrophotometry. The results showed that TiO2/CFs had good degradation effects on MB, MO, and RhB. The degradation rates of MB, MO, and RhB were 90.4%, 81.4%, and 58.3%, respectively, at a TiO2/CFs dosage of 4.8 g/L, an initial dye concentration of 10 mg/L, a pH of 7, and an H2O2 dosage of 0.24 mol/L, after 4 h of UV-365 irradiation. Akpan, U. G., & Hameed, B. H. (2009). Parameters affecting the photocatalytic degradation of dyes using TiO2-based photocatalysts: a review. Journal of Hazardous Materials, 170, 520–529. Azeez, F., Al-Hetlani, E., Arafa, M., et al. (2018). The effect of surface charge on photocatalytic degradation of methylene blue dye using chargeable titania nanoparticles. Scientific Reports, 8, 7104. Cai, L. J., Li, Y. W., Li, Y. H., et al. (2018). Synthesis of zincphthalocyanine-based conjugated microporous polymers with rigid-linker as novel and green heterogeneous photocatalysts. Journal of Hazardous Materials, 348, 47–55. Das, D., Sabaraya, I. V., Zhu, T., et al. (2018). Aggregation behavior of multiwalled carbon nanotube-titanium dioxide nanohybrids: probing the part-whole question. Environmental Science & Technology, 52, 8233–8241. Devipriya, S. P., & Yesodharan, S. (2010). Photocatalytic degradation of phenol in water using TiO2 and ZnO. Journal of Environmental Biology, 31, 247–249. Fayazi, M., Taher, M. A., Afzali, D., et al. (2016). Enhanced Fenton-like degradation of methylene blue by magnetically activated carbon/hydrogen peroxide with hydroxylamine as Fenton enhancer. Journal of Molecular Liquids, 216, 781–787. Giesz, P., Celichowski, G., Puchowicz, D., et al.
The authors acknowledge the Scientific Research Fund of Taiyuan University of Technology for financial support (Project No. 20504020203) and Shanxi Provincial Foundation for Returned Scholars, China (Project No. 2017048). College of Textile Engineering, Taiyuan University of Technology, Taiyuan, 030600, People's Republic of China: Zhanxia Bao, Shuhua Wang, Xiaoying Yu, Yindong Gao & Zeling Wen. Correspondence to Shuhua Wang. In this work, anatase nanosized titanium dioxide composite materials with high degradation performance toward various types of dyes were prepared via an orthogonal design-oriented low-temperature calcination of waste cotton fabrics.
(1) The orthogonal experimental design proved efficient, fast, and economical for optimizing the experimental parameters. (2) The highly dispersed titanium dioxide endowed the cotton fabrics with striking dye-degradation performance. (3) Recycling and reusing waste fabrics is significant for relieving resource shortages, saving energy, and reducing emissions. Bao, Z., Wang, S., Yu, X. et al. In Situ Synthesis and Photocatalytic Properties of Titanium Dioxide Nanoparticles on Cotton Fabrics. Water Air Soil Pollut 230, 169 (2019). https://doi.org/10.1007/s11270-019-4219-5 Keywords: dye wastewater, tetrabutyl titanate, photocatalytic performance, degradation rate.
What is the sum of the integers that are both greater than 3 and less than 12? We want to evaluate the arithmetic series $4+5+\dots+11$. The sum of an arithmetic series is equal to the average of the first and last term, multiplied by the number of terms. The number of terms is $11 - 4 + 1 = 8$, so the sum is $(4 + 11)/2 \cdot 8 = \boxed{60}$.
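The arithmetic-series formula can be checked numerically; a quick sketch in Python:

```python
# Integers strictly between 3 and 12: 4 through 11.
terms = list(range(4, 12))
n = len(terms)                            # 11 - 4 + 1 = 8 terms
total = (terms[0] + terms[-1]) * n // 2   # average of first and last term, times count
assert total == sum(terms)                # agrees with the direct sum
print(total)  # → 60
```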
Cremona group In algebraic geometry, the Cremona group, introduced by Cremona (1863, 1865), is the group of birational automorphisms of the $n$-dimensional projective space over a field $k$. It is denoted by $Cr(\mathbb {P} ^{n}(k))$ or $Bir(\mathbb {P} ^{n}(k))$ or $Cr_{n}(k)$. The Cremona group is naturally identified with the automorphism group $\mathrm {Aut} _{k}(k(x_{1},...,x_{n}))$ of the field of the rational functions in $n$ indeterminates over $k$, or in other words a pure transcendental extension of $k$, with transcendence degree $n$. The projective general linear group of order $n+1$, of projective transformations, is contained in the Cremona group of order $n$. The two are equal only when $n=0$ or $n=1$, in which case both the numerator and the denominator of a transformation must be linear. The Cremona group in 2 dimensions In two dimensions, Max Noether and Castelnuovo showed that the complex Cremona group is generated by the standard quadratic transformation, along with $\mathrm {PGL} (3,k)$, though there was some controversy about whether their proofs were correct, and Gizatullin (1983) gave a complete set of relations for these generators. The structure of this group is still not well understood, though there has been a lot of work on finding elements or subgroups of it. • Cantat & Lamy (2010) showed that the Cremona group is not simple as an abstract group; • Blanc showed that it has no nontrivial normal subgroups that are also closed in a natural topology. • For the finite subgroups of the Cremona group see Dolgachev & Iskovskikh (2009). The Cremona group in higher dimensions There is little known about the structure of the Cremona group in three dimensions and higher though many elements of it have been described. Blanc (2010) showed that it is (linearly) connected, answering a question of Serre (2010). 
There is no easy analogue of the Noether–Castelnuovo theorem, as Hudson (1927) showed that the Cremona group in dimension at least 3 is not generated by its elements of degree bounded by any fixed integer. De Jonquières groups A De Jonquières group is a subgroup of a Cremona group of the following form. Pick a transcendence basis $x_{1},...,x_{n}$ for a field extension of $k$. Then a De Jonquières group is the subgroup of automorphisms of $k(x_{1},...,x_{n})$ mapping the subfield $k(x_{1},...,x_{r})$ into itself for some $r\leq n$. It has a normal subgroup given by the Cremona group of automorphisms of $k(x_{1},...,x_{n})$ over the field $k(x_{1},...,x_{r})$, and the quotient group is the Cremona group of $k(x_{1},...,x_{r})$ over the field $k$. It can also be regarded as the group of birational automorphisms of the fiber bundle $\mathbb {P} ^{r}\times \mathbb {P} ^{n-r}\to \mathbb {P} ^{r}$. When $n=2$ and $r=1$ the De Jonquières group is the group of Cremona transformations fixing a pencil of lines through a given point, and is the semidirect product of $\mathrm {PGL} _{2}(k)$ and $\mathrm {PGL} _{2}(k(t))$. References • Alberich-Carramiñana, Maria (2002), Geometry of the plane Cremona maps, Lecture Notes in Mathematics, vol. 1769, Berlin, New York: Springer-Verlag, doi:10.1007/b82933, ISBN 978-3-540-42816-9, MR 1874328 • Blanc, Jérémy (2010), "Groupes de Cremona, connexité et simplicité", Annales Scientifiques de l'École Normale Supérieure, Série 4, 43 (2): 357–364, doi:10.24033/asens.2123, ISSN 0012-9593, MR 2662668 • Cantat, Serge; Lamy, Stéphane (2010). "Normal subgroups in the Cremona group". Acta Mathematica. 210 (2013): 31–94. arXiv:1007.0895. Bibcode:2010arXiv1007.0895C. doi:10.1007/s11511-013-0090-1. S2CID 55261367. • Coolidge, Julian Lowell (1931), A treatise on algebraic plane curves, Oxford University Press, ISBN 978-0-486-49576-7, MR 0120551 • Cremona, L.
(1863), "Sulla trasformazioni geometiche delle figure piane", Giornale di Matematiche di Battaglini, 1: 305–311 • Cremona, L. (1865), "Sulla trasformazioni geometiche delle figure piane", Giornale di Matematiche di Battaglini, 3: 269–280, 363–376 • Demazure, Michel (1970), "Sous-groupes algébriques de rang maximum du groupe de Cremona", Annales Scientifiques de l'École Normale Supérieure, Série 4, 3 (4): 507–588, doi:10.24033/asens.1201, ISSN 0012-9593, MR 0284446 • Dolgachev, Igor V. (2012), Classical Algebraic Geometry: a modern view (PDF), Cambridge University Press, ISBN 978-1-107-01765-8, archived from the original (PDF) on 2012-03-11, retrieved 2012-04-18 • Dolgachev, Igor V.; Iskovskikh, Vasily A. (2009), "Finite subgroups of the plane Cremona group", Algebra, arithmetic, and geometry: in honor of Yu. I. Manin. Vol. I, Progr. Math., vol. 269, Boston, MA: Birkhäuser Boston, pp. 443–548, arXiv:math/0610595, doi:10.1007/978-0-8176-4745-2_11, ISBN 978-0-8176-4744-5, MR 2641179, S2CID 2188718 • Gizatullin, M. Kh. (1983), "Defining relations for the Cremona group of the plane", Mathematics of the USSR-Izvestiya, 21 (2): 211–268, Bibcode:1983IzMat..21..211G, doi:10.1070/IM1983v021n02ABEH001789, ISSN 0373-2436, MR 0675525 • Godeaux, Lucien (1927), Les transformations birationelles du plan, Mémorial des sciences mathématiques, vol. 22, Gauthier-Villars et Cie, JFM 53.0595.02 • "Cremona group", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • "Cremona transformation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Hudson, Hilda Phoebe (1927), Cremona transformations in plane and space, Cambridge University Press, ISBN 978-0-521-35882-8, Reprinted 2012 • Semple, J. G.; Roth, L. 
(1985), Introduction to algebraic geometry, Oxford Science Publications, The Clarendon Press Oxford University Press, ISBN 978-0-19-853363-4, MR 0814690 • Serre, Jean-Pierre (2009), "A Minkowski-style bound for the orders of the finite subgroups of the Cremona group of rank 2 over an arbitrary field", Moscow Mathematical Journal, 9 (1): 193–208, doi:10.17323/1609-4514-2009-9-1-183-198, ISSN 1609-3321, MR 2567402, S2CID 13589478 • Serre, Jean-Pierre (2010), "Le groupe de Cremona et ses sous-groupes finis" (PDF), Astérisque, Seminaire Bourbaki 1000 (332): 75–100, ISBN 978-2-85629-291-4, ISSN 0303-1179, MR 2648675
Leveraging Bayesian networks and information theory to learn risk factors for breast cancer metastasis Xia Jiang1, Alan Wells2,3, Adam Brufsky3,4, Darshan Shetty1, Kahmil Shajihan1 & Richard E. Neapolitan5 Even though we have established a few risk factors for metastatic breast cancer (MBC) through epidemiologic studies, these risk factors have not proven to be effective in predicting an individual's risk of developing metastasis. Therefore, identifying critical risk factors for MBC continues to be a major research imperative, and one which can lead to advances in breast cancer clinical care. The objective of this research is to leverage Bayesian Networks (BN) and information theory to identify key risk factors for breast cancer metastasis from data. We develop the Markov Blanket and Interactive risk factor Learner (MBIL) algorithm, which learns single and interactive risk factors having a direct influence on a patient's outcome. We evaluate the effectiveness of MBIL using simulated datasets, and compare MBIL with the BN learning algorithms Fast Greedy Search (FGS), PC algorithm (PC), and CPC algorithm (CPC). We apply MBIL to learn risk factors for 5 year breast cancer metastasis using a clinical dataset we curated. We evaluate the learned risk factors by consulting with breast cancer experts and literature. We further evaluate the effectiveness of MBIL at learning risk factors for breast cancer metastasis by comparing it to the BN learning algorithms Necessary Path Condition (NPC) and Greedy Equivalent Search (GES). The averages of the Jaccard index for the simulated datasets containing 2000 records were 0.705, 0.272, 0.228, and 0.147 for MBIL, FGS, PC, and CPC respectively. MBIL, NPC, and GES all learned that grade and lymph_nodes_positive are direct risk factors for 5 year metastasis. Only MBIL and NPC found that surgical_margins is a direct risk factor. Only NPC found that invasive is a direct risk factor. 
MBIL learned that HER2 and ER interact to directly affect 5 year metastasis. Neither GES nor NPC learned that HER2 and ER are direct risk factors. The results involving simulated datasets indicated that MBIL can learn direct risk factors substantially better than standard Bayesian network learning algorithms. An application of MBIL to a real breast cancer dataset identified both single and interactive risk factors that directly influence breast cancer metastasis, which can be investigated further. Breast cancer is one of the leading causes of cancer death in US women [1, 2]. It is estimated that 40,920 US women will die from breast cancer in 2018 [3]. Breast cancer is also one of the main causes of cancer related death in women globally, and it is estimated that without major changes in prevention or treatment, 846,241 women will die from breast cancer worldwide in 2035 [4]. Women do not die of breast cancer confined to the breast or draining lymph nodes; rather, they die mainly due to metastasis, a condition in which cancer spreads to other vital organs such as the lung and brain. Metastatic breast cancer (MBC) is the cause of over 90% of breast cancer related deaths [5, 6] and remains a largely incurable disease. Although most newly diagnosed breast cancer cases are not metastatic, all patients are at risk of developing metastatic cancer in the future, even if they are free of cancer for years after the initial treatment. Being able to identify the key direct risk factors for MBC (i.e. risk factors that affect MBC directly rather than just being associative) is important because 1) we can target therapy towards those risk factors; and 2) we can use the risk factors to more effectively predict for each individual patient the likelihood of metastatic occurrence, and this prediction can guide treatment plans tailored to a specific patient to prevent metastasis and to help avoid under- or over-treatment [7]. 
Researchers have established a few risk factors for MBC through epidemiologic studies, but these risk factors have not proven to be effective in predicting an individual's risk of developing metastasis, and it has not been shown they have a direct influence on MBC. Therefore, identifying direct risk factors for MBC continues to be a major research imperative. Making further progress in this effort will be a key advancement in breast cancer clinical care, and can significantly improve patient outcomes, and reduce healthcare costs at large. It is commonplace for medical researchers to obtain retrospective data on patient features and outcomes, and then to analyze the association of possible risk factors with those outcomes. Relative to breast cancer, Curtis et al. [8] developed the METABRIC dataset, which contains clinical and genomic data, along with breast cancer death results, on 1992 breast cancer patients. Zeng et al. [9] analyzed these data and found that P53_mutation_status, HER2_status, PR_category, ER_category, stage, lymph_nodes_positive, grade, tumor_size, age, and chemotherapy all are highly correlated with 5 year breast cancer death (p-value < 10− 6). A difficulty with simply identifying risk factors through such correlation analyses is that a risk factor can be correlated with a disease without having any direct or even indirect effect on the disease. For example, menopause status and age might both be associated with MBC. However, menopause status could be a direct risk factor, while age is associated with MBC only because it is correlated with menopause status. A second difficulty with these correlation analyses is that they usually do not investigate or identify interacting risk factors. An interaction is the phenomenon where the predictive effect of one variable is dependent on the presence of one or more other variables. 
One example of an interaction is genetic epistasis, which describes the situation where genetic mutations have different effects in combination than they do individually. These discrete interactions typically cannot be represented in a regression equation. Indeed they often have little or no marginal effects. So methods have been developed for learning such interactions without making strong mathematical assumptions. These matters are discussed in detail in [9]. Such discrete interactions can also exist in other contexts such as between mRNA and microRNA [10,11,12], among signal transduction pathways [13], and among risk factors for disease. A pure (no marginal effects) or near-pure interaction is often not identifiable unless we purposely search for it using a special score measurement. Although Bayesian network algorithms can discover direct risk factors (features that directly affect the outcome of interest), they cannot in general discover interactions with little marginal effects because they assume the composition property, which states that if a variable X and a set of variables S are conditionally independent given a set of variables T, then X and each variable Y in S are conditionally independent given T, and, conversely, if a variable X and a set of variables S are not independent conditional on T, then there exists a variable Y in S such that X and Y are not independent conditional on T [14]. When T is the empty set, this property simply states that if X and S are not independent then there is a Y in S such that X and Y are not independent. Therefore, at least one variable in S must be correlated with X. However, if two or more variables interact in some way to affect a target, there could be little marginal effect for each variable, and the observed data could easily not satisfy the composition property.
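The "little or no marginal effect" situation is easy to reproduce with a toy XOR-style interaction (our own illustration, not data from any study): each variable alone leaves the target T at chance, while the pair determines it exactly.

```python
# Pure (XOR-style) interaction: X and Y each have no marginal effect on T,
# yet together they determine T exactly. All four (X, Y) cells equally likely.
rows = [(x, y, x ^ y) for x in (0, 1) for y in (0, 1)]

def p_t_given(cond):
    """Empirical P(T = 1) among rows satisfying cond(X, Y)."""
    sub = [t for (x, y, t) in rows if cond(x, y)]
    return sum(sub) / len(sub)

print(p_t_given(lambda x, y: x == 0))            # → 0.5  (X alone says nothing)
print(p_t_given(lambda x, y: x == 1))            # → 0.5
print(p_t_given(lambda x, y: (x, y) == (0, 1)))  # → 1.0  (the pair pins T down)
```

No regression on X or Y alone would detect any signal here, which is exactly why the composition property fails for such data.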
Furthermore, if interacting variables have strong marginal effects, the causal learning algorithms do not distinguish them as interactions, but only as individual causes. So, the standard methods do not learn that risk factors are interacting to influence a target, and do not even discover risk factors that are interacting with little or no marginal effect. An important task then is to learn such interactions from data. A method that does this could be a preliminary step before applying a causal learning algorithm. This paper concerns the development of a new method that does this in the case of discrete variables. We first provide some examples of situations where discrete variables interact. In this paper we learn single and interactive direct risk factors for MBC using a large-scale breast cancer dataset, which we curate, and a new algorithm which we develop. The algorithm, called the Markov Blanket and Interactive risk factor Learner (MBIL) algorithm, focuses on detecting both single and interactive risk factors that have a direct influence on a patient's outcome such as breast cancer metastasis. The algorithm does not merely find correlation, but rather identifies direct interactive risk factors. We first validate the effectiveness of MBIL using simulated data. We then use MBIL to learn the direct risk factors for 5 year breast cancer metastasis from our Lynn Sage Datasets (LSDS). Since the new methodology we develop utilizes Bayesian networks and learning interactive risk factors using information theory, we first briefly review these. Bayesian networks and the Markov blanket A Bayesian network (BN) consists of a directed acyclic graph (DAG), whose nodeset contains random variables, and the conditional probability distribution of every variable in the network given each set of values of its parents [14,15,16,17,18]. The edges in the DAG represent direct probabilistic dependencies.
The Markov blanket of a given target node T in a Bayesian network is the set of nodes M such that T is probabilistically independent of all other nodes in the network conditional on the nodes in M. If we know the value of the nodes in M, finding out the values of other nodes cannot change the conditional probability of T. In general, the Markov blanket of T consists of all parents of T, children of T, and parents of children of T [14]. If T is a leaf (a node with no children), then the Markov blanket consists only of the parents of T, which are the direct risk factors for T if T is a disease. Figure 1 shows a BN DAG model. Since T is a leaf in that model, the Markov blanket of T consists of its parents, namely nodes X11-X15. A BN model demonstrating the dependency and independency relationships among variables. The Markov Blanket of T consists of nodes X11, X12, X13, X14 and X15. These nodes are the direct risk factors and separate T from the influence of the noisy predictors X1-X10, X16, and X17. The MBIL algorithm Knowledge of the direct risk factors in the Markov blanket of a leaf target node T such as MBC is valuable in clinical research in two ways. First, we can target therapy towards these direct risk factors rather than risk factors which are merely correlated with MBC and might not even affect MBC. In the network in Fig. 1, X17 is an example of a variable that is correlated with T but has no effect on T, and therefore should not be targeted. Second, without knowing the BN DAG model, nodes X1-X10, X16, and X17 would all be learned as predictive risk factors because they are correlated with T through the nodes in the Markov blanket. However, if we can identify the Markov blanket and know the values of the nodes in it, we will have blocked the connections between T and the other nodes. So, these noisy predictors can be completely removed from a prediction model, which should improve prediction performance.
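The blocking behaviour of the Markov blanket can be demonstrated with a toy chain A → B → T (a simulation of our own, far simpler than the network of Fig. 1): A is marginally predictive of T, but once the blanket {B} is known, A adds essentially nothing.

```python
import random

random.seed(0)

# Toy chain A -> B -> T: B depends on A; T depends on B only.
def sample():
    a = random.random() < 0.5
    b = random.random() < (0.9 if a else 0.1)   # B tracks A
    t = random.random() < (0.8 if b else 0.2)   # T tracks B only
    return a, b, t

data = [sample() for _ in range(200_000)]

def p_t(cond):
    """Empirical P(T = 1 | cond(A, B))."""
    sub = [t for (a, b, t) in data if cond(a, b)]
    return sum(sub) / len(sub)

# Marginally, A is strongly predictive of T (about 0.74 vs. 0.26) ...
print(p_t(lambda a, b: a), p_t(lambda a, b: not a))
# ... but conditional on the blanket {B}, A is (approximately) uninformative:
print(p_t(lambda a, b: b and a), p_t(lambda a, b: b and not a))
```

Here A plays the role of a "noisy predictor" like X17: correlated with T, yet safely dropped once the blanket variable B is observed.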
Learning a BN [14, 19, 20] from data concerns learning both the parameters and the structure (called a DAG model). In the score-based structure-learning approach, a score is used to measure how well a DAG represents the data. The Bayesian score [21] is the probability of the data given the DAG. A popular variant of this score is the Bayesian Dirichlet equivalent uniform (BDeu) score, which uses a parameter alpha to represent a prior equivalent sample size [20]. The BN model selection problem is NP-hard [22]. So, heuristic search algorithms are used [14]. Information theory and learning interactive risk factors As noted earlier, risk factors may interact to affect disease. For example, in Fig. 1, X13 and X14 may have a joint effect on T, while each has no marginal effect on T. A joint predictor like this is often ignored and missed by standard BN structure learning [9, 23]. We previously designed a new information theory-based score to measure the strength of an interaction and applied it to interaction learning. Next we discuss information theory and that score. Information theory is the discipline that deals with the quantification and communication of information. If T is a discrete random variable with m alternatives, we define the entropy H(T) as follows: $$ H(T)=-\sum \limits_{i=1}^mP\left({t}_i\right){\log}_2P\left({t}_i\right) $$ Shannon [24] showed that if we repeat n trials of the experiment having outcome T, then the entropy H(T) is the limit as n → ∞ of the expected value of the number of bits needed to report the outcome of each trial of the experiment. Entropy is a measure of our uncertainty in the value of T since, as entropy increases, on the average it takes more bits to resolve our uncertainty. The conditional entropy of T given X is the expected value of the entropy of T conditional on X. It is defined as follows (where X has k alternatives): $$ H\left(T|X\right)=\sum \limits_{j=1}^kH\left(T|{x}_j\right)P\left({x}_j\right).
$$ By learning the value of X, we can reduce our uncertainty in T. The information gain (IG) of T relative to X is defined to be the expected reduction in the entropy of T conditional on X: $$ IG\left(T;X\right)=H(T)-H\left(T|X\right) $$ The notation IG(T;A), where A is a set, denotes the information gain based on the joint distribution of all variables in A. Although genetic epistasis serves as a good example of a discrete interaction and intuitively we understand what is meant by an interaction, a formal definition did not exist until Jiang et al. [23] and Zeng et al. [9] developed the interaction strength to both define an interaction and measure its strength. Given a target variable T and a set of variables M, we define the interaction strength (IS) of M relative to T as follows: $$ IS\left(T;M\right)=\underset{A\subset M}{\min}\frac{IG\left(T;M\right)- IG\left(T;M-A\right)- IG\left(T;A\right)}{IG\left(T;M\right)} $$ Since information gain (IG) is nonnegative, it is straightforward that IS(T;M) ≤ 1. If the variables in M are causing T with no marginal effects, then IS is 1. We would consider this a very strong interaction. When the IS is small, the increase in IG obtained by considering the variables in M together is small compared to considering them separately. We would consider this a weak interaction or no interaction at all. Jiang et al. [23] show that if the variables in M are independent causes of T, then $$ IS\left(T;M\right)\ge 0 $$ So, in situations we often investigate, the IS is between 0 and 1, and therefore satisfies the notion of a fuzzy set [25], where the greater the value of the IS the greater membership the set has in the fuzzy set of interactions. Situations in which the IS can be negative are discussed in [23]. Zeng et al. [9] developed Exhaustive_IGain, which does an exhaustive search to learn interactions from low-dimensional data.
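The definitions of H, IG, and IS above translate directly into code. The sketch below is a minimal empirical estimator of our own (names and structure are illustrative, not the paper's implementation); on a pure XOR interaction it returns an IS of 1, as the text predicts.

```python
from collections import Counter
from itertools import combinations
from math import log2

def entropy(vals):
    """Empirical Shannon entropy H (bits) of a sample list."""
    n = len(vals)
    return -sum((c / n) * log2(c / n) for c in Counter(vals).values())

def info_gain(t, cols):
    """IG(T;A) = H(T) - H(T|A), with the variables in `cols` taken jointly."""
    joint = list(zip(*cols))
    n = len(t)
    h_cond = 0.0
    for x in set(joint):
        sub = [ti for ti, ji in zip(t, joint) if ji == x]
        h_cond += (len(sub) / n) * entropy(sub)
    return entropy(t) - h_cond

def interaction_strength(t, cols):
    """IS(T;M): minimum over proper non-empty subsets A of M of
    (IG(T;M) - IG(T;M-A) - IG(T;A)) / IG(T;M)."""
    idx = list(range(len(cols)))
    total = info_gain(t, cols)
    ratios = []
    for r in range(1, len(cols)):
        for a in combinations(idx, r):
            rest = [cols[i] for i in idx if i not in a]
            part = [cols[i] for i in a]
            ratios.append((total - info_gain(t, rest) - info_gain(t, part)) / total)
    return min(ratios)

# Pure XOR interaction: no marginal effects at all, so IS should be 1.
X = [0, 0, 1, 1]
Y = [0, 1, 0, 1]
T = [x ^ y for x, y in zip(X, Y)]
print(interaction_strength(T, [X, Y]))  # → 1.0
```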
The algorithm determines whether every set of variables M is an interaction by checking whether the IS exceeds a threshold t. If this condition is satisfied, M is considered an interaction. They applied Exhaustive_IGain to the METABRIC dataset [1] to investigate how clinical variables might interact to affect breast cancer survival. They found that hormone_therapy and menopausal_status interact to affect 10 year breast cancer death, but neither hormone_therapy nor menopausal_status is highly correlated with 10 year breast cancer death alone. A consultation with oncologist Dr. Adam Brufsky revealed that hormone therapy is more effective in post-menopausal women [26]. Jiang et al. [23] developed MBS_IGain, which uses the same strategy as Exhaustive_IGain, but learns interactions from high-dimensional data using a heuristic search. They compared MBS_IGain to 7 other methods using 100 simulated datasets, each based on 5 interactions. MBS_IGain decidedly performed best. The interactive risk factors we learn using the IGain algorithms may not have a direct influence on the target. That is, they might interact to have an effect on a feature that directly affects the target, or, in the Bayesian network modeling the probabilistic relationships among the variables, they might even be a descendant of a node influencing the target. To identify the direct risk factors of a target we need an algorithm that learns the direct risk factors, including interactive risk factors. We develop such an algorithm next. MBIL algorithm Next we present the new Markov Blanket and Interactive risk factor Learner (MBIL) algorithm. Figure 2 shows the algorithm, which proceeds as follows: First, procedure determine_single_predictors learns the set PA1 of single-variable risk factors of a target T. The procedure does this by determining whether the BDeu score of the model in which the variable has an edge to T is greater than the BDeu score of the model in which T has no parents.
Then procedure determine_interactions uses Exhaustive_IGain to learn interactive risk factors of T. Next procedure transform_from_interactions collapses each interaction to a single variable. The set of these new single variables is PA2, and the set PA = PA1∪PA2 is the set of all candidate direct risk factors of T. Procedure find_parents then removes indirect risk factors of T from PA. It does this as follows. When i = 1, the procedure checks whether each one-parent model has a lower score than the model with no parents. If it does, the variable in the model is not a direct risk factor and is removed from PA. When i = 2 it checks, for each two-parent model, whether removing one of the variables in the model increases the score of the model. If so, T is independent of the removed variable given the other variable, and so the removed variable is not a direct risk factor and is removed from PA. Next, when i = 3, the algorithm checks, for each three-parent model, whether removing one of the variables in the model increases the score of the model. If so, T is independent of the removed variable given the other two variables, and so the removed variable is not a direct risk factor and is removed from PA. The procedure continues in this fashion until i equals the size of PA or some maximum number R of Markov blanket variables is reached. Finally, procedure transform_to_interactions transforms the variables representing interactions back to interactions. Note that the algorithm does not merely find correlation, but rather identifies direct interactive risk factors (features that directly influence the outcome of interest).
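The find_parents pruning loop described above can be sketched as follows. This is our own simplified illustration, with a toy score standing in for the BDeu score; it is not the authors' implementation.

```python
from itertools import combinations

def find_parents(candidates, score, max_blanket_size):
    """Backward pruning over candidate parent sets of increasing size i:
    if dropping v from an i-variable parent set does not lower the score,
    the target is independent of v given the rest, so v is not direct."""
    parents = list(candidates)
    i = 1
    while i <= min(len(parents), max_blanket_size):
        for subset in combinations(list(parents), i):
            if not all(u in parents for u in subset):
                continue  # skip sets containing already-pruned variables
            for v in subset:
                reduced = tuple(u for u in subset if u != v)
                if v in parents and score(reduced) >= score(subset):
                    parents.remove(v)
        i += 1
    return parents

# Toy illustration (our own, not the paper's): true direct risk factors
# are A and B; C is noise that the score penalizes.
TRUE = {"A", "B"}
def toy_score(parent_set):
    s = set(parent_set)
    return 10 * len(s & TRUE) - len(s - TRUE)

print(find_parents(["A", "B", "C"], toy_score, max_blanket_size=3))  # → ['A', 'B']
```

With i = 1 the empty model outscores the model with parent C alone, so C is pruned; A and B survive every conditional check and remain as the direct risk factors.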
The MBIL algorithm has the following parameters:

α: the value of α in the BDeu score.
R1: the variable R in procedure determine_interactions (Exhaustive_IGain), which is the maximum size of an interaction.
R2: the variable R in procedure prune_nodes, which is the assumed largest number of variables in the Markov blanket.
t: the threshold in Exhaustive_IGain, which is the minimum IS needed for a set to be an interaction.

Experiments with simulated datasets Before applying MBIL to learn risk factors for MBC, we validated that it is effective at this task using simulated datasets. Simulated dataset development We developed six BNs called Pure_Weak, Pure_Strong, Marginal_Weak_Weak, Marginal_Weak_Strong, Marginal_Strong_Weak, and Marginal_Strong_Strong, which were used to generate the simulated datasets. Next we discuss how each BN was developed. All BNs were generated using the BN DAG model in Fig. 3, each with its own values of the parameters. The variables A and B are involved in a two-direct risk factor interaction, and this is implemented using the hidden variable H1. The variables C, D, and E are involved in a three-direct risk factor interaction, and this is implemented using the hidden variable H2. The variables F and G are each stand-alone binary direct risk factors and are implemented using H3 and H4.
The effects of the hidden variables H1, H2, H3, and H4 are combined using the Noisy-OR model [14, 16]. Next we describe how each interaction was generated for each BN.

Fig. 3 The Bayesian network structure used to generate the simulated datasets

The interaction between A and B was based on the relationships in Fig. 4, which provides a pure interaction. Variables A and B are both trinary predictors of the binary target H1, which has values 0 and 1. The number next to each variable value shows the fraction of occurrence of that value in the population, and the entries in the table show the probability that H1 equals 1 given each combination of values of A and B. For example, $$ P\left(H1=1\ |A={a}_1,B={b}_2\right)=p. $$

Fig. 4 The relationships used to generate interaction (A,B) in Fig. 3. (The number next to each variable value shows the fraction of occurrence of that value in the population, and the entries in the table show the probability that H1 equals 1 given each combination of values of A and B)

Next we show that variable A has no marginal effect. We have $$ {\displaystyle \begin{array}{c}P\left(H1=1|A={a}_1\right)=0.0\times 0.25+p\times 0.5+0.0\times 0.25=0.5p\\ {}P\left(H1=1|A={a}_2\right)=p\times 0.25+0\times 0.5+p\times 0.25=0.5p\\ {}P\left(H1=1|A={a}_3\right)=0.0\times 0.25+p\times 0.5+0.0\times 0.25=0.5p\end{array}} $$ Similarly, variable B has no marginal effect. Urbanowicz et al. [27] created GAMETES, which is a software package for generating pure, strict epistatic models with random architectures. The software allows the user to specify the heritability and the minor allele frequency. We used GAMETES to generate the interaction between C, D, and E. For the BN called Pure_Weak, interaction (A,B) was generated with p = 0.3 in Fig.
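The "no marginal effect" calculation above can be verified numerically. The sketch below assumes the pure-interaction table from Fig. 4: P(H1 = 1 | A, B) is p when exactly one of A, B takes its middle value and 0 otherwise, with B's values occurring with frequencies 0.25, 0.5, 0.25; the p value is the Pure_Weak setting.

```python
p = 0.3  # the Pure_Weak setting
freq_b = [0.25, 0.5, 0.25]   # population fractions of b1, b2, b3
table = [                    # rows: a1, a2, a3; cols: b1, b2, b3
    [0.0, p, 0.0],
    [p, 0.0, p],
    [0.0, p, 0.0],
]

# marginal P(H1 = 1 | A = a_k), averaging over B
marginals = [sum(f * q for f, q in zip(freq_b, row)) for row in table]
print(marginals)  # every entry equals 0.5 * p, so A carries no marginal signal
```

Each conditional probability collapses to 0.5p regardless of the value of A, matching the three lines of the derivation above.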
3; interaction (C,D,E) was generated with 0.3 heritability and 0.3 minor allele frequency in GAMETES; and the P(H3 = 1|F = 1) and P(H4 = 1|G = 1) (called the strengths with which binary risk factors F and G influence T) were set to 0.3, while P(H3 = 1|F = 0) and P(H4 = 1|G = 0) were set to 0. For the BN called Pure_Strong, interaction (A,B) was generated with p = 0.8 in Fig. 3; interaction (C,D,E) was generated with 0.5 heritability and 0.4 minor allele frequency in GAMETES; and the strengths for F and G were set to 0.8. Interactions in the BN called Marginal_Weak_Weak were generated using the same relationships as in the BN called Pure_Weak, except the variables involved in interactions were allowed to have weak marginal effects. Interactions in the BN called Marginal_Weak_Strong were generated using the same relationships as in the BN called Pure_Weak, except the variables involved in interactions were allowed to have strong marginal effects. In the same way the interactions in the BNs called Marginal_Strong_Weak and Marginal_Strong_Strong were based on the interactions in the BN model called Pure_Strong. Using each of the 6 BNs, we generated 20 datasets with 2000 records, making a total of 120 datasets. We call the set of all these datasets the Learn_Datasets. Using each of the 6 BNs, we generated another 20 datasets with 2000 records and 20 datasets with 10,000 records, making a total of 240 datasets. We call the set of these 240 datasets the Test_Datasets.

Simulated dataset analysis

Using the simulated datasets, we compared MBIL to three standard BN learning algorithms, namely Fast Greedy Search (FGS), the PC algorithm (PC), and the CPC algorithm (CPC). We used our own implementation of MBIL, and the implementations of FGS, PC, and CPC in Tetrad [18]. The target was forced to be a leaf node in the models learned by FGS, PC, and CPC. We applied each method to the Learn_Datasets to learn direct risk factors for target T. For MBIL we set R1 = R2 = 5.
We then used t = 0.1 and t = 0.2 combined with the values 1, 120, 240, and 480 of α. FGS has two parameters, SMP and STP. We used the following combinations of these parameters: SMP = 1, STP = 1; SMP = 1, STP = 2; SMP = 2, STP = 1; SMP = 2, STP = 2. PC and CPC each have one parameter α; we used α = 0.01, α = 0.05, α = 0.1, α = 0.2, and α = 0.3. The Jaccard index for two sets V and W is defined as follows: $$ Jaccard\left(V,W\right)=\frac{\#\left(V\cap W\right)}{\#\left(V\cup W\right)} $$ The Jaccard index equals 1 if the two sets are identical and equals 0 if their intersection is empty. After applying each of the 4 methods with each of its parameter settings to all 120 of the Learn_Datasets, we computed the Jaccard index of the set of true direct risk factors V = {A,B,C,D,E,F,G} and the set of learned direct risk factors W for each application. Then for each parameter setting, we computed the average Jaccard index. For each of the 4 methods we chose the parameter setting which gave the highest average Jaccard index. The best parameter settings were as follows:

MBIL: t = 0.2, α = 480
FGS: SMP = 2, STP = 2
PC: α = 0.2
CPC: α = 0.2

Finally, we applied each of the 4 methods with its best parameter settings to the Test_Datasets, and for each method we computed the average Jaccard index separately for the datasets containing 2000 records and for the datasets containing 10,000 records.

Applying MBIL to learn direct risk factors for MBC

Dataset Curation

The Lynn Sage database (LSDB) contains information about patients who came to the Lynn Sage Comprehensive Breast Center at Northwestern Memorial Hospital for care. The Northwestern Medicine Enterprise Data Warehouse (NMEDW) is a joint initiative across the Northwestern University Feinberg School of Medicine and Northwestern Memorial HealthCare, which maintains comprehensive data obtained from electronic health records (EHR).
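The Jaccard index used above to compare the learned and true risk-factor sets is straightforward to compute. The `learned` set below is a made-up example output, not a result from the paper.

```python
def jaccard(v, w):
    """Jaccard index of two sets: #(V ∩ W) / #(V ∪ W); 1 if both are empty."""
    v, w = set(v), set(w)
    if not v and not w:
        return 1.0
    return len(v & w) / len(v | w)

truth = {"A", "B", "C", "D", "E", "F", "G"}   # true direct risk factors
learned = {"A", "B", "C", "F", "G", "X"}      # hypothetical learned set
print(jaccard(truth, learned))                # 5 shared / 8 in union = 0.625
```

Averaging this index over the 20 datasets per BN gives the per-setting scores used to pick the best parameters.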
Using the LSDB and the NMEDW, we curated a dataset called the Lynn Sage Dataset (LSDS). The LSDS comprises records on 6726 breast cancer patients, spanning 03/02/1990 to 07/28/2015. The dataset contains 32 patient features such as HER2 status and grade, outcome variables including distant metastasis, local recurrence, and overall survival, along with the number of days after initial consultation these events occurred, and treatment variables including breast/chest wall radiation, nodal field radiation, alkylating agents, anthracyclines, antimetabolites, anti-tubulin, neoadjuvant therapy, gonadotrophin releasing hormone agonists, targeted therapy, HER2 inhibitors, anastrozole, tamoxifen, and letrozole. Our goal in this study was to find the direct risk factors for 5 year breast cancer metastasis. For the purpose of this study, we eliminated all variables that were redundant or composites of other variables (e.g. stage), and all variables that have 1/3 or more missing values. So, the candidate risk factors included in the study are the ones shown in Table 1. We assigned the value yes to metastasis if the patient metastasized within 5 years of initial diagnosis, the value no to metastasis if it was known that the patient did not metastasize within 5 years, and the value NULL to metastasis if the patient discontinued follow-up within the first 5 years and had not metastasized. The value NULL was also assigned to all missing data fields in all variables. Missing data (NULL values) were filled in using the nearest neighbor imputation algorithm. Table 1 The candidate direct risk factors in the LSDS that are analyzed in the study presented here We used MBIL, the constraint-based Bayesian network learning algorithm Necessary Path Condition (NPC), and the score-based Bayesian network learning algorithm Greedy Equivalent Search (GES) with the Bayesian information criterion (BIC) score to learn the direct risk factors for 5 year breast cancer metastasis.
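The nearest neighbor imputation step mentioned above can be sketched minimally as follows. The paper does not specify the exact implementation used on the LSDS; this is a 1-nearest-neighbor sketch over discrete records using Hamming distance on the observed fields, with illustrative record values.

```python
def impute_1nn(records, null="NULL"):
    """Fill each NULL field with the value from the closest complete record."""
    complete = [r for r in records if null not in r.values()]
    filled = []
    for r in records:
        if null not in r.values():
            filled.append(dict(r))
            continue
        known = {k: v for k, v in r.items() if v != null}
        # Hamming distance over the fields that are observed in r
        nearest = min(complete,
                      key=lambda c: sum(c[k] != v for k, v in known.items()))
        filled.append({k: (nearest[k] if v == null else v)
                       for k, v in r.items()})
    return filled

records = [
    {"grade": "low", "her2": "neg"},
    {"grade": "high", "her2": "pos"},
    {"grade": "high", "her2": "NULL"},
]
print(impute_1nn(records)[2])  # the NULL her2 is filled from the closest record
```

In practice a k-nearest-neighbor variant with a vote over the k closest records is more robust than this 1-NN sketch.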
We used our own implementation of MBIL and the implementations of NPC and GES in the Hugin package (https://www.hugin.com/). The variable metastasis was forced to be a leaf node in the networks learned by NPC and GES.

Simulated data

Table 2 shows the average Jaccard indices for the Test_Datasets containing 2000 records. As would be expected, in general the performance of all methods increases as we increase the strength of the interactive effect and the strength of the marginal effects. MBIL substantially outperformed the other methods overall and for every dataset type. Oddly, CPC performs very poorly with the weak datasets, but performs the best of the other methods for the Marginal_Strong_Strong datasets. Table 2 Average Jaccard index results for 120 datasets containing 2000 records Table 3 shows the average Jaccard indices for the Test_Datasets containing 10,000 records. Again, MBIL substantially outperforms the other methods. In fact, its detection performance is close to perfect when the causal strength is strong, even when there are no marginal effects. This result indicates that, with a sufficient amount of data, MBIL can accurately detect all pure interactions with few false positives when the causal strength is strong. It is somewhat odd that, in the case of the weak interactions, the performance of MBIL degrades when we have marginal effects. This could be due to the parent of a variable, which is involved in an interaction, being detected as a stand-alone risk factor owing to the marginal effect of the variable. A visual inspection of some of the results indicated this. Table 3 Average Jaccard index results for 120 datasets containing 10,000 records Our results using simulated datasets indicate MBIL is more effective than current methods at learning direct risk factors. So, we next applied it to learn direct risk factors for 5 year MBC.
Application of MBIL to the LSDS breast cancer dataset

Table 4 shows the direct risk factors learned by MBIL, NPC, and GES from the LSDS. All three methods learned that grade and lymph_nodes_positive are direct risk factors. It is plausible that each of these variables would have a direct influence on metastasis. Only MBIL and NPC found that surgical_margins is a direct risk factor. It is also plausible that this variable would have a direct influence on metastasis. Only NPC found that invasive is a direct risk factor of metastasis. An inspection of the entire network learned by GES reveals that GES determined that invasive and surgical_margins are independent of metastasis given lymph_nodes_positive according to that network. These conditional independencies are also plausible. Table 4 Direct risk factors for 5 year breast cancer metastasis learned by MBIL, NPC, and GES. (IS: interaction strength as described earlier; NA: not applicable because IS requires at least two variables) Most notable is that MBIL learned that HER2 and ER interact to directly affect metastasis. This interaction is well-known [28]. Neither GES nor NPC learned that HER2 and ER are direct risk factors. An inspection of the entire networks learned by these algorithms reveals that each of them discovered that HER2 and ER are independent of metastasis given numerous variables. The effects of HER2 and ER on metastasis, when each is taken alone, are too small for them to be recognized as direct risk factors. The interactive strength of 0.1297 (Table 4) for HER2 and ER may not seem that high. However, most variable pairs have interactive strengths about equal to 0. This interaction of hormone receptors and growth factor receptors may be biologically due to physical protein interactions or to the availability of specific treatment against these two markers with either intervention improving survival.
In the latter case, this would represent an interaction that is present due to our therapeutic developments and would not be observed in naïve tumors. Note that our algorithm detects the direct risk factors (features that directly influence the outcome of interest). This means that, according to our data, all other variables are correlated with 5-year metastasis only through their association with our learned direct risk factors. For example, according to a chi-square test, size and 5-year metastasis are associated with a p-value less than 0.00001. However, the MBIL results indicate that this association is through grade, lymph_nodes_positive, surgical_margins, and/or HER2, ER. It could be that grade affects both size and 5-year metastasis, and size has no effect at all. Or size could indirectly affect 5-year metastasis. Regardless, based on our results, we can direct therapy at discovered direct risk factors rather than at variables only known to be correlated. As shown in Fig. 5, the direct risk factors learned by MBIL have a considerable probabilistic association with 5 year metastasis based on the data in the LSDS. The probabilities for values of grade, lymph_nodes_positive, and surgical_margins are substantial and in the order we would expect. The ordering for HER2, ER is in agreement with the ordering reported in [29], except we have HER2+,ER- and HER2-,ER- reversed relative to their order. However, we have 6726 records compared to their 1134 records, and other research indicates triple negative breast cancer (HER2-,ER-,PR-) has the worst prognosis [30], in agreement with our results. In addition, while the natural history of HER2+ breast cancer demonstrates greater aggressiveness, the availability of targeted therapy has significantly changed the outcome for the better for these patients [31, 32].
Fig. 5 The probabilities of metastasis given the values of the direct risk factors for 5 year breast cancer metastasis learned by MBIL

The probability of 5 year metastasis (based on the LSDS) given the most benign values of all the direct risk factors is 0.015, while the probability of metastasis given the most severe values is 0.468. These results are not shown in the table or figure.

We developed the LSDS and the MBIL algorithm to learn interactive direct risk factors for 5 year MBC. The results of experiments involving simulated data indicated MBIL can learn such direct risk factors significantly better than the standard Bayesian network algorithms FGS, PC, and CPC. We then applied MBIL and the standard algorithms GES and NPC to the real LSDS to learn the direct risk factors for 5 year MBC. The results for MBIL were more cogent than those for the other methods. In particular, MBIL learned that HER2 and ER interact to directly affect 5 year breast cancer metastasis, while the other methods did not. Their interaction is well-known. Knowledge of the direct risk factors for metastasis can be used in two ways. First, we can target interventions towards these risk factors. Since lymph_nodes_positive is a direct risk factor, there is no point in targeting therapy towards factors which only affect the primary tumor, since by this time the nodes are positive or not. However, we can target the micro-metastases in the lymphatic system. Since surgical_margins is known to be a direct risk factor, surgeons can re-explore the surgery site if there is a residual tumor. Further, if there are either positive margins or positive lymph nodes, the patient may be a candidate for adjuvant therapy for cryptic micrometastases even prior to their emergence. Conversely, we can spare the toxic effects for those patients less likely to harbor such hidden tumor cells. The interaction of HER2 and ER was found to be a direct risk factor, with the combination of both taking negative values having the highest risk.
We already have therapy targeting HER2 positive and ER positive. However, there is no targeted therapy in the case where both are negative. Currently, triple negative breast cancer (TNBC) is considered a candidate for aggressive but non-specific adjuvant therapies (chemotherapy and radiation therapy) even in the absence of evidence of metastasis. The findings herein support such an approach based on risk factors. Still, our results indicate that developing targeted therapies against these cryptic micrometastases would be a promising area of research. Secondly, if we eliminate the "noisy" predictors and only use the direct risk factors in a metastasis prediction model, we should improve prediction performance and avoid over-fitting. We plan to investigate this conjecture in future research. Most of our learned risk factors are well-known. However, as discussed near the end of the last section, it was not known that they are the direct risk factors. Furthermore, to our knowledge, we are the first to further verify risk factors systematically from a different aspect, that is, through big data learning. We demonstrated via these results the importance of data-oriented precision medicine; namely, a "discovery" that was made through years of cumulative observations and/or experiments can be identified effectively by running software using available passive data. Secondly, we used these known risk factors as gold standards to evaluate the effectiveness of our method. Note that our method is readily reusable, so it can be used on other cancer datasets once they become available. If our method can identify well-known risk factors, then we would be more confident it can be applied to other datasets to identify unknown risk factors or interactions. We applied MBIL to one dataset, and obtained meaningful results that are consistent with current knowledge. We need to develop additional EHR-based breast cancer datasets like the LSDS.
MBIL can be applied to each of them individually and together via a meta-analysis, enabling us to accurately determine direct risk factors and predict metastasis taking into account all patient information. This would set the stage for immediate tailoring of existing approaches taking into account metastatic probability, and over the longer term, allowing for the development of targeted, 'smart' therapies. We conclude that the MBIL algorithm is effective at identifying direct risk factors for patient outcomes, and that it has successfully identified likely direct risk factors for 5 year breast cancer metastasis. We have submitted our dataset LSDB-5YDM to datadryad.org (DOI 10.5061/dryad.64964m0).

MBC: Metastatic breast cancer
MBIL: Markov Blanket and Interactive risk factor Learner
FGS: Fast Greedy Search
PC: PC algorithm
CPC: CPC algorithm
NPC: Necessary Path Condition
GES: Greedy Equivalent Search
LSDS: Lynn Sage Dataset
BN: Bayesian network
BDeu: Bayesian Dirichlet equivalent uniform
IG: Information gain
IS: Interaction strength
LSDB: Lynn Sage database
NMEDW: Northwestern Medicine Enterprise Data Warehouse
EHR: Electronic health records
BIC: Bayesian information criterion
TNBC: Triple negative breast cancer

CDC (Centers for Disease Control and Prevention), Leading Causes of Death in Females, United States. https://www.cdc.gov/women/lcod/index.htm. Accessed Jan 2018. American Cancer Society. Cancer Facts and Figures. Atlanta: American Cancer Society, Inc; 2018. https://www.cancer.org/research/cancer-facts-statistics/all-cancer-facts-figures/cancer-facts-figures-2018.html. U.S. breast cancer statistics, breastcancer.org. https://www.breastcancer.org/symptoms/understand_bc/statistics. Accessed Jan 2018. The Breast Cancer Landscape. https://cdmrp.army.mil/bcrp/pdfs/Breast%20Cancer%20Landscape.pdf. Accessed Jan 2018. Gupta GP, Massague J. Cancer metastasis: building a framework. Cell. 2006;127(4):679–95. Statistics for Metastatic Breast Cancer. Metastatic Breast Cancer Network.
http://mbcn.org/education/category/statistics/. Accessed Jan 2018. Ziegler RG, Benichou J, Byrne C, et al. Proportion of breast cancer cases in the United States explained by well-established risk factors. J Natl Cancer Inst. 1995;87(22):1681–5. Curtis C, Shah SP, Chin SF, et al. The genomic and transcriptomic architecture of 2,000 breast tumours reveals novel subgroups. Nature. 2012;486:346–52. Zeng Z, Jiang X, Neapolitan RE. Discovering causal interactions using Bayesian network scoring and information gain. BMC Bioinformatics. 2016;17:21. Carrington JC, Ambros V. Role of microRNAs in plant and animal development. Science. 2003;301(5631):336–8. Lee S, Jiang X. Modeling miRNA-mRNA interactions that cause phenotypic abnormality in breast cancer patients. PLoS One. 2017;12(8). https://doi.org/10.1371/journal.pone.0182666. Yan L-X, Huang X-F, Shao Q, Huang M-Y, Deng L, Wu Q-L, et al. MicroRNA miR-21 overexpression in human breast cancer is associated with advanced clinical stage, lymph node metastasis and patient poor prognosis. RNA. 2008;14(11):2348–60. Zhu S, Wu H, Wu F, Nie D, Sheng S, Mo Y-Y. MicroRNA-21 targets tumor suppressor genes in invasion and metastasis. Cell Res. 2008;18(3):350–9. Neapolitan RE. Learning Bayesian Networks. Upper Saddle River: Prentice Hall; 2004. Pearl J. Probabilistic Reasoning in Intelligent Systems. Burlington: Morgan Kaufmann; 1988. Neapolitan RE. Probabilistic reasoning in expert systems. NY: Wiley; 1989. Kjaerulff UB, Madsen AL. Bayesian networks and influence diagrams. NY: Springer; 2010. Spirtes P, Glymour C, Scheines R. Causation, Prediction, and Search. New York: Springer-Verlag; 1993. 2nd ed. Boston: MIT Press; 2000. (https://bd2kccd.github.io/docs/tetrad/). Friedman N, Koller D. Being Bayesian about network structure: a Bayesian approach to structure discovery in Bayesian networks. Mach Learn. 2003;50:95–125. Heckerman D, Geiger D, Chickering D.
Learning Bayesian networks: the combination of knowledge and statistical data. Mach Learn. 1995;20(3):197–243. Cooper GF, Herskovits E. A Bayesian method for the induction of probabilistic networks from data. Mach Learn. 1992;9:309–47. Chickering M. Learning Bayesian networks is NP-complete. In: Fisher D, Lenz H, editors. Learning from data: lecture notes in statistics. New York: Springer Verlag; 1996. Jiang X, Jao J, Neapolitan RE. Learning predictive interactions using Information Gain and Bayesian network scoring. PLOS ONE. 2015. https://doi.org/10.1371/journal.pone.0143247. Shannon CE. A mathematical theory of communication. Bell Syst Tech J. 1948;27(3):379–423. Zadeh LA. Fuzzy sets. Inf Control. 1965;8:338–53. Fabian CJ. The what, why and how of aromatase inhibitors: hormonal agents for treatment and prevention of breast cancer. Int J Clin Pract. 2007;61(12):2051–63. https://onlinelibrary.wiley.com/doi/full/10.1111/j.1742-1241.2007.01587.x. Urbanowicz RJ, Kiralis J, Sinnott-Armstrong NA, et al. GAMETES: a fast, direct algorithm for generating pure, strict, epistatic models with random architectures. BioData Min. 2012;5:16. Arpino G, Wiechmann L, Osborne CK, Schiff R. Crosstalk between the estrogen receptor and the HER tyrosine kinase receptor family: molecular mechanism and clinical implications for endocrine therapy resistance. Endocr Rev. 2008;29(2):217–33. Onitilo AA, Engel JM, Greenlee RT, Mukesh BN. Breast cancer subtypes based on ER/PR and Her2 expression: comparison of clinicopathologic features and survival. Clin Med Res. 2009;7(1–2):4–13. Li X, Yang J, Peng L, Sahin AA, et al. Triple-negative breast cancer has worse overall survival and cause-specific survival than non-triple-negative breast cancer. Breast Cancer Res Treat. 2017;161(2):279–87. Parise CA, Caggiano V. Risk of mortality of node-negative, ER/PR/HER2 breast cancer subtypes in T1, T2, and T3 tumors. Breast Cancer Res Treat. 2017;165(3):743–50.
Pichilingue-Febres AF, Arias-Linares MA, Araujo-Castillo RV. Comments on "risk of mortality of node-negative, ER/PR/HER2 breast cancer subtypes in T1, T2, and T3 tumors" by Parise CA and Caggiano V, breast Cancer res treat, 2017. Breast Cancer Res Treat. 2018;168(2):577–8. Research reported in this paper was supported by grant R01LM011663 awarded by the U.S. National Library of Medicine of the National Institutes of Health (to XJ), by the U.S. Department of Defense through the Breast Cancer Research Program under Award No. W81XWH-19-1-0495 (to XJ), and by a VA Merit Award (to AW). Other than supplying funds, the funding agencies played no role in the research. Department of Biomedical Informatics, University of Pittsburgh School of Medicine, 5607 Baum Blvd, Pittsburgh, PA, 15217, USA Xia Jiang, Darshan Shetty & Kahmil Shajihan Department of Pathology, University of Pittsburgh and Pittsburgh VA Health System, Pittsburgh, PA, USA Alan Wells UPMC Hillman Cancer Center, Pittsburgh, PA, USA Alan Wells & Adam Brufsky Division of Hematology/Oncology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA Adam Brufsky Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL, USA Richard E. Neapolitan Xia Jiang Darshan Shetty Kahmil Shajihan XJ and RN originated the study, developed the MBIL algorithm and wrote the first draft of the manuscript. XJ and DS implemented the MBIL algorithm. DS's contribution was completely done while he worked as a paid student helper in Dr. Jiang's AI lab at UPitt. RN performed all data analyses concerning the application of the MBIL, GES, and NPC to the LSDS, and conducted all data analyses concerning the application of the MBIL algorithm to the simulated datasets. KS conducted all data analyses concerning the application of the PC, CPC, and FGS algorithms to the simulated datasets. KS's contribution was completely done while he worked as a DBMI Summer Intern 2017 at UPitt. 
AW and AB reviewed and analyzed the findings. All authors contributed to the preparation and revision of the manuscript. The author(s) read and approved the final manuscript. Correspondence to Xia Jiang. The dataset is a result of the PROTOCOL TITLE: A New Generation Clinical Decision Support System, which was approved by Northwestern University IRB #: STU00200923-MOD0006. The need for patient consent was waived by the ethics committee because the data consists only of de-identified data mined from EHR databases. Jiang, X., Wells, A., Brufsky, A. et al. Leveraging Bayesian networks and information theory to learn risk factors for breast cancer metastasis. BMC Bioinformatics 21, 298 (2020). https://doi.org/10.1186/s12859-020-03638-8 Machine Learning and Artificial Intelligence in Bioinformatics
In the MPS (Moving Particle Semi-implicit) method, one of the particle methods, incompressible fluids are solved with a "constant density" condition. On the other hand, "velocity divergence zero" is used in grid methods such as the Finite Volume Method. This difference is a typical characteristic of the MPS method, which is explained in this article. When Koshizuka started to study particle methods in 1993, SPH (Smoothed Particle Hydrodynamics) had already been proposed as a particle method. SPH solves compressible fluids using an explicit algorithm, though most engineering problems involve incompressible fluids, which have been solved by grid methods using a semi-implicit algorithm. Koshizuka sought a particle method that can solve incompressible fluids using a semi-implicit algorithm and finally reached the MPS method. But there is a difference from the existing grid methods, and this difference is the essential idea of the MPS method. The three fundamental physical conservation laws are those of mass, momentum, and energy. The conservation law of energy can be derived from the other two conservation laws of mass and momentum, so the solution is obtained by solving those two conservation laws. In fluid dynamics, the conservation law of mass, called the "continuity equation", and the conservation law of momentum, called the "Navier-Stokes equations", are solved simultaneously. This way is taken in the particle methods as well. Incompressibility is related to the conservation law of mass, the "continuity equation". The continuity equation of compressible fluids is $$ \frac{D \rho}{D t}+\rho \nabla \cdot \boldsymbol{u}=0 \qquad (1) $$ In incompressible fluids, the first term of the left-hand side of Equation (1) is zero, and thus the second term is also zero: $$ \frac{D \rho}{D t}=0 \qquad (2) $$ $$ \nabla \cdot \boldsymbol{u}=0 \qquad (3) $$ In addition, by integrating Equation (2), the following equation representing the density being constant in time is obtained: $$ \rho=\it{const} \qquad (4)
$$ ("\(\it{const}\)" in the right-hand side of Equation (4) means constant.) Equation (4) means "density being constant", which requires that the density be calculated in each time step and be kept constant. This is the constraint condition used in the MPS method (Koshizuka and Oka, 1996). On the other hand, Equation (3), "velocity divergence being zero", is used in the grid method. This condition can be employed in particle methods in place of Equation (4). Which is more essential as the incompressibility condition? Are the two equivalent to each other (Figure 1)?

Figure 1 Which incompressibility condition is essential?

First, please consider the difference between Equations (4) and (3). The basic law is the continuity equation. This requires that the mass be constant, which means that Equation (4) is more essential. By differentiating Equation (4), we obtain Equation (2), representing that the density change in time is zero, and then Equation (3), velocity divergence being zero, is deduced by using Equation (1). As an analogy of money savings, the deposit can be checked by confirming the amount of money, which is the way using Equation (4). Checking the balance between income and payment is like using Equation (3). If the calculation is accurate, the two ways are equivalent. However, if there are possibilities of errors, checking the amount of money is direct and essential.

Figure 2 Ball motion on a curved surface

The next analogy is a rolling ball on a curved surface as shown in Figure 2(a). The direct constraint condition is that the ball should be on the curved surface, which corresponds to Equation (4) in fluid dynamics. Differentiating the constraint condition in time, we obtain the condition that the ball velocity \(\boldsymbol{v}\) should be parallel to the curved surface (Figure 2(b)). This corresponds to Equation (3).
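To make the "density being constant" condition concrete, the sketch below evaluates the MPS particle number density with the standard MPS weight function w(r) = r_e/r − 1 (for r < r_e) on a small lattice of particles; the spacing and influence radius are illustrative values, not from this column.

```python
import math

def weight(r, r_e):
    # standard MPS kernel: r_e/r - 1 inside the influence radius, 0 outside
    return r_e / r - 1.0 if 0.0 < r < r_e else 0.0

def number_density(i, positions, r_e):
    """Particle number density n_i = sum of weights over neighbors of i."""
    xi, yi = positions[i]
    return sum(weight(math.hypot(x - xi, y - yi), r_e)
               for j, (x, y) in enumerate(positions) if j != i)

d = 0.1                       # particle spacing (illustrative)
r_e = 2.1 * d                 # typical influence radius
positions = [(ix * d, iy * d) for ix in range(-3, 4) for iy in range(-3, 4)]
center = positions.index((0.0, 0.0))
n0 = number_density(center, positions, r_e)   # rest density of the lattice
# In MPS, incompressibility is enforced by driving the pressure Poisson
# equation with the deviation (n_i - n0)/n0 rather than with the velocity
# divergence, which realizes Equation (4) directly.
```

Because the number density is proportional to the fluid density, keeping n_i equal to its undisturbed value n0 is exactly the "density being constant" constraint of Equation (4).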
When the motion of the ball during a small period \(\Delta t\) is considered, the movement is \(\boldsymbol{v} \times \Delta t\) and the ball may end up slightly inside the wall as shown in Figure 2(c). Instantaneously, satisfying the constraint condition and satisfying its time derivative are equivalent, but in the discretized calculation they are no longer equivalent because of errors. From this consideration, we can say that Figure 2(a) is more essential than Figure 2(b). In the particle method, imposing Equation (3) as the incompressibility condition makes the fluid density gradually change due to the errors. The change accumulates as time proceeds. Therefore, Equation (4) is the essential incompressibility condition, and Equation (3) is derivative. In SPH, there is a variant method employing "density being constant" as the incompressibility condition (Shao and Lo, 2003). This variant has imported the concept of the MPS method and is called ISPH (Incompressible SPH). This is written in the paper of Shao and Lo (2003). Next, which is superior, "density being constant" or "velocity divergence being zero"? The author thinks that "density being constant" is superior to "velocity divergence being zero" from the viewpoint of the robustness of the particle methods. In detail, there are advantages and disadvantages. Advantages of "density being constant" are that (1) the density error is not accumulated, (2) the particle distribution is kept uniform, and (3) the free-surface boundary condition is simple. A disadvantage is that the pressure distribution is likely to be oscillatory. The advantages and disadvantage of "velocity divergence being zero" are the opposite. Then, an idea to mix Equations (2), (3), and (4) has been proposed (Kondo and Koshizuka, 2008; Tanaka and Masunaga, 2010).
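The ball analogy above can be demonstrated with a toy computation: enforcing only the differentiated constraint (velocity tangent to a circle) lets the discretized trajectory drift off the circle, while re-projecting onto the circle (the direct constraint) keeps it there. The radius, step size, and step count are arbitrary.

```python
import math

def step_tangent(x, y, dt):
    # move with unit speed tangent to the circle x^2 + y^2 = 1
    r = math.hypot(x, y)
    return x - dt * y / r, y + dt * x / r

# differentiated constraint only: the radius grows each step
x, y = 1.0, 0.0
for _ in range(1000):
    x, y = step_tangent(x, y, 0.01)
drift = math.hypot(x, y) - 1.0          # accumulated constraint violation

# direct constraint: project back onto the circle after each step
xp, yp = 1.0, 0.0
for _ in range(1000):
    xp, yp = step_tangent(xp, yp, 0.01)
    r = math.hypot(xp, yp)
    xp, yp = xp / r, yp / r
drift_proj = math.hypot(xp, yp) - 1.0

print(drift, drift_proj)                # drift > 0, drift_proj is essentially 0
```

Each tangent step changes the squared radius by dt², so the violation grows monotonically without the projection, mirroring the density drift that Equation (3) alone permits in a particle method.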
While keeping the advantages of "density being constant", the disadvantage of pressure oscillation is mitigated. S. Koshizuka and Y. Oka, "Moving-Particle Semi-implicit Method for Fragmentation of Incompressible Fluid," Nucl. Sci. Eng. 123, 421-434 (1996). S. Shao and E. Y. M. Lo, "Incompressible SPH Method for Simulating Newtonian and non-Newtonian Flows with a Free Surface," Adv. Water Resources 26, 787-800 (2003). M. Kondo and S. Koshizuka, "Suppressing the Numerical Oscillation in Moving Particle Semi-implicit Method," Trans. JSCES, Paper No. 20080015 (2008). (in Japanese) M. Tanaka and T. Masunaga, "Stabilization and Smoothing of Pressure in MPS Method by Quasi-Compressibility," J. Comput. Phys. 229, 4279-4290 (2010). Prof. Seiichi Koshizuka Professor, Department of Systems Innovation, Graduate School of Engineering, The University of Tokyo Founder and Outside Director, Prometech Software Inventor of the particle method (MPS method) <Author Profile> Professor, Department of Quantum Engineering and Systems Science, Graduate School of Engineering, The University of Tokyo External Board Member, Prometech Software, Inc.
Multivariable calculus

Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to calculus with functions of several variables: the differentiation and integration of functions involving multiple variables (multivariate), rather than just one.[1]

Multivariable calculus may be thought of as an elementary part of advanced calculus. For advanced calculus, see calculus on Euclidean space. The special case of calculus in three dimensional space is often called vector calculus.

Typical operations

Limits and continuity

A study of limits and continuity in multivariable calculus yields many counterintuitive results not demonstrated by single-variable functions.[1]: 19–22  For example, there are scalar functions of two variables with points in their domain which give different limits when approached along different paths. For instance, the function $f(x,y)={\frac {x^{2}y}{x^{4}+y^{2}}}$ approaches zero whenever the point $(0,0)$ is approached along lines through the origin ($y=kx$). However, when the origin is approached along a parabola $y=\pm x^{2}$, the function value has a limit of $\pm 1/2$. Since taking different paths toward the same point yields different limit values, a general limit does not exist there. Continuity in each argument not being sufficient for multivariate continuity can also be seen from the following example.[1]: 17–19  In particular, for a real-valued function with two real-valued parameters, $f(x,y)$, continuity of $f$ in $x$ for fixed $y$ and continuity of $f$ in $y$ for fixed $x$ does not imply continuity of $f$. Consider $f(x,y)={\begin{cases}{\frac {y}{x}}-y&{\text{if}}\quad 0\leq y<x\leq 1\\{\frac {x}{y}}-x&{\text{if}}\quad 0\leq x<y\leq 1\\1-x&{\text{if}}\quad 0<x=y\\0&{\text{everywhere else}}.\end{cases}}$ It is easy to verify that this function is zero by definition on the boundary and outside of the quadrangle $(0,1)\times (0,1)$. 
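The path dependence of the first example, $f(x,y)=x^{2}y/(x^{4}+y^{2})$, is easy to check numerically (an illustrative sketch; the approach parameter and slopes are arbitrary choices):

```python
# f(x, y) = x^2 y / (x^4 + y^2): limit 0 along every line y = k*x through
# the origin, but limit 1/2 along the parabola y = x^2, so the general
# two-variable limit at (0, 0) does not exist.
def f(x, y):
    return x**2 * y / (x**4 + y**2)

t = 1e-4  # small approach parameter
along_lines = [f(t, k * t) for k in (-2.0, 0.5, 1.0, 3.0)]  # all near 0
along_parabola = f(t, t**2)                                 # exactly 1/2
```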
Furthermore, the functions defined for constant $a$ with $0\leq a\leq 1$ by $g_{a}(x)=f(x,a)\quad $ and $\quad h_{a}(y)=f(a,y)\quad $ are continuous. Specifically, $g_{0}(x)=f(x,0)=0$ and $h_{0}(y)=f(0,y)=0$ for all $x$ and $y$. However, the sequence $f\left({\tfrac {1}{n}},{\tfrac {1}{n}}\right)$ (for natural $n$) converges to $\lim _{n\to \infty }f\left({\tfrac {1}{n}},{\tfrac {1}{n}}\right)=1$, rendering the function discontinuous at $(0,0)$. Approaching the origin not along parallels to the $x$- and $y$-axis reveals this discontinuity.

Continuity of function composition

If $f(x,y)$ is continuous at $(a,b),$ and $g$ is a single variable function continuous at $f(a,b),$ then the composite function $h=g\circ f$ defined by $h(x,y)=g(f(x,y))$ is continuous at $(a,b).$ Examples: $\exp(x-y)$ and $\ln(1+xy-4x+10y).$

Properties of continuous functions

If $f(x,y)$ and $g(x,y)$ are both continuous at $(a,b),$ then:

(i) $f(x,y)\pm g(x,y)$ are continuous at $(a,b).$

(ii) $cf(x,y)$ is continuous at $(a,b)$ for any constant $c$.

(iii) $f(x,y)\cdot g(x,y)$ is continuous at $(a,b).$

(iv) ${\frac {f(x,y)}{g(x,y)}}$ is continuous at $(a,b)$ if $g(a,b)\neq 0.$

(v) $|f(x,y)|$ is continuous at $(a,b).$

Partial differentiation

Main article: Partial derivative

The partial derivative generalizes the notion of the derivative to higher dimensions. A partial derivative of a multivariable function is a derivative with respect to one variable with all other variables held constant.[1]: 26ff  Partial derivatives may be combined in interesting ways to create more complicated expressions of the derivative. In vector calculus, the del operator ($\nabla $) is used to define the concepts of gradient, divergence, and curl in terms of partial derivatives. A matrix of partial derivatives, the Jacobian matrix, may be used to represent the derivative of a function between two spaces of arbitrary dimension. 
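As a small numerical illustration of the Jacobian matrix (an illustrative sketch; the test function and step size are arbitrary choices), partial derivatives can be approximated by central finite differences:

```python
import math

def jacobian(f, x, h=1e-5):
    """Approximate the Jacobian of f: R^n -> R^m at x by central differences."""
    fx = f(x)
    J = [[0.0] * len(x) for _ in range(len(fx))]
    for j in range(len(x)):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(len(fx)):
            # (d f_i / d x_j) approximated by a symmetric difference quotient
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

# Example map f(x, y) = (x^2 y, 5x + sin y); its exact Jacobian at (1, 0)
# is [[2xy, x^2], [5, cos y]] = [[0, 1], [5, 1]].
f = lambda v: [v[0] ** 2 * v[1], 5 * v[0] + math.sin(v[1])]
J = jacobian(f, [1.0, 0.0])
```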
The derivative can thus be understood as a linear transformation which directly varies from point to point in the domain of the function. Differential equations containing partial derivatives are called partial differential equations or PDEs. These equations are generally more difficult to solve than ordinary differential equations, which contain derivatives with respect to only one variable.[1]: 654ff

Multiple integration

Main article: Multiple integral

The multiple integral expands the concept of the integral to functions of any number of variables. Double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. Fubini's theorem guarantees that a multiple integral may be evaluated as a repeated integral or iterated integral as long as the integrand is continuous throughout the domain of integration.[1]: 367ff  The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves.

Fundamental theorem of calculus in multiple dimensions

In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral. The link between the derivative and the integral in multivariable calculus is embodied by the integral theorems of vector calculus:[1]: 543ff

• Gradient theorem
• Stokes' theorem
• Divergence theorem
• Green's theorem.

In a more advanced study of multivariable calculus, it is seen that these four theorems are specific incarnations of a more general theorem, the generalized Stokes' theorem, which applies to the integration of differential forms over manifolds.[2]

Applications and uses

Techniques of multivariable calculus are used to study many objects of interest in the material world. In particular:

• Curves ($f:\mathbb {R} \to \mathbb {R} ^{n}$ for $n>1$): lengths of curves, line integrals, and curvature.
• Surfaces ($f:\mathbb {R} ^{2}\to \mathbb {R} ^{n}$ for $n>2$): areas of surfaces, surface integrals, flux through surfaces, and curvature.
• Scalar fields ($f:\mathbb {R} ^{n}\to \mathbb {R} $): maxima and minima, Lagrange multipliers, directional derivatives, level sets.
• Vector fields ($f:\mathbb {R} ^{m}\to \mathbb {R} ^{n}$): any of the operations of vector calculus including gradient, divergence, and curl.

Multivariable calculus can be applied to analyze deterministic systems that have multiple degrees of freedom. Functions with independent variables corresponding to each of the degrees of freedom are often used to model these systems, and multivariable calculus provides tools for characterizing the system dynamics. Multivariate calculus is used in the optimal control of continuous time dynamic systems. It is used in regression analysis to derive formulas for estimating relationships among various sets of empirical data. Multivariable calculus is used in many fields of natural and social science and engineering to model and study high-dimensional systems that exhibit deterministic behavior. In economics, for example, consumer choice over a variety of goods, and producer choice over various inputs to use and outputs to produce, are modeled with multivariate calculus. Non-deterministic, or stochastic systems can be studied using a different kind of mathematics, such as stochastic calculus.

See also

• List of multivariable calculus topics
• Multivariate statistics

References

1. Richard Courant; Fritz John (14 December 1999). Introduction to Calculus and Analysis Volume II/2. Springer Science & Business Media. ISBN 978-3-540-66570-0.
2. Spivak, Michael (1965). Calculus on Manifolds. New York: W. A. Benjamin, Inc. ISBN 9780805390216.

External links
• UC Berkeley video lectures on Multivariable Calculus, Fall 2009, Professor Edward Frenkel
• MIT video lectures on Multivariable Calculus, Fall 2007
• Multivariable Calculus: A free online textbook by George Cain and James Herod
• Multivariable Calculus Online: A free online textbook by Jeff Knisley
• Multivariable Calculus – A Very Quick Review, Prof. Blair Perot, University of Massachusetts Amherst
• Multivariable Calculus, Online text by Dr. Jerry Shurman
\begin{document} \title{Poisson and quantum geometry of acyclic cluster algebras} \author{Sebastian Zwicknagl} \maketitle \begin{abstract} We prove that certain acyclic cluster algebras over the complex numbers are the coordinate rings of holomorphic symplectic manifolds. We also show that the corresponding quantum cluster algebras have no non-trivial prime ideals. This allows us to give evidence for a generalization of the conjectured variant of the orbit method for quantized coordinate rings and their classical limits. \end{abstract} \tableofcontents \section{Introduction} In the present paper we investigate the Poisson geometry associated with cluster algebras over the complex numbers defined by acyclic quivers, and relate them to the ideal theory of the corresponding quantum cluster algebras. Our main motivation is the following conjectural analogue of Kirillov's orbit method for quantized coordinate rings, which has been an open problem for roughly twenty years (see e.g.~\cite{brown-goodearl} or \cite[Section 4.3]{Yak-spec} and \cite{Soi} for the case of compact quantum groups). Let $G$ be a semisimple complex algebraic group, let $\mathbb{C}[G]$ be its coordinate ring, and let $\mathbb{C}_q[G]$ denote the corresponding quantized coordinate ring. It has been conjectured that there exists a homeomorphism between the space of primitive ideals in $\mathbb{C}_q[G]$ and the space of symplectic leaves of the standard Poisson structure on $G$. For an excellent introduction to this conjecture we refer the reader to Goodearl's paper \cite{Goo1}. The conjecture appears extremely difficult to prove and is only known to be true in the cases $G=SL_2, SL_3$. The coordinate rings $\mathbb{C}[G]$ are known to have an upper cluster algebra structure (\cite{BFZ}), while the quantized coordinate rings are conjectured to have a quantum (upper) cluster algebra structure (\cite[Conjecture 10.10]{bz-qclust}). 
Indeed, it follows from recent results of Gei\ss, Leclerc and Schr\"oer (\cite[Section 12.4]{GLSq}) that $\mathbb{C}_q[SL_n]$ has a quantum cluster algebra structure. Cluster algebras are nowadays very well-established, hence we do not recall any of the definitions here, and refer the reader to the literature, resp.~our Section \ref{se:Cluster Algebras}. Most importantly for our purposes, a cluster algebra over $\mathbb{C}$ is defined by a combinatorial datum in a field of fractions $\mathbb{C}(x_1,\ldots, x_n)$. We will denote this {\it initial seed} by $({\bf x}, B)$ where ${\bf x}=(x_1,\ldots,x_n)$ and $B$ is an integer $m\times n$-matrix with $m\le n$ such that its principal $m\times m$ submatrix is skew-symmetrizable. The cluster variables $x_{m+1},\ldots, x_n$ are the frozen variables which we will call coefficients. A quantum cluster algebra is given by a {\it quantum seed} $({\bf x}, B,\Lambda)$, where $B$ is as above and $\Lambda$ is a skew-symmetric $n\times n$-matrix such that $(B,\Lambda)$ is a {\it compatible pair} (see Section \ref{se:Cluster Algebras} for details). The set ${\bf x}=(x_1,\ldots, x_n)$ now lives in the skew-field of fractions $\mathbb{C}_\Lambda(x_1,\ldots,x_n)$ defined by $\Lambda$. A compatible pair also defines a compatible Poisson structure in the sense of \cite{GSV} on the cluster algebra given by $({\bf x},B)$. It is well-known that the conjectured quantum cluster algebra structures on the rings $\mathbb{C}_q[G]$ and the standard Poisson structure on $\mathbb{C}[G]$ arise from such a compatible pair. Therefore, we would like to suggest the following conjecture. \begin{conjecture} \label{conj: homeo cluster} Let $(B,\Lambda)$ be a compatible pair and let $\mathfrak{A}$ and $\mathfrak{A}_q$ be a cluster, resp.~quantum cluster algebra defined by $({\bf x},B)$, resp.~$({\bf x},B,\Lambda)$. 
Suppose further that $\mathfrak{A}$ and $\mathfrak{A}_q$ are Noetherian and that $\mathfrak{A}$ is the coordinate ring of the affine variety $X$. Then, there exists a homeomorphism between the space of primitive ideals of $\mathfrak{A}_q$ and the symplectic leaves on $X$ defined by $\Lambda$. \end{conjecture} In light of Conjecture \ref{conj: homeo cluster}, we may think of quantum affine space and quantum tori as cluster algebras where all cluster variables are frozen. In this case the corresponding homeomorphism is well known and easy to construct. The other extreme case are cluster algebras without coefficients, and here the class that is usually easiest to study is that of acyclic cluster algebras. For example, it is known that such a cluster algebra is always Noetherian and the coordinate ring of an affine variety (see \cite{BFZ} and \cite{bz-qclust} for the classical and quantum versions). It is our main objective to give evidence for Conjecture \ref{conj: homeo cluster} by proving it in this very specific case. It is an immediate consequence of the following two main results. \begin{theorem} \label{th:Classical-intro} Let $\mathfrak{A}$ be a cluster algebra with initial seed $({\bf x}, B)$ defined by an acyclic quiver where $B$ is invertible satisfying \eqref{eq:poisson gen}, and suppose that it is the coordinate ring of an affine variety $X$ and that $(B,\Lambda)$ is a compatible pair. Then $X$ has the structure of a symplectic manifold, whose symplectic form is the corresponding Poisson bivector. \end{theorem} \begin{theorem} \label{th:quantum-intro} Let $\mathfrak{A}_q$ be a quantum cluster algebra with quantum seed $({\bf x}, B,\Lambda)$ satisfying the assumptions of Theorem \ref{th:Classical-intro}. Then $\{0\}$ is the only proper two-sided prime ideal in $\mathfrak{A}_q$. \end{theorem} Our approach is similar to that of \cite{ZW tpc}; however, all the proofs are self-contained and much easier, as our set-up is less general. 
The main idea is to study the intersection of ideals with the polynomial ring generated by a given cluster--in this case the acyclic seed. We are able to derive rather strong conditions that Poisson prime ideals--resp.~two-sided prime ideals in the quantum case--must satisfy, and we show that no non-trivial ideals satisfying them exist. A straightforward argument then allows us to conclude that the variety $X$ is a symplectic manifold. We should also remark that we do not know whether any acyclic cluster algebras exist that do not satisfy the assumptions made in Theorem \ref{th:Classical-intro}. The paper is organized as follows. We first briefly recall the definitions of cluster algebras and compatible Poisson structures, compatible pairs and quantum cluster algebras in Section \ref{se:Cluster Algebras}. Thereafter, we continue with some technical key propositions (Section \ref{se:key propositions}) and discuss in Section \ref{se:Poisson and symp geom} the symplectic geometry of acyclic cluster algebras. The proof of Conjecture \ref{conj: homeo cluster} in our specific case is completed in Section \ref{se:ideals in qca} by proving Theorem \ref{th:quantum-intro}. \section{Cluster Algebras} \label{se:Cluster Algebras} \subsection{Cluster algebras} \label{se:Cluster Algebras-def} In this section, we will review the definitions and some basic results on cluster algebras, or more precisely, on cluster algebras of geometric type over the field of complex numbers $\mathbb{C}$. Denote by $\mathfrak{F}=\mathbb{C}(x_1,\ldots, x_n)$ the field of fractions in $n$ indeterminates. Recall that an $m \times m$-integer matrix $B'$ is called skew-symmetrizable if there exists an $m \times m$-diagonal matrix $D$ with positive integer entries such that $B' \cdot D$ is skew-symmetric. Now, let $B$ be an $m\times n$-integer matrix such that its principal $m\times m$-submatrix is skew-symmetrizable. 
We call the tuple $(x_1,\ldots,x_n, B)$ the {\it initial seed} of the cluster algebra and $(x_1,\ldots, x_m)$ a cluster, while ${\bf x}=(x_1,\ldots, x_n)$ is called an extended cluster. The cluster variables $x_{m+1},\ldots,x_n$ are called {\it coefficients}. We will now construct more clusters, $(y_1,\ldots, y_m)$, and extended clusters ${\bf y}=(y_1,\ldots, y_n)$, which are transcendence bases of $\mathfrak{F}$, and the corresponding seeds $({\bf y}, \tilde B)$ in the following way. Define for each real number $r$ the numbers $r^+={\rm max}(r,0)$ and $r^-={\rm min}(r,0)$. Given a matrix $B$ as above, we define for each $1\le k\le m$ the {\it exchange polynomial} \begin{equation} P_k = \prod_{j=1}^n x_j^{b_{kj}^+}+ \prod_{j=1}^n x_j^{-b_{kj}^-}\ . \end{equation} We can now define the new cluster variable $x_k'\in\mathfrak{F}$ via the equation \begin{equation} \label{eq:exchange} x_kx_k'=P_k\ . \end{equation} This allows us to refer to the matrix $B$ as the {\it exchange matrix} of the cluster $(x_1,\ldots,x_n)$, and to the relations defined by Equation \ref{eq:exchange} for $k=1,\ldots,m$ as {\it exchange relations}. We obtain that $(x_1,\ldots, x_{k-1},x_k',x_{k+1},\ldots, x_n)$ is a transcendence basis of $\mathfrak{F}$. We next construct the new exchange matrix $B_k=B'=(b_{ij}')$, associated to the new (extended) cluster $${\bf x}_k=(x_1,\ldots, x_{k-1},x_k',x_{k+1},\ldots, x_n)$$ obtained by mutation in direction $k$, via its coefficients $b_{ij}'$ as follows: $\bullet$ $b_{ij}' = -b_{ij}$ if $i = k$ or $j = k$, $\bullet$ $b_{ij}' = b_{ij} + \frac{|b_{ik}|\,b_{kj} + b_{ik}\,|b_{kj}|}{2}$ otherwise. This algorithm is called {\it matrix mutation}. Note that the principal part of $B_k$ is again skew-symmetrizable (see e.g.~\cite{FZI}). The process of obtaining a new seed is called {\it cluster mutation}. 
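As a simple illustration, consider the coefficient-free seed of type $A_2$: let $m=n=2$ and $$B=\left(\begin{array}{rr} 0 & 1\\ -1 & 0\end{array}\right)\ .$$ The exchange polynomial in direction $1$ is $P_1=x_2+1$, so mutation in direction $1$ yields the new cluster variable $x_1'=\frac{x_2+1}{x_1}$, while matrix mutation gives $B_1=-B$.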
The set of seeds obtained from a given seed $({\bf x},B)$ is called the mutation equivalence class of $({\bf x},B)$. \begin{definition} The cluster algebra $\mathfrak{A}\subset \mathfrak{F}$ corresponding to an initial seed $(x_1,\ldots, x_n,B)$ is the subalgebra of $\mathfrak{F}$ generated by the elements of all the clusters in the mutation equivalence class of $({\bf x},B)$. We refer to the elements of the clusters as the {\it cluster variables}. \end{definition} \begin{remark} Notice that the coefficients, resp.~frozen variables $x_{m+1},\ldots, x_n$ will never be mutated. Of course, that explains their name. \end{remark} We have the following fact, motivating the definition of cluster algebras in the study of total positivity phenomena and canonical bases. \begin{proposition} \cite[Section 3]{FZI} (Laurent phenomenon) Let $\mathfrak{A}$ be a cluster algebra with initial extended cluster $(x_1,\ldots, x_n)$. Any cluster variable $x$ can be expressed uniquely as a Laurent polynomial in the variables $x_1,\ldots, x_n$ with integer coefficients. \end{proposition} Moreover, it has been conjectured for all cluster algebras, and proven in many cases (see e.g.~\cite{MSW} and \cite{FST}, \cite{FT}), that the coefficients of these polynomials are positive. Finally, we recall the definition of the lower bound of a cluster algebra $\mathfrak{A}$ corresponding to a seed $({\bf x}, B)$. Denote by $y_i$ for $1\le i\le m$ the cluster variables obtained from ${\bf x}$ through mutation at $i$; i.e., they satisfy the relation $x_iy_i=P_i$. \begin{definition}\cite[Definition 1.10]{BFZ} \label{def:lower bounds} Let $\mathfrak{A}$ be a cluster algebra and let $({\bf x}, B)$ be a seed. The lower bound $\mathfrak{L}_B \subset \mathfrak{A}$ associated with $({\bf x}, B)$ is the algebra generated by the set $\{x_1,\ldots, x_n,y_1,\ldots, y_m\}$. 
\end{definition} \subsection{Upper cluster algebras} \label{se:upper cluster algebras} Berenstein, Fomin and Zelevinsky introduced the related concept of upper cluster algebras in \cite{BFZ}. \begin{definition} Let $\mathfrak{A} \subset \mathfrak{F}$ be a cluster algebra with initial seed $(x_1, \ldots, x_n, B)$ and let, as above, $y_1, \ldots, y_m$ be the cluster variables obtained by mutation in the directions $1, \ldots, m$, respectively. \noindent(a) The upper bound $\mathcal{U}_{{\bf x},B} ( \mathfrak{A})$ is defined as \begin{equation} \mathcal{U}_{{\bf x},B} ( \mathfrak{A}) = \bigcap_{j = 1}^m \mathbb{C} [x_1^{\pm 1}, \ldots, x_{j - 1}^{\pm 1}, x_j, y_j, x_{j + 1}^{\pm 1}, \ldots, x_m^{\pm 1}, x_{m+1},\ldots,x_n] \ . \end{equation} \noindent(b) The upper cluster algebra $\mathcal{U} ( \mathfrak{A})$ is defined as $$\mathcal{U} ( \mathfrak{A})=\bigcap_{({\bf x'},B')}\mathcal{U}_{{\bf x}',B'} ( \mathfrak{A})\ ,$$ where the intersection is over all seeds $({\bf x}',B')$ in the mutation equivalence class of $({\bf x},B)$. \end{definition} Observe that each cluster algebra is contained in its upper cluster algebra (see \cite{BFZ}). \subsection{Poisson structures} \label{se:poissonstructure} Cluster algebras are closely related to Poisson algebras. In this section we recall some of the related notions and results. \begin{definition} Let $k$ be a field of characteristic $0$. A Poisson algebra is a pair $(A,\{\cdot,\cdot\})$ consisting of a commutative $k$-algebra $A$ and a bilinear map $\{\cdot,\cdot\}:A\otimes A\to A$, satisfying for all $a,b,c\in A$: \begin{enumerate} \item skew-symmetry: $\{a,b\}=-\{b,a\}$, \item Jacobi identity: $\{a,\{b,c\}\}+\{c,\{a,b\}\}+\{b,\{c,a\}\}=0$, \item Leibniz rule: $\{a,bc\}=\{a,b\}c+b\{a,c\}$. \end{enumerate} \end{definition} If there is no room for confusion we will refer to a Poisson algebra $(A,\{\cdot,\cdot\})$ simply as $A$. 
A {\it Poisson ideal} $\mathcal{I}$ in a Poisson algebra $A$ is an ideal such that $\{\mathcal{I},A\}\subset \mathcal{I}$, and if $k$ is of characteristic zero, then a Poisson prime ideal is a prime ideal which is also a Poisson ideal. Gekhtman, Shapiro and Vainshtein showed in \cite{GSV} that one can associate Poisson structures to cluster algebras in the following way. Let $\mathfrak{A} \subset \mathbb{C}[x_1^{\pm 1}, \ldots, x_n^{\pm 1}] \subset \mathfrak{F}$ be a cluster algebra. A Poisson structure $\{\cdot, \cdot\}$ on $\mathbb{C} [x_1, \ldots, x_n]$ is called log-canonical if $\{x_i,x_j\}=\lambda_{ij} x_ix_j$ with $\lambda_{ij}\in \mathbb{C}$ for all $1\le i,j\le n$. The Poisson structure can be naturally extended to $\mathfrak{F}$ by using the identity $0=\{f\cdot f^{-1},g\}$ for all $f,g\in\mathbb{C} [x_1, \ldots, x_n]$. We thus obtain that $\{f^{-1},g\}=-f^{-2}\{f,g\}$ for all $f,g\in \mathfrak{F}$. We call $\Lambda=\left( \lambda_{ij}\right)_{i,j=1}^n$ the {\it coefficient matrix} of the Poisson structure. We say that a Poisson structure on $\mathfrak{F}$ is compatible with $\mathfrak{A}$ if it is log-canonical with respect to each cluster $(y_1,\ldots, y_n)$; i.e., it is log-canonical on $\mathbb{C}[y_1, \ldots, y_n]$. \begin{remark} \label{re:Class Poisson} A classification of Poisson structures compatible with cluster algebras was obtained by Gekhtman, Shapiro and Vainshtein in \cite[Theorem 1.4]{GSV}. It is easy to see from their description that if $n$ is even, then the cluster algebra has an admissible Poisson structure of maximal rank. \end{remark} We will refer to the cluster algebra $\mathfrak{A}$ defined by the initial seed $({\bf x},B)$ together with the compatible Poisson structure defined by the coefficient matrix $\Lambda$ with respect to the cluster ${\bf x}$ as the {\it Poisson cluster algebra} defined by the {\it Poisson seed} $({\bf x},B,\Lambda)$. 
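For instance, for $n=2$ and coefficient matrix $\Lambda=\left(\begin{array}{rr} 0 & \lambda\\ -\lambda & 0\end{array}\right)$ the log-canonical bracket is determined by $\{x_1,x_2\}=\lambda x_1x_2$, and the extension to $\mathfrak{F}$ yields, for example, $\{x_1^{-1},x_2\}=-x_1^{-2}\{x_1,x_2\}=-\lambda x_1^{-1}x_2$.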
It is not obvious under which conditions a Poisson seed $({\bf x},B,\Lambda)$ would yield a Poisson bracket $\{\cdot,\cdot\}_\Lambda$ on $\mathfrak{F}$ such that $\{\mathfrak{A},\mathfrak{A}\}_\Lambda\subset \mathfrak{A}$. We have, however, the following fact. \begin{proposition} Let $({\bf x},B,\Lambda)$ be a Poisson seed and $\mathfrak{A}$ the corresponding cluster algebra. Then $\Lambda$ defines a Poisson algebra structure on the upper bound $\mathcal{U}_{{\bf x},B}(\mathfrak{A})$ and the upper cluster algebra $\mathcal{U}(\mathfrak{A})$. \end{proposition} \begin{proof} Denote as above by $\{\cdot,\cdot\}_\Lambda$ the Poisson bracket on $\mathfrak{F}$ defined by $\Lambda$. Observe that the algebras $\mathbb{C}[x_1^{\pm 1},\ldots, x_{i-1}^{\pm 1}, x_i,y_i, x_{i+1}^{\pm 1}, \ldots, x_n^{\pm 1}]$ are Poisson subalgebras of the Poisson algebra $\mathbb{C}[x_1^{\pm 1},\ldots, x_n^{\pm 1}]$ for each $1\le i\le m$, as $\{x_i,y_i\}_\Lambda=\{x_i,x_i^{-1}P_i\}_\Lambda\in \mathbb{C}[x_1,\ldots, x_n]$. If $A$ is a Poisson algebra and $\{B_i\subset A:i\in I\}$ is a family of Poisson subalgebras, then $\bigcap_{i\in I} B_i$ is a Poisson algebra, as well. The assertion follows. \end{proof} \subsection{Compatible Pairs and Their Mutation} \label{se:Compatible Pairs and Mut} Section \ref{se:Compatible Pairs and Mut} is dedicated to compatible pairs and their mutation. Compatible pairs yield important examples of Poisson brackets which are compatible with a given cluster algebra structure, and as we shall see below, they are also integral in defining quantum cluster algebras. Note that our definition is slightly different from the original one in \cite{bz-qclust}. Let, as above, $m\le n$. Consider a pair consisting of an $m\times n$-integer matrix $B$, whose principal $m\times m$-submatrix is skew-symmetrizable, with rows labeled by the interval $[1,m]=\{1,\ldots, m\}$ and columns labeled by $[1,n]$, together with a skew-symmetric $n\times n$-integer matrix $\Lambda$ with rows and columns labeled by $[1,n]$. 
\begin{definition} \label{def:compa pair} Let $B$ and $\Lambda$ be as above. We say that the pair $(B,\Lambda)$ is compatible if the coefficients $d_{ij}$ of the $m\times n$-matrix $D=B\cdot \Lambda$ satisfy $d_{ij}=d_i\delta_{ij}$ for some positive integers $d_i$ ($i\in [1,m]$). \end{definition} This means that $D=B\cdot \Lambda$ is an $m\times n$-matrix whose only non-zero entries are positive integers on the diagonal of the principal $m\times m$-submatrix. The following fact is obvious. \begin{lemma} \label{le:full rank} Let $(B,\Lambda)$ be a compatible pair. Then $B\cdot \Lambda$ has full rank. \end{lemma} Let $(B,\Lambda)$ be a compatible pair and let $k\in [1,m]$. We define for $\varepsilon\in \{+1,-1\}$ an $n\times n$ matrix $E_{k,\varepsilon}$ via \begin{itemize} \item $(E_{k,\varepsilon})_{ij}=\delta_{ij}$ if $j\ne k$, \item $(E_{k,\varepsilon})_{ij}= -1$ if $i=j= k$, \item $(E_{k,\varepsilon})_{ij}= \max(0,-\varepsilon b_{ik})$ if $i\ne j= k$. \end{itemize} Similarly, we define an $m\times m$ matrix $F_{k,\varepsilon}$ via \begin{itemize} \item $(F_{k,\varepsilon})_{ij}=\delta_{ij}$ if $i\ne k$, \item $(F_{k,\varepsilon})_{ij}= -1$ if $i=j= k$, \item $(F_{k,\varepsilon})_{ij}= \max(0,\varepsilon b_{kj})$ if $i= k\ne j$. \end{itemize} We define a new pair $(B_k,\Lambda_k)$ as \begin{equation} \label{eq:mutation matrix and Poisson} B_k= F^T_{k,\varepsilon} B E_{k,\varepsilon}^T \ , \quad \Lambda_k=E_{k,\varepsilon} \Lambda E_{k,\varepsilon}^T\ , \end{equation} where $X^T$ denotes the transpose of $X$. We chose this rather indirect way of defining $E_{k,\varepsilon}$ and $F_{k,\varepsilon}$ in order to show how our definition relates to that of \cite{bz-qclust}. We will not need it in what follows. The motivation for the definition is the following fact. \begin{proposition}\cite[Prop. 3.4]{bz-qclust} \label{pr:comp under mutation} The pair $(B_k,\Lambda_k)$ is compatible. Moreover, $\Lambda_k$ is independent of the choice of the sign $\varepsilon$. 
\end{proposition} The following fact is clear. \begin{corollary} Let $\mathfrak{A}$ be a cluster algebra given by an initial seed $({\bf x}, B)$ where $B$ is an $m\times n$-matrix. If $(B,\Lambda)$ is a compatible pair, then $\Lambda$ defines a compatible Poisson bracket on $\mathfrak{F}$ and on $\mathcal{U}(\mathfrak{A})$. \end{corollary} \begin{example} \label{ex:acyclic} If $m=n$ (i.e.~there are no coefficients/frozen variables) and $B$ has full rank, then $(B, \mu B^{-1})$ is a compatible pair for all $\mu\in \mathbb{Z}_{> 0}$ such that $\mu B^{-1}$ is an integer matrix. It follows from \cite[Theorem 1.4]{GSV} that in this case all compatible Poisson brackets arise in this way. \end{example} \begin{example} Recall that double Bruhat cells in complex semisimple connected and simply connected algebraic groups have a natural structure of an upper cluster algebra (see \cite{BFZ}). Berenstein and Zelevinsky showed that the standard Poisson structure is given by compatible pairs relative to this upper cluster algebra structure (see \cite[Section 8]{bz-qclust}). \end{example} \subsection{Quantum Cluster Algebras} \label{se: q-cluster algebra spec} In this section we recall the definition of a quantum cluster algebra, introduced by Berenstein and Zelevinsky in \cite{bz-qclust}. We define, for each skew-symmetric $n\times n$-integer matrix $\Lambda$, the skew-polynomial ring $\mathbb{C}_\Lambda^t[x_1,\ldots, x_n]$ to be the $\mathbb{C}[t^{\pm 1}]$-algebra generated by $x_1,\ldots, x_n$ subject to the relations $$x_ix_j=t^{\lambda_{ij}} x_jx_i\ .$$ Analogously, the quantum torus $H_\Lambda^t=\mathbb{C}_\Lambda^t[x_1^{\pm 1},\ldots, x_n^{\pm 1} ]$ is defined as the localization of $\mathbb{C}_\Lambda^t[x_1,\ldots, x_n]$ at the monoid generated by $\{x_1,\ldots, x_n\}$, which is an Ore set. 
The quantum torus is clearly contained in the skew-field of fractions $\mathcal{F}_\Lambda$ of $\mathbb{C}_\Lambda^t[x_1,\ldots, x_n]$, and the Laurent monomials define a lattice $L\subset H_\Lambda^t\subset \mathcal{F}_\Lambda$ isomorphic to $\mathbb{Z}^n$. Denote for each ${\bf e}=(e_1,\ldots, e_n)\in \mathbb{Z}^n$ by $x^{\bf e}$ the monomial $x_1^{e_1}\ldots x_n^{e_n}$. We need the notion of a toric frame in order to define the quantum cluster algebra. \begin{definition} A toric frame in $\mathfrak F$ is a mapping $M:\mathbb{Z}^n\to \mathfrak F\setminus\{0\}$ of the form $$M(c)=\phi(x^{\eta(c)})\ ,$$ where $\phi$ is a $\mathbb{Q}(t^{1/2})$-algebra automorphism of $\mathfrak F$ and $\eta: \mathbb{Z}^n\to L$ an isomorphism of lattices. \end{definition} Since a toric frame $M$ is determined uniquely by the images $\phi(x^{\eta(e_1)}),\ldots, \phi(x^{\eta(e_n)})$ of the standard basis vectors $e_1,\ldots,e_n$ of $\mathbb{Z}^n$, we can associate to each toric frame a skew-symmetric $n\times n$-integer matrix $\Lambda_M$. We can now define the quantized version of a seed. \begin{definition}\cite[Definition 4.5]{bz-qclust} A quantum seed is a pair $(M,B)$ where \begin{itemize} \item $M$ is a toric frame in $\mathfrak F$. \item $B$ is an $m\times n$-integer matrix with rows labeled by $[1,m]$ and columns labeled by $[1,n]$. \item The pair $(B,\Lambda_M)$ is compatible. \end{itemize} \end{definition} Now we define the seed mutation in direction of an exchangeable index $k\in [1,m]$. For each $\varepsilon\in \{+1,-1\}$ we define a mapping $M_k: \mathbb{Z}^n\to \mathfrak F$ via $$M_k(c)=\sum_{p=0}^{c_k} \binom{c_k}{p}_{t^{d_k/2}} M(E_{k,\varepsilon}\, c +\varepsilon p b^k)\ , \quad M_k(-c)=M_k(c)^{-1}\ ,$$ for $c\in\mathbb{Z}^n$ with $c_k\ge 0$, where we use the well-known $q$-binomial coefficients (see e.g. \cite[Equation 4.11]{bz-qclust}), and the matrix $E_{k,\varepsilon}$ defined in Section \ref{se:Compatible Pairs and Mut}.
Define $B_k$ to be obtained from $B$ by the standard matrix mutation in direction $k$, as in Section \ref{se:Cluster Algebras-def}. One obtains the following fact. \begin{proposition} \cite[Prop. 4.7]{bz-qclust} (a) The map $M_k$ is a toric frame, independent of the choice of sign $\varepsilon$. \noindent(b) The pair $(M_k,B_k)$ is a quantum seed. \end{proposition} Now, given an {\it initial quantum seed} $(M,B)$, denote, in a slight abuse of notation, $X_1=M(e_1),\ldots, X_n=M(e_n)$; we refer to these as the {\it cluster variables} associated to the quantum seed $(M,B)$. Here our nomenclature differs slightly from \cite{bz-qclust}, since there one considers the coefficients not to be cluster variables. We now define the seed mutation $$X_k'=M(-e_k+\sum_{b_{ik}>0} b_{ik}e_i)+ M(-e_k-\sum_{b_{ik}<0} b_{ik}e_i)\ .$$ We obtain that $X_k'=M_k(e_k)$ (see \cite[Prop. 4.9]{bz-qclust}). We say that two quantum seeds $(M,B)$ and $(M',B')$ are mutation-equivalent if they can be obtained from one another by a sequence of mutations. Since mutations are involutive (see \cite[Prop 4.10]{bz-qclust}), the quantum seeds in $\mathfrak{F}$ can be grouped into equivalence classes defined by the relation of mutation equivalence. The quantum cluster algebra defined by a quantum seed $(M,B)$ in $\mathfrak F$ is the $\mathbb{C}[t^{\pm 1}]$-subalgebra of $\mathfrak F$ generated by the cluster variables associated to the seeds in its equivalence class. \begin{remark} There are definitions of quantum lower bounds, upper bounds and quantum upper cluster algebras (see \cite[Sections 5 and 7]{bz-qclust}), analogous to the classical case. \end{remark} \section{Intersections of Ideals with Clusters} \label{se:key propositions} In the present section we consider the intersection between Poisson ideals in a cluster algebra and individual clusters. Moreover, we prove quantum analogues of the propositions whenever available.
\begin{proposition} \label{pr:TPP super toric} Let ${\bf x}$ be a cluster, $rank(\Lambda)=n$ and $\mathcal{I}$ be a non-zero Poisson ideal. Then the ideal $\mathcal{I}$ contains a monomial $x^{\bf m}\in \mathbb{C}[{\bf x}]$. \end{proposition} \begin{proof} Notice first that $\mathcal{I}_{\bf x}=\mathcal{I}\cap \mathbb{C}[x_1,\ldots, x_n]\ne 0$. Indeed, let $0\ne f\in \mathcal{I}$. We can express $f$ as a Laurent polynomial in the variables $x_1,\ldots, x_n$; i.e., $f=x_{1}^{-c_1}\ldots x_n^{-c_n} g$ where $c_1,\ldots, c_n\in \mathbb{Z}_{\ge 0}$ and $0\ne g\in\mathbb{C}[x_1,\ldots, x_n]$. Clearly, $g= x_{1}^{c_1}\ldots x_n^{c_n}f \in \mathcal{I}_{\bf x}$. We complete the proof by contradiction: suppose that $\mathcal{I}$ contains no monomial, and let $f=\sum_{{\bf w}\in \mathbb{Z}^n} c_{\bf w} x^{\bf w}\in\mathcal{I}_{\bf x}$ be a non-zero element with the smallest number of non-zero summands. Since no monomial term $c_{\bf w} x^{\bf w}$ with $c_{\bf w}\neq 0$ is contained in $\mathcal{I}$, the element $f$ must have at least two monomial terms. Pick two of them, corresponding to exponents ${\bf w}\ne{\bf w}'$ with $c_{\bf w},c_{{\bf w}'}\ne 0$, and denote by ${\bf v}$ the difference ${\bf v}={\bf w}-{\bf w}'$. Since $\Lambda$ has full rank, there exists $i\in [1,n]$ such that $\{x_i,x^{\bf v}\}\ne 0$. Therefore, $\{x_i, x^{\bf w}\}=c\,x_ix^{\bf w}$ and $\{x_i, x^{{\bf w}'}\}=d\,x_ix^{{\bf w}'}$ for some $c\ne d\in \mathbb{Z}$. More generally, $\{x_i,f\}=\sum_{{\bf w}\in \mathbb{Z}^n} c_{\bf w} \lambda_{\bf w}x^{\bf w}x_i$ for certain $ \lambda_{\bf w}\in\mathbb{Z}$. Clearly, $cx_if-\{x_i,f\}\in \mathcal{I}$ and $$ cx_if-\{x_i,f\}=(c-c)c_{\bf w} x^{\bf w}x_i+(c-d)c_{{\bf w}'} x^{{\bf w}'}x_i+\ldots\ , $$ so the term involving $x^{\bf w}$ cancels while the term involving $x^{{\bf w}'}$ survives. Hence, $cx_if-\{x_i,f\}\ne 0$ and it has fewer monomial summands than $f$, which contradicts our minimality assumption. Therefore, $\mathcal{I}$ contains a monomial. The proposition is proved. \end{proof} The following fact is an obvious corollary of Proposition \ref{pr:TPP super toric}.
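The cancellation used in the proof above can be checked numerically. The following toy sketch (plain Python; the helper names are ours) models a log-canonical bracket $\{x_i,x^{\bf w}\}=(\sum_j\lambda_{ij}w_j)\,x_ix^{\bf w}$ and verifies that forming $c\,x_if-\{x_i,f\}$ strictly reduces the number of monomial terms.

```python
# Sketch of the key step in the proof: bracketing with a cluster variable and
# subtracting kills one monomial term of f while keeping the element in the
# ideal. Elements are dicts mapping exponent tuples to integer coefficients.

LAM = [[0, 1], [-1, 0]]   # a full-rank skew-symmetric Lambda, here n = 2

def bracket_coeff(i, w):
    # {x_i, x^w} = (sum_j LAM[i][j] * w_j) * x_i * x^w  (log-canonical bracket)
    return sum(LAM[i][j] * w[j] for j in range(len(w)))

def shrink(f, i, w0):
    # form  c * x_i * f - {x_i, f}  with c = bracket_coeff(i, w0):
    # the x^{w0}-term cancels; any term with a different bracket coefficient
    # survives, multiplied by x_i (exponent shifted by e_i)
    c = bracket_coeff(i, w0)
    g = {}
    for w, cw in f.items():
        coeff = cw * (c - bracket_coeff(i, w))
        if coeff != 0:
            g[tuple(w[j] + (j == i) for j in range(len(w)))] = coeff
    return g

f = {(2, 0): 3, (0, 1): 5}    # f = 3*x1^2 + 5*x2, two monomial terms
g = shrink(f, i=0, w0=(2, 0))
assert len(g) == 1 and len(f) == 2   # one fewer term, still non-zero
```

Iterating `shrink` until a single term remains mirrors how the minimality argument produces a monomial inside the ideal.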
\begin{proposition} \label{th:intersection} Let $\mathfrak{A}$ be the cluster algebra defined by a Poisson seed $({\bf x}, B,\Lambda)$ with $rank(\Lambda)=n$. If $\mathcal{I}\subset \mathfrak{A}$ is a non-zero Poisson prime ideal, then $\mathcal{I}$ contains a cluster variable $x_i$. \end{proposition} We also have the following quantum version of Proposition \ref{pr:TPP super toric}, which is proved analogously to the classical case. \begin{proposition} \label{pr:TPP super toric q} Let $({\bf x},B,\Lambda)$ be a quantum seed, $rank(\Lambda)=n$ and $\mathcal{I}$ a non-zero two-sided ideal. Then the ideal $\mathcal{I}$ contains a monomial $x^{\bf m}\in \mathbb{C}_\Lambda[{\bf x}]$. \end{proposition} \begin{remark} We do not have a quantum version of Proposition \ref{th:intersection} because we do not know whether prime ideals in quantum cluster algebras are completely prime. \end{remark} \section{Poisson ideals in acyclic cluster algebras} \label{se:Poisson and symp geom} In this section we recall results from our previous paper \cite{ZW tpc}. As the proofs are rather short, we include them for the reader's convenience. Recall, e.g.~from \cite{BFZ}, that cluster algebras associated with an acyclic quiver and with trivial coefficients correspond, up to a reordering of the variables of the acyclic seed, to cluster algebras defined by a seed $({\bf x},B)$ where $B$ is a skew-symmetric $n\times n$-matrix with $b_{ij}>0$ if $i<j$. Berenstein, Fomin and Zelevinsky proved in \cite{BFZ} that such a cluster algebra $\mathfrak{A}$ is equal to both its lower and upper bounds. Thus, it is Noetherian and, if $B$ has full rank, a Poisson algebra with the Poisson brackets given by compatible pairs $(B,\Lambda)$ with $\Lambda=\mu B^{-1}$ for certain $\mu\in \mathbb{Z}$ (see Example \ref{ex:acyclic}). In order for $B$ to have full rank we have to assume that $n=2k$ is even. Let $P_i=m_i^+ +m_i^-$, where $m_i^+$ and $m_i^-$ denote the monomial terms of the exchange polynomial $P_i$.
Then $\{y_i,x_i\}=\mu_1m_i^+ +\mu_2m_i^-$ for some $\mu_1,\mu_2\in \mathbb{Z}$. Additionally, we want to require that $\mu_1\ne \mu_2$. To ensure this, we assume that \begin{equation} \label{eq:poisson gen} \sum_{j=1}^n (b^{-1})_{ij}\left(\max(b_{ij},0)+\min(b_{ij},0)\right)\ne 0\ \end{equation} for all $i\in [1,n]$. We have the following result. \begin{theorem}\cite{ZW tpc} \label{th:acyclic} Let $\mathfrak{A}$ be an acyclic cluster algebra over $\mathbb{C}$ with trivial coefficients of even rank $n=2k$, given by a seed $(x_1,\ldots, x_{n}, B)$ where $B$ is a skew-symmetric $n\times n$-integer matrix satisfying $b_{ij}>0$ if $i<j$, and suppose that $B$ and $B^{-1}$ satisfy Equation \ref{eq:poisson gen} for each $i\in[1,n]$. Then the Poisson cluster algebra defined by a compatible pair $(B,\Lambda)$, where $\Lambda=\mu B^{-1}$ with $0\ne \mu\in \mathbb{Z}$, contains no non-trivial Poisson prime ideals. \end{theorem} \begin{proof} Suppose that there exists a non-trivial Poisson prime ideal $\mathcal{I}$. Then $\mathcal{I}\cap {\bf x}$ is non-empty by Proposition \ref{pr:TPP super toric}, hence $\mathcal{I}\cap {\bf x}=\{x_{i_1},\ldots, x_{i_j}\}$ for some $1\le i_1< i_2< \ldots< i_j\le 2k$. Additionally, observe that $P_{i_1}=m_{i_1}^++m_{i_1}^-$ has to be contained in $\mathcal{I}$, as well as $$\{y_{i_1},x_{i_1}\}=\mu_1m_{i_1}^+ +\mu_2m_{i_1}^-\ .$$ By our assumption, we have $\mu_1\ne \mu_2$, and therefore $m_{i_1}^-\in \mathcal{I}$. Hence, $x_h\in\mathcal{I}$ for some $h\in[1, i_{1}-1]$ or $1\in \mathcal{I}$: indeed, $b_{i_1,h}<0$ implies $h<i_1$, so every variable dividing $m_{i_1}^-$ has index smaller than $i_1$. The first case contradicts the minimality of $i_1$, the second the fact that $\mathcal{I}$ is proper. The theorem is proved. \end{proof} The theorem has the following corollary, which was also proved independently, and in greater generality, by Muller \cite{Mu 1}. \begin{corollary} \label{co:acyclic smooth} Let $\mathfrak{A}$ be as in Theorem \ref{th:acyclic}. Then, the variety $X$ defined by $\mathfrak{A}=\mathbb{C}[X]$ is smooth.
\end{corollary} \begin{proof} The singular subset is contained in a Poisson ideal of co-dimension greater than or equal to one by a result of Polishchuk (\cite{Pol}). It is well known that this Poisson ideal must be contained in a proper Poisson prime ideal (see also \cite{ZW tpc}). The assertion follows. \end{proof} \begin{remark} The assumption that the cluster algebra has even rank is very important. Indeed, Muller has recently shown that the variety corresponding to the cluster algebra of type $A_3$ has a singularity (\cite[Section 6.2]{Mu}). \end{remark} \subsection{Symplectic Structure} \subsubsection{Symplectic geometry of Poisson varieties} In this section we recall some well-known properties of the symplectic structure on Poisson varieties. Our discussion follows along the lines of \cite[Part III.5]{brown-goodearl}. If $A$ is a Poisson algebra over a field $k$, then each $a\in A$ defines a derivation $X_a$ on $A$ via $$X_a(b)=\{a,b\}\ .$$ This derivation is called the {\it Hamiltonian vector field} of $a$ on $A$. Now suppose that $A$ is the coordinate ring of an affine complex variety $Y$. We associate to the Poisson bracket the {\it Poisson bivector} $u\in \Lambda^2 T(Y)$, where $T(Y)$ denotes the tangent bundle of $Y$. Let $p\in Y$ be a point and $\mathfrak{m}_p\subset A$ the corresponding maximal ideal. Let $\alpha,\beta\in \mathfrak{m}_p/\mathfrak{m}_p^2$ be elements of the cotangent space and let $f,g$ be lifts of $\alpha$ and $\beta$, respectively. We define $u_p\in \Lambda^2 T_p(Y)$ via $$u_p(\alpha,\beta)=\{f,g\}(p)\ .$$ Note that $u_p$ is a well-defined skew-symmetric form. Indeed, if $\mathcal{I}\subset A$ is an ideal and $b\in \mathcal{I}^2$, then for all $a\in A$ $$ \{a,b\}\in \mathcal{I}\ $$ by the Leibniz rule. The form $u_p$ may be degenerate. If it is non-degenerate at every point $p\in Y$, then we call $u$ symplectic; in that case, if $Y$ is moreover connected, then $Y$ is smooth and a (holomorphic) symplectic manifold.
Define $$N(p)=\{\alpha \in T^*_p Y:u_p(\alpha,\cdot)=0\}\ ,$$ and let $H(p)\subset T_p Y$ be its orthogonal complement. The space $H(p)$ is spanned by the Hamiltonian vector fields at $p$. Recall that, by the Theorem of Frobenius, a Poisson variety $Y$ decomposes as a disjoint union of symplectic leaves, its maximal symplectic submanifolds. The tangent space of the symplectic leaf at the point $p$ is $H(p)$. \subsubsection{The Main Theorem} Corollary \ref{co:acyclic smooth} implies that $X$ is smooth, hence it has the structure of a complex manifold. We have the following result. \begin{theorem} \label{th:symplectic leaves} Let $\mathfrak{A}$ be an acyclic cluster algebra as defined above, $X$ an affine variety such that $\mathfrak{A}=\mathbb{C}[X]$ and let $\Lambda$ define a compatible Poisson bracket. Then, $X$ is a holomorphic symplectic manifold. \end{theorem} \begin{proof} First, let $p\in X$ be a generic point, by which we mean that $x_i(p)\ne 0$ for all $i\in[1,n]$. Set $x_i(p)=p_i$. It is easy to see that the Hamiltonian vector field $X_{x_i}$ at $p$ evaluates in the local coordinates $(x_1,\ldots, x_n)$ as $$ X_{x_i}(p)= (\lambda_{i1}\, p_ip_1,\ldots, \lambda_{in}\, p_ip_n)\ ,$$ where $\Lambda=(\lambda_{ij})_{i,j=1}^n$. Since $\Lambda$ is non-degenerate and all $p_j\ne 0$, we obtain that the Hamiltonian vector fields span the tangent space $T_p X$ at $p$. It remains to consider the case when $p_i=0$ for some $i\in[1,n]$. Suppose that $p\in X$ is such that $p_i=0$ and $p_j\ne 0$ for all $j<i$. We have to show that the symplectic leaf containing $p$ is not contained in the hyper-surface $x_i=0$. We may assume, employing induction, that if $p_1,\ldots, p_i\ne 0$, then the symplectic leaf at $p$ has full rank. We now claim that $\{x_i,y_i\}(p)\ne 0 $. Indeed, suppose that $\{x_i,y_i\}(p)=(\mu_1 m^+ +\mu_2 m^-)(p)=0$.
Since $p_i=0$ implies that $(m^++m^-)(p)=0$, we would conclude, as in the proof of Theorem \ref{th:acyclic}, that $m^+(p)=m^-(p)=0$; but this contradicts our assumption that $p_j\ne 0$ for all $j<i$. This proves the claim. Denote by $u\in \Lambda^2 T(X)$ the Poisson bivector. We obtain that $u_p(dx_i,\cdot)\ne 0$, hence the symplectic leaf containing $p$ is not tangent to the hypersurface $x_i=0$ at $p$. It must therefore contain a point in an analytic neighborhood of $p$ at which $x_i\ne 0$ and, by our assumption, $x_j\ne 0$ for all $j<i$. Hence, by induction, every symplectic leaf has dimension $n$ and is, in particular, open in $X$. But the manifold $X$ is connected and therefore cannot be written as a disjoint union of more than one non-empty open submanifold; hence $X$ contains only one symplectic leaf and the theorem is proved. \end{proof} \begin{remark} This result can be easily generalized to acyclic cluster algebras with invertible coefficients (using Remark \ref{re:Class Poisson}). This suggests that, in the set-up of locally acyclic cluster algebras, it should be easy to show that the spectrum of a locally acyclic cluster algebra is a holomorphic symplectic manifold. \end{remark} \section{Ideals in Acyclic Quantum Cluster Algebras} \label{se:ideals in qca} \begin{theorem} \label{th:ideals in qclalg} Let $\mathfrak{A}_q$ be an acyclic quantum cluster algebra with quantum seed $({\bf x}, B,\Lambda)$ satisfying Equation \ref{eq:poisson gen}. Then, $\mathfrak{A}_q$ does not contain any non-trivial proper prime ideals. \end{theorem} \begin{proof} Let $\mathcal{I}$ be a prime ideal in $\mathfrak{A}_q$. We obtain from Proposition \ref{pr:TPP super toric q} that $\mathcal{I}$ contains a monomial $x^{\bf v}$ with ${\bf v}\in\mathbb{Z}_{\ge 0}^n$. It is easy to observe that we can choose ${\bf v}$ to be minimal with respect to the lexicographic order on $\mathbb{Z}^n_{\ge 0}$.
Recall that the lexicographic ordering defines ${\bf u}<{\bf w}$ for ${\bf u},{\bf w}\in \mathbb{Z}^n$ if and only if there exists $i\in [1,n]$ such that $u_i<w_i$ and $u_j=w_j$ for $j>i$. Recall that this defines a total ordering on $\mathbb{Z}^n_{\ge0}$. There exists some $i\in[1,n]$ such that $v_i\ne 0$ and $v_k=0$ for all $k>i$, and we write $x^{\bf v}=x^{\bf v'} x_i^{v_i}$ with ${\bf v'}={\bf v}-v_i{\bf e}_i$. Recall that an ideal $\mathcal{I}$ in a non-commutative ring $R$ is prime if $arb\in \mathcal{I}$ for all $r\in R$ implies that $a\in \mathcal{I}$ or $b\in \mathcal{I}$. Now, since $\mathfrak{A}_q$ is acyclic, we know from \cite[Theorem 7.3 and 7.5]{bz-qclust} that it is isomorphic to its lower bound. Hence, employing the notation of Definition \ref{def:lower bounds}, each element $a\in \mathfrak{A}_q$ can be written as $a=\sum_{p=1}^r c_p x^{\bf w}y_i^h y^{\bf w'}$, where the exponents ${\bf w}, h, {\bf w'}$ depend on $p$ and satisfy $w_kw'_k=w_i h =0$, and where $c_p\in\mathbb{C}[t^{\pm 1}]$ for all $p\in[1,r]$. Observe that $y_k$ and $x_\ell$ skew-commute if $k\ne \ell$, as they are cluster variables in the cluster ${\bf x}_k$. We now compute that $$ x^{\bf v'} ax_i^{v_i}=\sum_{p=1}^r c_p t^{\lambda_p} x^{\bf w} y_i^h x^{\bf v'} x_i^{v_i} y^{\bf w'}= \sum_{p=1}^r c_p t^{\lambda_p} x^{\bf w} y_i^h x^{\bf v} y^{\bf w'} \in \mathcal{I}\ ,$$ for all $a\in\mathfrak{A}_q$ and certain $\lambda_p\in\mathbb{Z}$. Hence, since $\mathcal{I}$ is a prime ideal, $x^{\bf v'}\in \mathcal{I}$ or $x_{i}^{v_i}\in\mathcal{I}$. But ${\bf v'}<{\bf v}$, so the minimality of ${\bf v}$ forces $x_i^{v_i}\in\mathcal{I}$; since $v_i{\bf e}_i\le {\bf v}$, minimality moreover yields ${\bf v}=v_i{\bf e}_i$, i.e., $x_i^{v_i}=x^{\bf v}\in\mathcal{I}$. But we have assumed in Equation \ref{eq:poisson gen} that $x_iy_i$ and $y_ix_i$ are not linearly dependent, hence $x_i^{v_i} y_i$ and $y_i x_i^{v_i}$ are not linearly dependent. As in the proof of Theorem \ref{th:acyclic}, we now argue that $x_i^{v_i-1 }m_i^{+}\in \mathcal{I}\ $, where $P_i=m_i^++m_i^-$ (see Section \ref{se:Poisson and symp geom}). But this contradicts our minimality assumption.
The theorem is proved. \end{proof} Theorem \ref{th:ideals in qclalg} and Theorem \ref{th:symplectic leaves} have the following immediate corollary. \begin{corollary} Let $({\bf x}, B,\Lambda)$ be an acyclic Poisson or quantum seed, as defined above. Then the space of primitive ideals in the quantum cluster algebra (a single point, corresponding to the zero ideal) and the space of symplectic leaves (likewise a single point) are homeomorphic. \end{corollary} \end{document}
Arrangement of lines

In geometry, an arrangement of lines is the subdivision of the plane formed by a collection of lines. Problems of counting the features of arrangements have been studied in discrete geometry, and computational geometers have found algorithms for the efficient construction of arrangements.

Definition

Intuitively, any finite set of lines in the plane cuts the plane into two-dimensional polygons (cells), one-dimensional line segments or rays, and zero-dimensional crossing points. This can be formalized mathematically by classifying the points of the plane according to which side of each line they are on. Each line separates the plane into two open half-planes, and each point of the plane has three possibilities per line: it can be in either one of these two half-planes, or it can be on the line itself. Two points can be considered to be equivalent if they have the same classification with respect to all of the lines. This is an equivalence relation, whose equivalence classes are subsets of equivalent points. These subsets subdivide the plane into shapes of the following three types:

1. The cells or chambers of the arrangement are two-dimensional regions not part of any line. They form the interiors of bounded or unbounded convex polygons. If the plane is cut along all of the lines, these are the connected components of the points that remain uncut.
2. The edges or panels of the arrangement are one-dimensional regions belonging to a single line. They are the open line segments and open infinite rays into which each line is partitioned by its crossing points with the other lines. That is, if one of the lines is cut by all the other lines, these are the connected components of its uncut points.
3. The vertices of the arrangement are isolated points belonging to two or more lines, where those lines cross each other.
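The sign-vector classification above lends itself to a direct computational check. The following sketch (plain Python; the helper names are ours) enumerates the cells of a three-line arrangement by sampling sign vectors, recovers the vertices as pairwise intersections, and infers the edge count from the Euler relation $V-E+F=1$ for a subdivision of the plane:

```python
# Classify sample points by which side of each line they lie on (their sign
# vector); distinct sign vectors correspond to cells of the arrangement.
# Lines are stored as (a, b, c), meaning the line ax + by + c = 0.
from itertools import combinations

lines = [(1.0, 0.0, 0.0),    # x = 0
         (0.0, 1.0, 0.0),    # y = 0
         (1.0, 1.0, -1.0)]   # x + y = 1

def sign_vector(x, y):
    return tuple(1 if a * x + b * y + c > 0 else -1 for a, b, c in lines)

# A grid offset so that no sample point falls exactly on one of the lines.
cells = {sign_vector(-2.987 + 0.05 * i, -2.983 + 0.05 * j)
         for i in range(121) for j in range(121)}

def intersection(l1, l2):
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel lines never cross
    return (-(c1 * b2 - c2 * b1) / det, -(a1 * c2 - a2 * c1) / det)

vertices = {intersection(l1, l2) for l1, l2 in combinations(lines, 2)}

n = len(lines)
V, F = len(vertices), len(cells)
E = V + F - 1                      # Euler relation V - E + F = 1 for the plane
assert V == n * (n - 1) // 2       # one vertex per pair of crossing lines
assert F == n * (n + 1) // 2 + 1   # maximum number of cells
assert E == n * n                  # each line is split into n edges
```

The sampling step only approximates the cell count in general, but for this small generic arrangement a 0.05-spaced grid over $[-3,3]^2$ hits every cell.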
The boundary of a cell is the system of edges that touch it, and the boundary of an edge is the set of vertices that touch it (one vertex for a ray and two for a line segment). The system of objects of all three types, linked by this boundary operator, forms a cell complex covering the plane. Two arrangements are said to be isomorphic or combinatorially equivalent if there is a one-to-one boundary-preserving correspondence between the objects in their associated cell complexes.[1] The same classification of points, and the same shapes of equivalence classes, can be used for infinite but locally finite arrangements, in which every bounded subset of the plane may be crossed by only finitely many lines,[2] although in this case the unbounded cells may have infinitely many sides.[3]

Complexity of arrangements

The study of arrangements was begun by Jakob Steiner, who proved the first bounds on the maximum number of features of different types that an arrangement may have.[4] The most straightforward features to count are the vertices, edges, and cells:

• An arrangement with $n$ lines has at most $n(n-1)/2$ vertices (a triangular number), one per pair of crossing lines. This maximum is achieved for simple arrangements, those in which each two lines cross at a vertex that is disjoint from all the other lines. The number of vertices is smaller when some lines are parallel, or when some vertices are crossed by more than two lines.[5]
• Any arrangement can be rotated to avoid axis-parallel lines, without changing its number of cells. Any arrangement with no axis-parallel lines has $n$ infinite-downward rays, one per line. These rays separate $n+1$ cells of the arrangement that are unbounded in the downward direction. The remaining cells all have a unique bottommost vertex (again, because there are no axis-parallel lines).
For each pair of lines, there can be only one cell where the two lines meet at the bottom vertex, so the number of downward-bounded cells is at most the number of pairs of lines, $n(n-1)/2$. Adding the unbounded and bounded cells, the total number of cells in an arrangement can be at most $n(n+1)/2+1$.[5] These are the numbers of the lazy caterer's sequence.[6]
• The number of edges of the arrangement is at most $n^{2}$, as may be seen either by using the Euler characteristic to calculate it from the numbers of vertices and cells, or by observing that each line is partitioned into at most $n$ edges by the other $n-1$ lines. Again, this worst-case bound is achieved for simple arrangements.[5]

More complex features go by the names of "zones", "levels", and "many faces":

• The zone of a line $\ell $ in a line arrangement is the collection of cells having edges belonging to $\ell $. The zone theorem states that the total number of edges in the cells of a single zone is linear. More precisely, the total number of edges of the cells belonging to a single side of line $\ell $ is at most $5n-1$,[7] and the total number of edges of the cells belonging to both sides of $\ell $ is at most $\lfloor 9.5n\rfloor -1$.[8] More generally, the total complexity of the cells of a line arrangement that are intersected by any convex curve is $O(n\alpha (n))$, where $\alpha $ denotes the inverse Ackermann function, as may be shown using Davenport–Schinzel sequences.[9] The sum of squares of cell complexities in an arrangement is $O(n^{2})$, as can be shown by summing the zones of all lines.[10]
• The $k$-level of an arrangement is the polygonal chain formed by the edges that have exactly $k$ other lines directly below them. The $\leq k$-level is the portion of the arrangement below the $k$-level. Finding matching upper and lower bounds for the complexity of a $k$-level remains a major open problem in discrete geometry.
The best upper bound is $O(nk^{1/3})$, while the best lower bound is $n2^{\Omega ({\sqrt {\log k}})}$.[11] In contrast, the maximum complexity of the $\leq k$-level is known to be $\Theta (nk)$.[12] A $k$-level is a special case of a monotone path in an arrangement; that is, a sequence of edges that intersects any vertical line in a single point. However, monotone paths may be much more complicated than $k$-levels: there exist arrangements and monotone paths in these arrangements where the number of points at which the path changes direction is $n^{2-o(1)}$.[13] • Although a single cell in an arrangement may be bounded by all $n$ lines, it is not possible in general for $m$ different cells to all be bounded by $n$ lines. Rather, the total complexity of $m$ cells is at most $\Theta (m^{2/3}n^{2/3}+n)$,[14] almost the same bound as occurs in the Szemerédi–Trotter theorem on point-line incidences in the plane. A simple proof of this follows from the crossing number inequality:[15] if $m$ cells have a total of $x+n$ edges, one can form a graph with $m$ nodes (one per cell) and $x$ edges (one per pair of consecutive cells on the same line). The edges of this graph can be drawn as curves that do not cross within the cells corresponding to their endpoints, and then follow the lines of the arrangement. Therefore, there are $O(n^{2})$ crossings in this drawing. However, by the crossing number inequality, there are $\Omega (x^{3}/m^{2})$ crossings. 
In order to satisfy both bounds, $x$ must be $O(m^{2/3}n^{2/3})$.[16]

Projective arrangements and projective duality

It is often convenient to study line arrangements not in the Euclidean plane but in the projective plane, due to the fact that in projective geometry every pair of lines has a crossing point.[17] In the projective plane, it is not possible to define arrangements using sides of lines, because a line in the projective plane does not separate the plane into two distinct sides.[18] However, one may still define the cells of an arrangement to be the connected components of the points not belonging to any line, the edges to be the connected components of sets of points belonging to a single line, and the vertices to be points where two or more lines cross. A line arrangement in the projective plane differs from its Euclidean counterpart in that the two Euclidean rays at either end of a line are replaced by a single edge in the projective plane that connects the leftmost and rightmost vertices on that line, and in that pairs of unbounded Euclidean cells are replaced in the projective plane by single cells that are crossed by the projective line at infinity.[19] Due to projective duality, many statements about the combinatorial properties of points in the plane may be more easily understood in an equivalent dual form about arrangements of lines. For instance, the Sylvester–Gallai theorem, stating that any non-collinear set of points in the plane has an ordinary line containing exactly two points, transforms under projective duality to the statement that any projective arrangement of finitely many lines with more than one vertex has an ordinary point, a vertex where only two lines cross.
The earliest known proof of the Sylvester–Gallai theorem, by Melchior (1940), uses the Euler characteristic to show that such a vertex must always exist.[20]

Triangles in arrangements

An arrangement of lines in the projective plane is said to be simplicial if every cell of the arrangement is bounded by exactly three edges. Simplicial arrangements were first studied by Melchior.[21] Three infinite families of simplicial line arrangements are known:

1. A near-pencil consisting of $n-1$ lines through a single point, together with a single additional line that does not go through the same point,
2. The family of lines formed by the sides of a regular polygon together with its axes of symmetry, and
3. The sides and axes of symmetry of an even regular polygon, together with the line at infinity.

Additionally there are many other examples of sporadic simplicial arrangements that do not fit into any known infinite family.[22] As Branko Grünbaum writes, simplicial arrangements "appear as examples or counterexamples in many contexts of combinatorial geometry and its applications."[23] For instance, Artés, Grünbaum & Llibre (1998) use simplicial arrangements to construct counterexamples to a conjecture on the relation between the degree of a set of differential equations and the number of invariant lines the equations may have. The two known counterexamples to the Dirac–Motzkin conjecture (which states that any $n$-line arrangement has at least $n/2$ ordinary points) are both simplicial.[24] The dual graph of a line arrangement has one node per cell and one edge linking any pair of cells that share an edge of the arrangement. These graphs are partial cubes, graphs in which the nodes can be labeled by bitvectors in such a way that the graph distance equals the Hamming distance between labels.
In the case of a line arrangement, each coordinate of the labeling assigns 0 to nodes on one side of one of the lines and 1 to nodes on the other side.[25] Dual graphs of simplicial arrangements have been used to construct infinite families of 3-regular partial cubes, isomorphic to the graphs of simple zonohedra.[26]

[Figure: an arrangement with the minimum number of triangles according to Roberts's triangle theorem. Figure: Kobon triangles in an arrangement of 17 lines.]

It is also of interest to study the extremal numbers of triangular cells in arrangements that may not necessarily be simplicial. Any arrangement in the projective plane must have at least $n$ triangles. Every arrangement that has only $n$ triangles must be simple.[27] For Euclidean rather than projective arrangements, the minimum number of triangles is $n-2$, by Roberts's triangle theorem.[28] The maximum possible number of triangular faces in a simple arrangement is known to be upper bounded by $n(n-1)/3$ and lower bounded by $n(n-3)/3$; the lower bound is achieved by certain subsets of the diagonals of a regular $2n$-gon.[29] For non-simple arrangements the maximum number of triangles is similar but more tightly bounded.[30] The closely related Kobon triangle problem asks for the maximum number of non-overlapping finite triangles in an arrangement in the Euclidean plane, not counting the unbounded faces that might form triangles in the projective plane. For some but not all values of $n$, $n(n-2)/3$ triangles are possible.[31]

Multigrids and rhombus tilings

The dual graph of a simple line arrangement may be represented geometrically as a collection of rhombi, one per vertex of the arrangement, with sides perpendicular to the lines that meet at that vertex. These rhombi may be joined together to form a tiling of a convex polygon in the case of an arrangement of finitely many lines, or of the entire plane in the case of a locally finite arrangement with infinitely many lines.
This construction is sometimes known as a Klee diagram, after a publication of Rudolf Klee in 1938 that used this technique. Not every rhombus tiling comes from lines in this way, however.[32] de Bruijn (1981) investigated special cases of this construction in which the line arrangement consists of $k$ sets of equally spaced parallel lines. For two perpendicular families of parallel lines this construction just gives the familiar square tiling of the plane, and for three families of lines at 120-degree angles from each other (themselves forming a trihexagonal tiling) this produces the rhombille tiling. However, for more families of lines this construction produces aperiodic tilings. In particular, for five families of lines at equal angles to each other (or, as de Bruijn calls this arrangement, a pentagrid) it produces a family of tilings that include the rhombic version of the Penrose tilings.[33]

[Figures: the tetrakis square tiling, the triangular tiling, and the bisected hexagonal tiling.]

There also exist three infinite simplicial arrangements formed from sets of parallel lines. The tetrakis square tiling is an infinite arrangement of lines forming a periodic tiling that resembles a multigrid with four parallel families, but in which two of the families are more widely spaced than the other two, and in which the arrangement is simplicial rather than simple. Its dual is the truncated square tiling. Similarly, the triangular tiling is an infinite simplicial line arrangement with three parallel families, which has as its dual the hexagonal tiling, and the bisected hexagonal tiling is an infinite simplicial line arrangement with six parallel families and two line spacings, dual to the great rhombitrihexagonal tiling.
These three examples come from three affine reflection groups in the Euclidean plane, systems of symmetries based on reflection across each line in these arrangements.[34]

Algorithms

Constructing an arrangement means, given as input a list of the lines in the arrangement, computing a representation of the vertices, edges, and cells of the arrangement together with the adjacencies between these objects, for instance as a doubly connected edge list. Due to the zone theorem, arrangements can be constructed efficiently by an incremental algorithm that adds one line at a time to the arrangement of the previously added lines: each new line can be added in time proportional to its zone, resulting in a total construction time of $O(n^{2})$.[7] However, the memory requirements of this algorithm are high, so it may be more convenient to report all features of an arrangement by an algorithm that does not keep the entire arrangement in memory at once. This may again be done efficiently, in time $O(n^{2})$ and space $O(n)$, by an algorithmic technique known as topological sweeping.[35] Computing a line arrangement exactly requires a numerical precision several times greater than that of the input coordinates: if a line is specified by two points on it, the coordinates of the arrangement vertices may need four times as much precision as these input points.
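The quadratic size that makes $O(n^{2})$ construction time optimal can be checked on a small example: a simple arrangement of $n$ lines has exactly $\binom{n}{2}$ vertices, and each line is split into one more edge than the number of crossings on it, giving $n^{2}$ edges in total. A naive sketch (the particular slopes and intercepts are arbitrary choices in general position; exact rational arithmetic sidesteps the precision issue noted above):

```python
from itertools import combinations
from fractions import Fraction

# Four lines y = a*x + b, no two parallel and no three concurrent.
lines = [(Fraction(0), Fraction(0)), (Fraction(1), Fraction(1)),
         (Fraction(2), Fraction(3)), (Fraction(3), Fraction(7))]

def intersection(l1, l2):
    (a1, b1), (a2, b2) = l1, l2
    x = (b2 - b1) / (a1 - a2)
    return (x, a1 * x + b1)

# Vertices of the arrangement: all pairwise crossings.
vertices = {intersection(l1, l2) for l1, l2 in combinations(lines, 2)}

# Each line is cut into (crossings on it) + 1 edges.
n = len(lines)
crossings_per_line = [sum(1 for v in vertices if v[1] == a * v[0] + b)
                      for a, b in lines]
edges = sum(c + 1 for c in crossings_per_line)
```

For these four lines the arrangement has 6 vertices and 16 edges, matching $\binom{4}{2}$ and $4^{2}$.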
Therefore, computational geometers have also studied algorithms for constructing arrangements efficiently with limited numerical precision.[36] As well, researchers have studied efficient algorithms for constructing smaller portions of an arrangement, such as zones,[37] $k$-levels,[38] or the set of cells containing a given set of points.[39] The problem of finding the arrangement vertex with the median $x$-coordinate arises (in a dual form) in robust statistics as the problem of computing the Theil–Sen estimator of a set of points.[40] Marc van Kreveld suggested the algorithmic problem of computing shortest paths between vertices in a line arrangement, where the paths are restricted to follow the edges of the arrangement, more quickly than the quadratic time that it would take to apply a shortest path algorithm to the whole arrangement graph.[41] An approximation algorithm is known,[42] and the problem may be solved efficiently for lines that fall into a small number of parallel families (as is typical for urban street grids),[43] but the general problem remains open.

Non-Euclidean line arrangements

[Figure: A non-stretchable pseudoline arrangement of nine pseudolines. (All arrangements of fewer than nine pseudolines are stretchable.) Per Pappus's hexagon theorem, this arrangement cannot be realized in a projective plane over any field.]
[Figure: A hyperbolic line arrangement combinatorially equivalent to a chord diagram used by Ageev (1996) to show that triangle-free circle graphs may sometimes need 5 colors.]

A pseudoline arrangement is a family of curves that share similar topological properties with a line arrangement.[44] These can be defined most simply in the projective plane as simple closed curves any two of which meet in a single crossing point.[45] A pseudoline arrangement is said to be stretchable if it is combinatorially equivalent to a line arrangement.
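The Theil–Sen estimator mentioned in the algorithms discussion above has a simple primal formulation — the median of the slopes of all lines through pairs of sample points; a minimal sketch:

```python
from statistics import median
from itertools import combinations

def theil_sen_slope(points):
    """Theil-Sen estimator: the median of the pairwise slopes.
    In the dual arrangement of lines this is the vertex with the
    median x-coordinate."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(points, 2)
              if x2 != x1]
    return median(slopes)

pts = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 10)]  # one outlier
```

For these points the estimator returns slope 1, ignoring the outlier that would skew a least-squares fit; the specialized algorithms cited above compute the same median vertex without enumerating all pairwise slopes.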
Determining stretchability is a difficult computational task: it is complete for the existential theory of the reals to distinguish stretchable arrangements from non-stretchable ones.[46] Every arrangement of finitely many pseudolines can be extended so that they become lines in a "spread", a type of non-Euclidean incidence geometry in which every two points of a topological plane are connected by a unique line (as in the Euclidean plane) but in which other axioms of Euclidean geometry may not apply.[47] Another type of non-Euclidean geometry is the hyperbolic plane, and arrangements of hyperbolic lines in this geometry have also been studied.[48] Any finite set of lines in the Euclidean plane has a combinatorially equivalent arrangement in the hyperbolic plane (e.g. by enclosing the vertices of the arrangement by a large circle and interpreting the interior of the circle as a Klein model of the hyperbolic plane). However, parallel (non-crossing) pairs of lines are less restricted in hyperbolic line arrangements than in the Euclidean plane: in particular, the relation of being parallel is an equivalence relation for Euclidean lines but not for hyperbolic lines.[49] The intersection graph of the lines in a hyperbolic arrangement can be an arbitrary circle graph. 
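The circle-graph connection can be made concrete: representing each hyperbolic line by the chord with the same two endpoints on the boundary circle, two lines cross exactly when their endpoint pairs interleave around the circle. A small sketch (the chord coordinates are arbitrary choices producing a 5-cycle intersection graph):

```python
from itertools import combinations

def chords_cross(c1, c2):
    """Two chords of a circle (endpoints given as positions in [0, 1)
    around the circle) cross iff their endpoints interleave."""
    (a, b), (c, d) = sorted(c1), sorted(c2)
    return (a < c < b < d) or (c < a < d < b)

# Five chords, each crossing exactly its two cyclic neighbours,
# so the intersection graph is a 5-cycle.
chords = [(0.0, 0.3), (0.2, 0.5), (0.4, 0.7), (0.6, 0.9), (0.8, 0.05)]
crossings = sum(1 for c1, c2 in combinations(chords, 2)
                if chords_cross(c1, c2))
```

The 5-cycle is the smallest graph that is not an intersection graph of Euclidean lines (parallelism is transitive for Euclidean lines), which illustrates why hyperbolic arrangements realize strictly more intersection graphs.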
The corresponding concept to hyperbolic line arrangements for pseudolines is a weak pseudoline arrangement,[50] a family of curves having the same topological properties as lines[51] such that any two curves in the family either meet in a single crossing point or have no intersection.[50]

See also

• Configuration (geometry), an arrangement of lines and a set of points with all lines containing the same number of points and all points belonging to the same number of lines
• Arrangement (space partition), a partition of the plane given by overlaid curves or of a higher dimensional space by overlaid surfaces, without requiring the curves or surfaces to be flat
• Mathematical Bridge, a bridge in Cambridge, England whose beams form an arrangement of tangent lines to its arch

Notes

1. Grünbaum (1972), p. 4.
2. Eppstein, Falmagne & Ovchinnikov (2007), pp. 177–178.
3. Ovchinnikov (2011), p. 210.
4. Steiner (1826); Agarwal & Sharir (2000).
5. Halperin & Sharir (2018), p. 724.
6. Sloane, N. J. A. (ed.), "Sequence A000124 (Central polygonal numbers (the Lazy Caterer's sequence))", The On-Line Encyclopedia of Integer Sequences, OEIS Foundation
7. Chazelle, Guibas & Lee (1985), Edelsbrunner (1987), Edelsbrunner, O'Rourke & Seidel (1986).
8. Bern et al. (1991); an unpublished manuscript of Rom Pinchasi from 2011 claims the slightly stronger bound $\lfloor 9.5n\rfloor -3$.
9. Bern et al. (1991).
10. Aronov, Matoušek & Sharir (1994).
11. Dey (1998); Tóth (2001). The problem of bounding the complexity of k-levels was first studied by Lovász (1971) and Erdős et al. (1973).
12. Alon & Győri (1986).
13. Balogh et al. (2004); see also Matoušek (1991).
14. Canham (1969); Clarkson et al. (1990).
15. Ajtai et al. (1982); Leighton (1983).
16. Székely (1997).
17. Goodman & Pollack (1993), p. 109: "The natural setting for arrangements of lines is the real projective plane"
18. Polster (1998), p. 223.
19. Goodman & Pollack (1993), p. 110.
20.
This is the earliest proof cited by Borwein & Moser (1990), but they write that the same proof was likely given "much earlier by others".
21. Melchior (1940); Grünbaum (2009).
22. Grünbaum (1972); Grünbaum (2009).
23. Grünbaum (2009).
24. Crowe & McKee (1968); Dirac (1951); Kelly & Moser (1958); Grünbaum (1972), page 18.
25. Eppstein, Falmagne & Ovchinnikov (2007), p. 180.
26. Eppstein (2006).
27. Grünbaum (1972); Levi (1926); Roudneff (1988).
28. Grünbaum (1972).
29. Füredi & Palásti (1984); Grünbaum (1972).
30. Purdy (1979); Purdy (1980); Strommer (1977).
31. Moreno & Prieto-Martínez (2021).
32. Klee (1938).
33. de Bruijn (1981).
34. Abramenko & Brown (2008).
35. Edelsbrunner & Guibas (1989).
36. Fortune & Milenkovic (1991); Greene & Yao (1986); Milenkovic (1989).
37. Aharoni et al. (1999).
38. Agarwal et al. (1998); Chan (1999); Cole, Sharir & Yap (1987); Edelsbrunner & Welzl (1986).
39. Agarwal (1990); Agarwal, Matoušek & Sharir (1998); Edelsbrunner, Guibas & Sharir (1990).
40. Cole et al. (1989).
41. Erickson (1997).
42. Bose et al. (1996).
43. Eppstein & Hart (1999).
44. Grünbaum (1972); Agarwal & Sharir (2002).
45. This definition is from Grünbaum (1972). For a comparison of alternative definitions of pseudolines, see Eppstein, Falmagne & Ovchinnikov (2007).
46. Shor (1991); Schaefer (2010).
47. Goodman et al. (1994).
48. Dress, Koolen & Moulton (2002).
49. Martin (1996), pp. 41, 338.
50. de Fraysseix & Ossona de Mendez (2003).
51. Here an alternative definition from Shor (1991), that a pseudoline is the image of a line under a homeomorphism of the plane, is appropriate.

References

• Abramenko, Peter; Brown, Kenneth S. (2008), Buildings: Theory and Applications, Graduate Texts in Mathematics, vol. 248, New York: Springer, doi:10.1007/978-0-387-78835-7, ISBN 978-0-387-78834-0, MR 2439729; see Example 10.14, pp. 519–520
• Agarwal, P. K.
(1990), "Partitioning arrangements of lines II: Applications", Discrete & Computational Geometry, 5 (1): 533–573, doi:10.1007/BF02187809
• Agarwal, P. K.; de Berg, M.; Matoušek, J.; Schwarzkopf, O. (1998), "Constructing levels in arrangements and higher order Voronoi diagrams", SIAM Journal on Computing, 27 (3): 654–667, CiteSeerX 10.1.1.51.5064, doi:10.1137/S0097539795281840
• Agarwal, P. K.; Matoušek, J.; Sharir, M. (1998), "Computing many faces in arrangements of lines and segments", SIAM Journal on Computing, 27 (2): 491–505, doi:10.1137/S009753979426616X, hdl:1874/17088
• Agarwal, P. K.; Sharir, M. (2000), "Arrangements and their applications" (PDF), in Sack, J.-R.; Urrutia, J. (eds.), Handbook of Computational Geometry, Elsevier, pp. 49–119
• Agarwal, P. K.; Sharir, M. (2002), "Pseudo-line arrangements: duality, algorithms, and applications", Proc. 13th ACM-SIAM Symposium on Discrete Algorithms (SODA '02), San Francisco: Society for Industrial and Applied Mathematics, pp. 800–809
• Ageev, A. A. (1996), "A triangle-free circle graph with chromatic number 5", Discrete Mathematics, 152 (1–3): 295–298, doi:10.1016/0012-365X(95)00349-2
• Aharoni, Y.; Halperin, D.; Hanniel, I.; Har-Peled, S.; Linhart, C. (1999), "On-line zone construction in arrangements of lines in the plane", in Vitter, Jeffrey S.; Zaroliagis, Christos D. (eds.), Algorithm Engineering: 3rd International Workshop, WAE'99, London, UK, July 19–21, 1999, Proceedings, Lecture Notes in Computer Science, vol. 1668, Springer-Verlag, pp. 139–153, CiteSeerX 10.1.1.35.7681, doi:10.1007/3-540-48318-7_13, ISBN 978-3-540-66427-7
• Ajtai, M.; Chvátal, V.; Newborn, M.; Szemerédi, E. (1982), "Crossing-free subgraphs", Theory and Practice of Combinatorics, North-Holland Mathematics Studies, vol. 60, North-Holland, pp. 9–12, MR 0806962
• Alon, N.; Győri, E.
(1986), "The number of small semi-spaces of a finite set of points in the plane", Journal of Combinatorial Theory, Series A, 41: 154–157, doi:10.1016/0097-3165(86)90122-6
• Aronov, B.; Matoušek, J.; Sharir, M. (1994), "On the sum of squares of cell complexities in hyperplane arrangements", Journal of Combinatorial Theory, Series A, 65 (2): 311–321, doi:10.1016/0097-3165(94)90027-2
• Artés, J. C.; Grünbaum, B.; Llibre, J. (1998), "On the number of invariant straight lines for polynomial differential systems" (PDF), Pacific Journal of Mathematics, 184 (2): 207–230, doi:10.2140/pjm.1998.184.207
• Balogh, J.; Regev, O.; Smyth, C.; Steiger, W.; Szegedy, M. (2004), "Long monotone paths in line arrangements", Discrete & Computational Geometry, 32 (2): 167–176, doi:10.1007/s00454-004-1119-1
• Bern, M. W.; Eppstein, D.; Plassman, P. E.; Yao, F. F. (1991), "Horizon theorems for lines and polygons", in Goodman, J. E.; Pollack, R.; Steiger, W. (eds.), Discrete and Computational Geometry: Papers from the DIMACS Special Year, DIMACS Ser. Discrete Math. and Theoretical Computer Science (6 ed.), Amer. Math. Soc., pp. 45–66, MR 1143288
• Borwein, P.; Moser, W. O. J. (1990), "A survey of Sylvester's problem and its generalizations", Aequationes Mathematicae, 40 (1): 111–135, doi:10.1007/BF02112289, MR 1069788, S2CID 122052678
• Bose, P.; Evans, W.; Kirkpatrick, D. G.; McAllister, M.; Snoeyink, J. (1996), "Approximating shortest paths in arrangements of lines", Proc. 8th Canadian Conf. Computational Geometry, pp. 143–148
• de Bruijn, N. G. (1981), "Algebraic theory of Penrose's non-periodic tilings of the plane" (PDF), Indagationes Mathematicae, 43: 38–66
• Canham, R. J. (1969), "A theorem on arrangements of lines in the plane", Israel Journal of Mathematics, 7 (4): 393–397, doi:10.1007/BF02788872, S2CID 123541779
• Chan, T. (1999), Remarks on k-level algorithms in the plane, archived from the original on 2010-11-04
• Chazelle, B.; Guibas, L. J.; Lee, D. T.
(1985), "The power of geometric duality", BIT Numerical Mathematics, 25 (1): 76–90, doi:10.1007/BF01934990, S2CID 122411548
• Clarkson, K.; Edelsbrunner, H.; Guibas, L. J.; Sharir, M.; Welzl, E. (1990), "Combinatorial complexity bounds for arrangements of curves and spheres", Discrete & Computational Geometry, 5 (1): 99–160, doi:10.1007/BF02187783
• Cole, Richard; Salowe, Jeffrey S.; Steiger, W. L.; Szemerédi, Endre (1989), "An optimal-time algorithm for slope selection", SIAM Journal on Computing, 18 (4): 792–810, doi:10.1137/0218055, MR 1004799
• Cole, R.; Sharir, M.; Yap, C.-K. (1987), "On k-hulls and related problems", SIAM Journal on Computing, 16 (1): 61–77, doi:10.1137/0216005
• Crowe, D. W.; McKee, T. A. (1968), "Sylvester's problem on collinear points", Mathematics Magazine, 41 (1): 30–34, doi:10.2307/2687957, JSTOR 2687957
• Dey, T. K. (1998), "Improved bounds for planar k-sets and related problems", Discrete & Computational Geometry, 19 (3): 373–382, doi:10.1007/PL00009354, MR 1608878
• Dirac, G. (1951), "Collinearity properties of sets of points", Quarterly Journal of Mathematics, 2 (1): 221–227, Bibcode:1951QJMat...2..221D, doi:10.1093/qmath/2.1.221
• Dress, A.; Koolen, J. H.; Moulton, V. (2002), "On line arrangements in the hyperbolic plane", European Journal of Combinatorics, 23 (5): 549–557, doi:10.1006/eujc.2002.0582, MR 1931939
• Edelsbrunner, H. (1987), Algorithms in Combinatorial Geometry, EATCS Monographs in Theoretical Computer Science, Springer-Verlag, ISBN 978-3-540-13722-1
• Edelsbrunner, H.; Guibas, L. J. (1989), "Topologically sweeping an arrangement", Journal of Computer and System Sciences, 38 (1): 165–194, doi:10.1016/0022-0000(89)90038-X
• Edelsbrunner, H.; Guibas, L. J.; Sharir, M. (1990), "The complexity and construction of many faces in arrangements of lines and of segments", Discrete & Computational Geometry, 5 (1): 161–196, doi:10.1007/BF02187784
• Edelsbrunner, H.; O'Rourke, J.; Seidel, R.
(1986), "Constructing arrangements of lines and hyperplanes with applications", SIAM Journal on Computing, 15 (2): 341–363, doi:10.1137/0215024
• Edelsbrunner, H.; Welzl, E. (1986), "Constructing belts in two-dimensional arrangements with applications", SIAM Journal on Computing, 15 (1): 271–284, doi:10.1137/0215019
• Eppstein, D. (2006), "Cubic partial cubes from simplicial arrangements", Electronic Journal of Combinatorics, 13 (1, R79): 1–14, arXiv:math.CO/0510263, doi:10.37236/1105, MR 2255421, S2CID 8608953
• Eppstein, D.; Falmagne, J.-Cl.; Ovchinnikov, S. (2007), Media Theory, Springer-Verlag
• Eppstein, D.; Hart, D. (1999), "Shortest paths in an arrangement with k line orientations", Proceedings of the 10th ACM–SIAM Symposium on Discrete Algorithms (SODA '99), pp. 310–316
• Erdős, P.; Lovász, L.; Simmons, A.; Straus, E. G. (1973), "Dissection graphs of planar point sets", A Survey of Combinatorial Theory (Proc. Internat. Sympos., Colorado State Univ., Fort Collins, Colo., 1971), Amsterdam: North-Holland, pp. 139–149, MR 0363986
• Erickson, J. (1997), Shortest paths in line arrangements, archived from the original on 2008-12-03, retrieved 2008-12-15
• Fortune, S.; Milenkovic, V. (1991), "Numerical stability of algorithms for line arrangements", Proc. 7th ACM Symposium on Computational Geometry (SoCG '91), pp. 334–341, CiteSeerX 10.1.1.56.2404, doi:10.1145/109648.109685, ISBN 978-0897914260, S2CID 2861855
• de Fraysseix, H.; Ossona de Mendez, P. (2003), "Stretching of Jordan arc contact systems", Proceedings of the 11th International Symposium on Graph Drawing (GD 2003), Lecture Notes in Computer Science (2912 ed.), Springer-Verlag, pp. 71–85
• Füredi, Z.; Palásti, I.
(1984), "Arrangements of lines with a large number of triangles" (PDF), Proceedings of the American Mathematical Society, 92 (4): 561–566, doi:10.2307/2045427, JSTOR 2045427
• Goodman, Jacob E.; Pollack, Richard (1993), "Allowable sequences and order types in discrete and computational geometry", in Pach, János (ed.), New Trends in Discrete and Computational Geometry, Algorithms and Combinatorics, vol. 10, Berlin: Springer, pp. 103–134, doi:10.1007/978-3-642-58043-7_6, MR 1228041
• Goodman, Jacob E.; Pollack, Richard; Wenger, Rephael; Zamfirescu, Tudor (1994), "Every arrangement extends to a spread", Combinatorica, 14 (3): 301–306, doi:10.1007/BF01212978, MR 1305899, S2CID 42055590
• Greene, D.; Yao, F. F. (1986), "Finite-resolution computational geometry", Proceedings of the 27th IEEE Symposium on Foundations of Computer Science (FOCS '86), pp. 143–152, doi:10.1109/SFCS.1986.19, ISBN 978-0-8186-0740-0, S2CID 2624319
• Grünbaum, B. (1972), Arrangements and Spreads, Regional Conference Series in Mathematics, vol. 10, Providence, R.I.: American Mathematical Society
• Grünbaum, B. (1974), Notes on Arrangements, University of Washington, hdl:1773/15699; see p. 6 of "Euclidean arrangements" (p. 101 of linked pdf)
• Grünbaum, Branko (2009), "A catalogue of simplicial arrangements in the real projective plane", Ars Mathematica Contemporanea, 2 (1): 1–25, doi:10.26493/1855-3974.88.e12, hdl:1773/2269, MR 2485643
• Halperin, D.; Sharir, M. (2018), "Arrangements", in Goodman, Jacob E.; O'Rourke, Joseph; Tóth, Csaba D. (eds.), Handbook of Discrete and Computational Geometry, Discrete Mathematics and its Applications (3rd ed.), Boca Raton, Florida: CRC Press, pp. 723–762, ISBN 978-1-4987-1139-5, MR 3793131
• Kelly, L. M.; Moser, W. O. J. (1958), "On the number of ordinary lines determined by n points", Canadian Journal of Mathematics, 10: 210–219, doi:10.4153/CJM-1958-024-6
• Klee, R.
(1938), Über die einfachen Konfigurationen der euklidischen und der projektiven Ebene, Dresden: Focken & Oltmanns
• Leighton, F. T. (1983), Complexity Issues in VLSI, Foundations of Computing Series, Cambridge, MA: MIT Press
• Levi, F. (1926), "Die Teilung der projektiven Ebene durch Gerade oder Pseudogerade", Ber. Math.-Phys. Kl. Sächs. Akad. Wiss. Leipzig, 78: 256–267
• Lovász, L. (1971), "On the number of halving lines", Annales Universitatis Scientiarum Budapestinensis de Rolando Eötvös Nominatae Sectio Mathematica, 14: 107–108
• Martin, George E. (1996), The foundations of geometry and the non-Euclidean plane, Undergraduate Texts in Mathematics, Springer-Verlag, ISBN 0-387-90694-0, MR 1410263
• Matoušek, J. (1991), "Lower bounds on the length of monotone paths in arrangements", Discrete & Computational Geometry, 6 (1): 129–134, doi:10.1007/BF02574679
• Melchior, E. (1940), "Über Vielseite der projektiven Ebene", Deutsche Mathematik, 5: 461–475
• Milenkovic, V. (1989), "Double precision geometry: a general technique for calculating line and segment intersections using rounded arithmetic", Proceedings of the 30th IEEE Symposium on Foundations of Computer Science (FOCS '89), pp. 500–505, doi:10.1109/SFCS.1989.63525, ISBN 978-0-8186-1982-3, S2CID 18564700
• Moreno, José Pedro; Prieto-Martínez, Luis Felipe (2021), "El problema de los triángulos de Kobon" [The Kobon triangles problem], La Gaceta de la Real Sociedad Matemática Española (in Spanish), 24 (1): 111–130, hdl:10486/705416, MR 4225268
• Motzkin, Th.
(1951), "The lines and planes connecting the points of a finite set", Transactions of the American Mathematical Society, 70 (3): 451–464, doi:10.2307/1990609, JSTOR 1990609
• Ovchinnikov, Sergei (2011), Graphs and Cubes, Universitext, New York: Springer, doi:10.1007/978-1-4614-0797-3, ISBN 978-1-4614-0796-6, MR 3014880
• Polster, Burkard (1998), A geometrical picture book, Universitext, Springer-Verlag, New York, doi:10.1007/978-1-4419-8526-2, ISBN 0-387-98437-2, MR 1640615
• Purdy, G. B. (1979), "Triangles in arrangements of lines", Discrete Mathematics, 25 (2): 157–163, doi:10.1016/0012-365X(79)90018-9
• Purdy, G. B. (1980), "Triangles in arrangements of lines, II", Proceedings of the American Mathematical Society, 79: 77–81, doi:10.1090/S0002-9939-1980-0560588-4
• Roudneff, J.-P. (1988), "Arrangements of lines with a minimum number of triangles are simple", Discrete & Computational Geometry, 3 (1): 97–102, doi:10.1007/BF02187900
• Schaefer, Marcus (2010), "Complexity of some geometric and topological problems" (PDF), Graph Drawing, 17th International Symposium, GD 2009, Chicago, IL, USA, September 2009, Revised Papers, Lecture Notes in Computer Science, vol. 5849, Springer-Verlag, pp. 334–344, doi:10.1007/978-3-642-11805-0_32, ISBN 978-3-642-11804-3
• Shor, P. W. (1991), "Stretchability of pseudolines is NP-hard", in Gritzmann, P.; Sturmfels, B. (eds.), Applied Geometry and Discrete Mathematics: The Victor Klee Festschrift, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 4, Providence, R.I.: American Mathematical Society, pp. 531–554
• Steiner, J. (1826), "Einige Gesetze über die Theilung der Ebene und des Raumes", J. Reine Angew. Math., 1: 349–364, doi:10.1515/crll.1826.1.349, S2CID 120477563
• Strommer, T. O. (1977), "Triangles in arrangements of lines", Journal of Combinatorial Theory, Series A, 23 (3): 314–320, doi:10.1016/0097-3165(77)90022-X
• Székely, L. A.
(1997), "Crossing numbers and hard Erdős problems in discrete geometry" (PDF), Combinatorics, Probability and Computing, 6 (3): 353–358, doi:10.1017/S0963548397002976, S2CID 36602807
• Tóth, G. (2001), "Point sets with many k-sets", Discrete & Computational Geometry, 26 (2): 187–194, doi:10.1007/s004540010022

External links

• Database of Combinatorially Different Line Arrangements
\begin{document} \title[Unique continuation for the Cauchy-Riemann operator] {A remark on unique continuation for the Cauchy-Riemann operator} \author{Ihyeok Seo} \subjclass[2010]{Primary 35B60, 35F05} \keywords{Unique continuation, Cauchy-Riemann operator} \address{Department of Mathematics, Sungkyunkwan University, Suwon 440-746, Republic of Korea} \email{[email protected]} \begin{abstract} In this note we obtain a unique continuation result for the differential inequality $|\overline{\partial}u|\leq|Vu|$, where $\overline{\partial}=(i\partial_y+\partial_x)/2$ denotes the Cauchy-Riemann operator and $V(x,y)$ is a function in $L^2(\mathbb{R}^2)$. \end{abstract} \maketitle \section{Introduction} The unique continuation property is one of the most interesting properties of holomorphic functions $f\in H(\mathbb{C})$. This property says that if $f$ vanishes in a non-empty open subset of $\mathbb{C}$ then it must be identically zero. Note that $u\in C^1(\mathbb{R}^2)$ satisfies the Cauchy-Riemann equation $(i\partial_y+\partial_x)u=0$ if and only if it defines a holomorphic function $f(x+iy)\equiv u(x,y)$ on $\mathbb{C}$. From this point of view, one can see that a $C^1$ function satisfying the equation has the unique continuation property. In this note we consider a class of non-holomorphic functions $u$ which satisfy the differential inequality \begin{equation}\label{inequality} |\overline{\partial}u|\leq|Vu|, \end{equation} where $\overline{\partial}=(i\partial_y+\partial_x)/2$ denotes the Cauchy-Riemann operator and $V(x,y)$ is a function on $\mathbb{R}^2$. The best positive result for \eqref{inequality} is due to Wolff \cite{W} (see Theorem 4 there) who proved the property for $V\in L^p$ with $p>2$. On the other hand, there is a counterexample \cite{M} to unique continuation for \eqref{inequality} with $V\in L^p$ for $p<2$. 
The remaining case $p=2$ seems to be unknown for the differential inequality \eqref{inequality}, and note that $L^2$ is a scale-invariant space of $V$ for the equation $\overline{\partial}u=Vu$. Here we shall handle this problem. Our unique continuation result is the following theorem which is based on bounds for a Fourier multiplier from $L^p$ to $L^q$. \begin{thm}\label{thm} Let $1<p<2<q<\infty$ and $1/p-1/q=1/2$. Assume that $u\in L^p\cap L^q$ satisfies the inequality \eqref{inequality} with $V\in L^2$ and vanishes in a non-empty open subset of $\mathbb{R}^2$. Then it must be identically zero. \end{thm} The unique continuation property also holds for harmonic functions, which satisfy the Laplace equation $\Delta u=0$, since they are real parts of holomorphic functions. This was first extended by Carleman~\cite{C} to a class of non-harmonic functions satisfying the inequality $|\Delta u|\leq|Vu|$ with $V\in L^\infty(\mathbb{R}^2)$. There is an extensive literature on later developments in this subject. In particular, the problem of finding all the possible $L^p$ functions $V$, for which $|\Delta u|\leq|Vu|$ has the unique continuation, is completely solved (see \cite{JK,KN,KT}). See also the survey papers of Kenig \cite{K} and Wolff \cite{W2} for more details, and the recent paper of Kenig and Wang \cite{KW} for a stronger result which gives a quantitative form of the unique continuation. Throughout the paper, the letter $C$ stands for positive constants possibly different at each occurrence. Also, the notations $\widehat{f}$ and $\mathcal{F}^{-1}(f)$ denote the Fourier and the inverse Fourier transforms of $f$, respectively. \section{A preliminary lemma} The standard method to study the unique continuation property is to obtain a suitable Carleman inequality for the relevant differential operator. This method originated from Carleman's classical work \cite{C} for elliptic operators. 
In our case we need to obtain the following inequality for the Cauchy-Riemann operator $\overline{\partial}=(i\partial_y+\partial_x)/2$, which will be used in the next section for the proof of Theorem \ref{thm}: \begin{lem}\label{lem} Let $f\in C_0^\infty(\mathbb{R}^2\setminus\{0\})$. For all $t>0$, we have \begin{equation}\label{Sobo} \big\||z|^{-t}f\big\|_{L^q} \leq C\big\||z|^{-t}\overline{\partial}f\big\|_{L^p} \end{equation} if $1<p<2<q<\infty$ and $1/p-1/q=1/2$. Here, $z=x+iy\in\mathbb{C}$ and $C$ is a constant independent of $t$. \end{lem} \begin{proof} First we note that $$\overline{\partial}(z^{-t}f)=z^{-t}\overline{\partial}f+f\overline{\partial}(z^{-t}) =z^{-t}\overline{\partial}f$$ for $z\in\mathbb{C}\setminus\{0\}$. Then the inequality \eqref{Sobo} is equivalent to $$\big\|z^{-t}f\big\|_{L^q} \leq C\big\|\overline{\partial}(z^{-t}f)\big\|_{L^p}.$$ By setting $g=z^{-t}f$, we are reduced to showing that \begin{equation*} \|g\|_{L^q} \leq C\|(i\partial_y+\partial_x)g\|_{L^p} \end{equation*} for $g\in C_0^\infty(\mathbb{R}^2\setminus\{0\})$. To show this, let us first set \begin{equation}\label{0} (i\partial_y+\partial_x)g=h, \end{equation} and let $\psi_\delta:\mathbb{R}^2\rightarrow[0,1]$ be a smooth function such that $\psi_\delta=0$ in the ball $B(0,\delta)$ and $\psi_\delta=1$ in $\mathbb{R}^2\setminus B(0,2\delta)$. Then, using the Fourier transform in~\eqref{0}, we see that \begin{equation*} (-\eta+i\xi)\widehat{g}(\xi,\eta)=\widehat{h}(\xi,\eta). \end{equation*} Thus, by Fatou's lemma we are finally reduced to showing the following uniform boundedness for a multiplier operator having the multiplier $m(\xi,\eta)=\psi_\delta(\xi,\eta)/(-\eta+i\xi)$: \begin{equation}\label{multi} \bigg\|\mathcal{F}^{-1}\bigg(\frac{\psi_\delta(\xi,\eta) \widehat{h}(\xi,\eta)}{-\eta+i\xi}\bigg)\bigg\|_{L^q}\leq C\|h\|_{L^p} \end{equation} uniformly in $\delta>0$. 
From now on, we will show \eqref{multi} using Young's inequality for convolutions and Littlewood-Paley theorem (\cite{G}). Let us first set for $k\in\mathbb{Z}$ $$\widehat{Th}(\xi,\eta)=m(\xi,\eta)\widehat{h}(\xi,\eta)\quad\text{and}\quad \widehat{T_kh}(\xi,\eta)=m(\xi,\eta)\chi_k(\xi,\eta)\widehat{h}(\xi,\eta),$$ where $\chi_k(\cdot)=\chi(2^k\cdot)$ for $\chi\in C_0^\infty(\mathbb{R}^2)$ which is such that $\chi(\xi,\eta)=1$ if $|(\xi,\eta)|\sim1$, and zero otherwise. Also, $\sum_k\chi_k=1$. Now we claim that \begin{equation}\label{multi2} \|T_kh\|_{L^q}\leq C\|h\|_{L^p} \end{equation} uniformly in $k\in\mathbb{Z}$. Then, since $1<p<2<q<\infty$, by the Littlewood-Paley theorem together with Minkowski's inequality, we get the desired inequality \eqref{multi} as follows: \begin{align*} \big\|\sum_kT_kh\big\|_{L^q}&\leq C\big\|\big(\sum_k|T_kh|^2\big)^{1/2}\big\|_{L^q}\\ &\leq C\big(\sum_k\|T_kh\|_{L^q}^2\big)^{1/2}\\ &\leq C\big(\sum_k\|h_k\|_{L^p}^2\big)^{1/2}\\ &\leq C\big\|\big(\sum_k|h_k|^2\big)^{1/2}\big\|_{L^p}\\ &\leq C\big\|\sum_kh_k\big\|_{L^p}, \end{align*} where $h_k$ is given by $\widehat{h_k}(\xi,\eta)=\chi_k(\xi,\eta)\widehat{h}(\xi,\eta)$. Now it remains to show the claim \eqref{multi2}. But, this follows easily from Young's inequality. Indeed, note that $$T_kh=\mathcal{F}^{-1}\bigg(\frac{\psi_\delta(\xi,\eta)\chi_k(\xi,\eta)}{-\eta+i\xi}\bigg) \ast h$$ and by Plancherel's theorem \begin{align*} \bigg\|\mathcal{F}^{-1}\bigg(\frac{\psi_\delta(\xi,\eta)\chi_k(\xi,\eta)}{-\eta+i\xi}\bigg)\bigg\|_{L^2} &=\bigg\|\frac{\psi_\delta(\xi,\eta)\chi_k(\xi,\eta)}{-\eta+i\xi}\bigg\|_{L^2}\\ &\leq C\bigg(\int_{|(\xi,\eta)|\sim2^{-k}}\frac1{\eta^2+\xi^2}d\xi d\eta\bigg)^{1/2}\\ &\leq C. 
\end{align*} Since we are assuming the gap condition $1/p-1/q=1/2$, by Young's inequality for convolutions, this readily implies that $$\|T_kh\|_{L^q}\leq \bigg\|\mathcal{F}^{-1}\bigg(\frac{\psi_\delta(\xi,\eta)\chi_k(\xi,\eta)}{-\eta+i\xi}\bigg)\bigg\|_{L^2}\|h\|_{L^p} \leq C\|h\|_{L^p}$$ as desired. \end{proof} \section{Proof of Theorem \ref{thm}} The proof is standard once one has the Carleman inequality \eqref{Sobo} in Lemma \ref{lem}. Without loss of generality, we may show that $u$ must vanish identically if it vanishes in a sufficiently small neighborhood of zero. Then, since we are assuming that $u\in L^p\cap L^q$ vanishes near zero, from \eqref{Sobo} with a standard limiting argument involving a $C_0^\infty$ approximate identity, it follows that \begin{equation*} \big\||z|^{-t}u\big\|_{L^q} \leq C\big\||z|^{-t}\overline{\partial}u\big\|_{L^p}. \end{equation*} Thus by \eqref{inequality} we see that \begin{align*} \big\||z|^{-t}u\big\|_{L^q(B(0,r))}&\leq C\big\||z|^{-t}Vu\big\|_{L^p(B(0,r))}\\ &+C\big\||z|^{-t}\overline{\partial}u\big\|_{L^p(\mathbb{R}^2\setminus B(0,r))}, \end{align*} where $B(0,r)$ is the ball of radius $r>0$ centered at $0$. Then, using H\"older's inequality with $1/p-1/q=1/2$, the first term on the right-hand side in the above can be absorbed into the left-hand side as follows: \begin{align*} C\big\||z|^{-t}Vu\big\|_{L^p(B(0,r))}&\leq C\|V\|_{L^2(B(0,r))}\big\||z|^{-t}u\big\|_{L^q(B(0,r))}\\ &\leq \frac12\big\||z|^{-t}u\big\|_{L^q(B(0,r))} \end{align*} if we choose $r$ small enough. Here, $\||z|^{-t}u\|_{L^q(B(0,r))}$ is finite since $u\in L^q$ vanishes near zero. Hence we get \begin{align*} \|(r/|z|)^{t}u\|_{L^q(B(0,r))} &\leq2C\|\overline{\partial}u\|_{L^p(\mathbb{R}^2\setminus B(0,r))}\\ &\leq2C\|V\|_{L^2}\|u\|_{L^q}\\ &<\infty. \end{align*} By letting $t\rightarrow\infty$, we now conclude that $u=0$ on $B(0,r)$. This implies $u\equiv0$ by a standard connectedness argument. 
\section*{Acknowledgment} I would like to thank Jenn-Nan Wang for pointing out a preprint (\cite{KW}) and for some comments. \end{document}
Wiener measure

The probability measure $\mu_W$ on the space $C[0,1]$ of continuous real-valued functions $x$ on the interval $[0,1]$, defined as follows. Let $0 < t_1 < \dots < t_n \leq 1$ be an arbitrary sample of points from $[0,1]$ and let $A_1, \dots, A_n$ be Borel sets on the real line. Let $C(t_1, \dots, t_n; A_1, \dots, A_n)$ denote the set of functions $x \in C[0,1]$ for which $x(t_k) \in A_k$, $k = 1, \dots, n$. Then

$$\tag{*} \mu_W(C(t_1, \dots, t_n; A_1, \dots, A_n)) = \int_{A_1} p(t_1, x_1)\, dx_1 \int_{A_2} p(t_2 - t_1, x_2 - x_1)\, dx_2 \dots \int_{A_n} p(t_n - t_{n-1}, x_n - x_{n-1})\, dx_n,$$

$$p(t, x) = \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/2t}.$$

Using the theorem on the extension of a measure it is possible to define the value of the measure $\mu_W$ on all Borel sets of $C[0,1]$ on the basis of equation (*). The Wiener measure was introduced by N. Wiener [a1] in 1923; it was the first major extension of integration theory beyond a finite-dimensional setting. The construction outlined above extends easily to define Wiener measure $\mu_W$ on $C[0,\infty)$. The coordinate process $x(t)$ is then known as Brownian motion or the Wiener process. Its formal derivative $dx(t)/dt$ is known as Gaussian white noise.

[a1] N. Wiener, "Differential space", J. Math. & Phys., 2 (1923), pp. 132–174
[a2] T. Hida, "Brownian motion", Springer (1980)
[a3] I. Karatzas, S.E. Shreve, "Brownian motion and stochastic calculus", Springer (1988)
[a4] L. Partzsch, "Vorlesungen zum eindimensionalen Wienerschen Prozess", Teubner (1984)
[a5] J. Yeh, "Stochastic processes and the Wiener integral", M. Dekker (1973)
[a6] S. Albeverio, J.E. Fenstad, R. Høegh-Krohn, T.
Lindstrøm, "Nonstandard methods in stochastic analysis and mathematical physics" , Acad. Press (1986) Wiener measure. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Wiener_measure&oldid=49220 This article was adapted from an original article by A.V. Skorokhod (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article Retrieved from "https://encyclopediaofmath.org/index.php?title=Wiener_measure&oldid=49220"
CommonCrawl
\begin{document} \title{Characteristic Formulae for Session Types (extended version)} \begin{abstract} Subtyping is a crucial ingredient of session type theory and its applications, notably to programming language implementations. In this paper, we study effective ways to check whether a session type is a subtype of another by applying a characteristic formulae approach to the problem. Our core contribution is an algorithm to generate a modal $\mu$-calculus formula that characterises all the supertypes (or subtypes) of a given type. Subtyping checks can then be off-loaded to model checkers, thus incidentally yielding an efficient algorithm to check safety of session types, soundly and completely. We have implemented our theory and compared its cost with other classical subtyping algorithms. \end{abstract} \section{Introduction} \paragraph{\bf Motivations} Session types~\cite{THK94,HVK98,betty-survey} have emerged as a fundamental theory to reason about concurrent programs, whereby not only the data aspects of programs are typed, but also their \emph{behaviours} wrt.\ communication. Recent applications of session types to the reverse-engineering of large and complex distributed systems~\cite{zdlc,LTY15} have led to the need of handling potentially large and complex session types. Analogously to the current trend of modern compilers to rely on external tools such as SMT-solvers to solve complex constraints and offer strong guarantees~\cite{haskell-smt,haskell-measures,LY12,Leino10}, state-of-the-art model checkers can be used to off-load expensive tasks from session type tools such as~\cite{scribble,LTY15,YHNN2013}. A typical use case for session types in software (reverse-) engineering is to compare the type of an existing program with a candidate replacement, so to ensure that both are ``compatible''. 
In this context, a crucial ingredient of session type theory is the notion of \emph{subtyping}~\cite{GH99,DemangeonH11,CDY2014} which plays a key role to guarantee safety of concurrent programs while allowing for the refinement of specifications and implementations. Subtyping for session types relates to many classical theories such as simulations and pre-orders in automata and process algebra theories; but also to subtyping for recursive types in the $\lambda$-calculus~\cite{AC93}. The characteristic formulae approach~\cite{GS86,si94,steffen89,ails12,ai97,ai07,cs91}, which has been studied since the late eighties as a method to compute simulation-like relations in process algebra and automata, appears then as an evident link between subtyping in session type theory and model checking theories. In this paper, we make the first formal connection between session type and model checking theories, to the best of our knowledge. We introduce a novel approach to session types subtyping based on characteristic formulae; and thus establish that subtyping for session types can be decided in quadratic time wrt.\ the size of the types. This improves significantly on the classical algorithm~\cite{GH05}. Subtyping can then be reduced to a model checking problem and thus be discharged to powerful model checkers. Consequently, any advance in model checking technology has an impact on subtyping. \paragraph{\bf Example} Let us illustrate what session types are and what subtyping covers. Consider a simple protocol between a server and a client, from the point of view of the server. The client sends a message of type $\msg{request}$ to the server who decides whether or not the request can be processed by replying $\msg{ok}$ or $\msg{ko}$, respectively. If the request is rejected, the client is offered another chance to send another request, and so on. This may be described by the \emph{session type} below \begin{equation}\label{ex:intro-example-u-1} U_1 = \rec{\var{x}} \rcv{request} . 
\{ \snd{ok} . \mathtt{end} \;\, \inchoicetop \; \snd{ko} . \var{x} \, \} \end{equation} where $\recND{\var{x}}$ binds variable $\var{x}$ in the rest of the type, $\rcv{msg}$ (resp.\ $\snd{msg}$) specifies the reception (resp.\ emission) of a message $\msg{msg}$, $\inchoicetop$ indicates an \emph{internal choice} between two behaviours, and $\mathtt{end}$ signifies the termination of the conversation. An implementation of a server can then be \emph{type-checked} against $U_1$. The client's perspective of the protocol may be specified by the \emph{dual} of $U_1$: \begin{equation}\label{ex:intro-example-u-2} \dual{U}_1 = U_2 = \rec{\var{x}} \snd{request} . \{ \rcv{ok} . \mathtt{end} \;\, \outchoicetop \; \rcv{ko} . \var{x} \, \} \end{equation} where $\outchoicetop$ indicates an \emph{external choice}, i.e., the client expects two possible behaviours from the server. A classical result in session type theory essentially says that if the types of two programs are \emph{dual} of each other, then their parallel composition is free of errors (e.g., deadlock). Generally, when we say that $\mathtt{integer}$ is a subtype of $\mathtt{float}$, we mean that one can safely use an $\mathtt{integer}$ when a $\mathtt{float}$ is required. Similarly, in session type theory, if $T$ is a \emph{subtype} of a type $U$ (written $T \subtype U$), then $T$ can be used whenever $U$ is required. Intuitively, a type $T$ is a \emph{subtype} of a type $U$ if $T$ is ready to receive no fewer messages than $U$, and $T$ may not send more messages than $U$~\cite{DemangeonH11,CDY2014}. For instance, we have \begin{equation}\label{ex:intro-example-subs} \begin{array}{l} T_1 = \rcv{request} . \snd{ok} . \mathtt{end} \; \subtype \; U_1 \\ T_2 = \rec{\var{x}} \snd{request} . \{ \rcv{ok} . \mathtt{end} \, \outchoicetop \rcv{ko} . \var{x} \, \outchoicetop \rcv{error} . 
\mathtt{end} \, \} \; \subtype \; U_2 \end{array} \end{equation} A server of type $T_1$ can be used whenever a server of type $U_1$~\eqref{ex:intro-example-u-1} is required ($T_1$ is a more refined version of $U_1$, which always accepts the request). A client of type $T_2$ can be used whenever a client of type $U_2$~\eqref{ex:intro-example-u-2} is required since $T_2$ is a type that can deal with (strictly) more messages than $U_2$. In Section~\ref{subsec:CF}, we will see that a session type can be naturally transformed into a $\mu$-calculus formula that characterises all its subtypes. The transformation notably relies on the diamond modality to make some branches mandatory, and the box modality to allow some branches to be made optional; see Example~\ref{ex:char-formula}. \paragraph{\bf Contribution \& synopsis} In \S~\ref{sec:session-type-theory} we recall session types and give a new abstract presentation of subtyping. In \S~\ref{sec:mucal-char-formu} we present a fragment of the modal $\mu$-calculus and, following~\cite{steffen89}, we give a simple algorithm to generate a $\mu$-calculus formula from a session type that characterises either all its subtypes or all its supertypes. In \S~\ref{sec:safety}, building on results from~\cite{CDY2014}, we give a sound and complete model-checking characterisation of safety for session types. In \S~\ref{sec:algos}, we present two other subtyping algorithms for session types: Gay and Hole's classical algorithm~\cite{GH05} based on inference rules that unfold types explicitly; and an adaptation of Kozen et al.'s automata-theoretic algorithm~\cite{KPS95}. In \S~\ref{sec:tool}, we evaluate the cost of our approach by comparing its performances against the two algorithms from \S~\ref{sec:algos}. Our performance analysis is notably based on a tool that generates arbitrary well-formed session types. We conclude and discuss related works in \S~\ref{sec:related}. 
Due to lack of space, full proofs are relegated to Appendix~\ref{app:proofs} (also available online~\cite{appendix}). Our tool and detailed benchmark results are available online~\cite{tool}. \section{Session types and subtyping}\label{sec:session-type-theory} Session types are abstractions of the behaviour of a program wrt.\ the communication of this program on a given \emph{session} (or conversation), through which it interacts with another program (or component). \subsection{Session types}\label{sub:session-types} We use a two-party version of the multiparty session types in~\cite{DY13}. For the sake of simplicity, we focus on first order session types (that is, types that carry only simple types (sorts) or values and not other session types). We discuss how to lift this restriction in Section~\ref{sec:conc}. Let $\mathcal{V}$ be a countable set of variables (ranged over by $\var{x}, \var{y}$, etc.); let $\mathbb{A}$ be a (finite) alphabet, ranged over by $a$, $b$, etc.; and $\mathcal{A}$ be the set defined as $\{ \snd{a} \, \mid \, a \in \mathbb{A} \} \cup \{ \rcv{a} \, \mid \, a \in \mathbb{A} \}$. We let $\Op$ range over elements of $\{ !, ? \}$, so that ${\Op \! a}$ ranges over $\mathcal{A}$. The syntax of session types is given by \[ T \coloneqq \mathtt{end} \;\mid\; \inchoice \;\mid\; \outchoice \;\mid\; \rec{\var{x}} T \;\mid\; \var{x} \] where $I \neq \emptyset$ is finite, $a_i \in \mathbb{A}$ for all $i \in I$, $\msg{a}_i \neq \msg{a}_j$ for $i \neq j$, and $\var{x} \in \mathcal{V}$. Type $\mathtt{end}$ indicates the end of a session. Type $\inchoice$ specifies an \emph{internal} choice, indicating that the program chooses to send one of the $\msg{a}_i$ messages, then behaves as $T_i$. Type $\outchoice$ specifies an \emph{external} choice, saying that the program waits to receive one of the $\msg{a}_i$ messages, then behaves as $T_i$. Types $\rec{\var{x}} T$ and $\var{x}$ are used to specify recursive behaviours. We often write, e.g., $\{\snd{a}_1. 
T_1 \inchoicetop {\ldots} \inchoicetop \snd{a}_k . T_k \}$ for $\inchoiceop_{1 \leq i \leq k} { \snd{a}_i . T_i}$, write $\snd{a_1} . T_1$ when $k =1$, similarly for $\outchoice$, and omit trailing occurrences of $\mathtt{end}$. The sets of free and bound variables of a type $T$ are defined as usual (the unique binder is the recursion operator $\rec{\var{x}} T$). For each type $T$, we assume that two distinct occurrences of a recursion operator bind different variables, and that no variable has both free and bound occurrences. In coinductive definitions, we take an equi-recursive view of types, not distinguishing between a type $\rec{\var{x}} T$ and its unfolding $T \subs{\rec{\var{x}} T}{\var{x}}$. We assume that each type $T$ is \emph{contractive}~\cite{piercebook02}, e.g., $\rec{\var{x}} \var{x}$ is not a type. Let $\mathcal{T}$ be the set of all (contractive) session types and $\TYPES_{\mathit{c}} \subseteq \mathcal{T}$ the set of all closed session types (i.e., which do not contain free variables). \begin{figure}\label{fig:lts-types} \end{figure} A session type $T \in \TYPES_{\mathit{c}}$ induces a (finite) \emph{labelled transition system} (LTS) according to the rules in Figure~\ref{fig:lts-types}. We write $T \semarrow{{\Op \! a}}$ if there is $T' \in \mathcal{T}$ such that $T \semarrow{{\Op \! a}} T'$ and write $T \!\!\nrightarrow$ if $\forall {\Op \! a} \in \mathcal{A} \, : \, \neg (T \semarrow{{\Op \! a}} )$. \subsection{Subtyping for session types} Subtyping for session types was first studied in~\cite{GH99} and further studied in~\cite{DemangeonH11,CDY2014}. It is a crucial notion for practical applications of session types, as it allows for programs to be \emph{refined} while preserving safety. We give a definition of subtyping which is parameterised wrt.\ operators $\inchoicetop$ and $\outchoicetop$, so to allow us to give a common characteristic formula construction for both the subtype and the supertype relations, cf.\ Section~\ref{subsec:CF}. 
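As an executable aside (not part of the formal development), the syntax of Section~\ref{sub:session-types} and the LTS of Figure~\ref{fig:lts-types} can be prototyped in a few lines of Python. The tuple encoding below — tags `+` (internal choice), `&` (external choice), `rec`/`var` for recursion, and `!`/`?` prefixes on labels — is an assumption of this illustration, not the paper's notation.

```python
# Hypothetical tuple encoding of closed session types (illustration only):
#   ("end",)                      -- terminated session
#   ("+", {"!a": T, ...})         -- internal choice (selection)
#   ("&", {"?a": T, ...})         -- external choice (branching)
#   ("rec", "x", T), ("var", "x") -- recursion

def subst(t, x, s):
    """t[s/x]; bound variables are assumed pairwise distinct."""
    tag = t[0]
    if tag == "end":
        return t
    if tag == "var":
        return s if t[1] == x else t
    if tag == "rec":
        return t if t[1] == x else ("rec", t[1], subst(t[2], x, s))
    # internal ('+') or external ('&') choice: {action: continuation}
    return (tag, {a: subst(u, x, s) for a, u in t[1].items()})

def unfold(t):
    """Equi-recursive view: mu x.T is identified with T[mu x.T / x]."""
    while t[0] == "rec":
        t = subst(t[2], t[1], t)
    return t

def transitions(t):
    """Labelled transitions of a closed type (rules of Figure 1)."""
    t = unfold(t)
    return dict(t[1]) if t[0] in ("+", "&") else {}

# U1 = rec x. ?request.{ !ok.end (+) !ko.x }  -- the server type (1)
END = ("end",)
U1 = ("rec", "x",
      ("&", {"?request": ("+", {"!ok": END, "!ko": ("var", "x")})}))
```

For instance, `transitions(U1)` offers exactly the reception `?request`, and after firing `?request` followed by `!ko` one is back at (the unfolding of) `U1`.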
Below, we let $\bigtOp$ range over $\{\inchoicetop, \outchoicetop \}$. When writing $\choice$, we take the convention that $\Op$ refers to $!$ iff $\bigtOp$ refers to $\inchoicetop$ (and vice-versa for $?$ and $\outchoicetop$). We define the (idempotent) duality operator $\dual{\phantom{\outchoiceop}}$ as follows: $\dual{\vphantom{\outchoiceop}\inchoicetop} \defi \outchoicetop$, $\dual{\outchoicetop} \defi \inchoicetop$, $\dual{!} \defi ?$, and $\dual{?} \defi !$. \begin{mydef}[Subtyping]\label{def:ab-subtype} Fix $\bigtOp \in \{\inchoicetop, \outchoicetop \}$; then $\absub{\bigOp} \subseteq \TYPES_{\mathit{c}} \times \TYPES_{\mathit{c}}$ is the \emph{largest} relation that contains the rules: \[ \resizebox{\textwidth}{!}{$ \coinference{S-\ensuremath{{\bigtOp}}} { I \subseteq J & \forall i \in I \, : \, T_i \absub{\bigOp} U_i } { \choice \absub{\bigOp} \choiceSet{j}{J}{U} } \quad \coinference{S-end} {} {\mathtt{end} \absub{\bigOp} \mathtt{end}} \quad \coinference{S-\ensuremath{\dual{\bigtOp}}} { J \subseteq I & \forall j \in J \, : \, T_j \absub{\bigOp} U_j } { \cochoice \absub{\bigOp} \cochoiceSet{j}{J}{U} } $} \] The double line in the rules indicates that the rules should be interpreted \emph{coinductively}. Recall that we are assuming an equi-recursive view of types. \end{mydef} We comment on Definition~\ref{def:ab-subtype} assuming that $\bigtOp$ is set to $\inchoicetop$. Rule $\inferrule{S-\ensuremath{{\bigOp}}}$ says that a type $\inchoiceSet{j}{J}{U}$ can be replaced by a type that offers no more messages, e.g., $\snd{a} \absub{\,\inchoicetop} \snd{a} \inchoicetop \snd{b}$. Rule $\inferrule{S-\ensuremath{\dual{\bigOp}}}$ says that a type $\outchoiceSet{j}{J}{U}$ can be replaced by a type that is ready to receive at least the same messages, e.g., $\rcv{a} \outchoicetop \rcv{b} \absub{\,\inchoicetop} \rcv{a}$. Rule $\inferrule{S-end}$ is trivial. It is easy to see that $\absub{\,\inchoicetop} = ( \absub{\,\outchoicetop} )^{-1}$.
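Definition~\ref{def:ab-subtype} instantiated with $\inchoicetop$ (i.e., the relation $\subtype$) can be sketched executably. In the Python sketch below — which uses a hypothetical tuple encoding (`+`/`&` tags for internal/external choice, `!`/`?` prefixes on labels) rather than the paper's notation — recursion is handled by remembering visited pairs, in the spirit of the algorithmic rules recalled in Section~\ref{sec:algos}.

```python
def subst(t, x, s):
    """t[s/x] on the tuple encoding; bound variables pairwise distinct."""
    tag = t[0]
    if tag == "end":
        return t
    if tag == "var":
        return s if t[1] == x else t
    if tag == "rec":
        return t if t[1] == x else ("rec", t[1], subst(t[2], x, s))
    return (tag, {a: subst(u, x, s) for a, u in t[1].items()})

def unfold(t):
    """Equi-recursive unfolding of top-level rec-binders."""
    while t[0] == "rec":
        t = subst(t[2], t[1], t)
    return t

def subtype(t, u, seen=frozenset()):
    """t <= u per Definition 1 with (+): select no more, receive no fewer."""
    if (repr(t), repr(u)) in seen:     # coinduction: assume a visited pair
        return True
    seen = seen | {(repr(t), repr(u))}
    t, u = unfold(t), unfold(u)
    if t[0] == "end" or u[0] == "end":
        return t == u                  # rule [S-end]
    if t[0] != u[0]:
        return False                   # internal vs external: never related
    if t[0] == "+":                    # rule [S-(+)]: I subset of J
        small, big = t[1], u[1]
    else:                              # rule [S-&]: J subset of I
        small, big = u[1], t[1]
    return set(small) <= set(big) and all(
        subtype(t[1][a], u[1][a], seen) for a in small)

END = ("end",)
T1 = ("&", {"?request": ("+", {"!ok": END})})
U1 = ("rec", "x",
      ("&", {"?request": ("+", {"!ok": END, "!ko": ("var", "x")})}))
```

Here `subtype(T1, U1)` holds, matching $T_1 \subtype U_1$ from~\eqref{ex:intro-example-subs}, while `subtype(U1, T1)` fails.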
In fact, we can recover the subtyping of~\cite{DemangeonH11,CDY2014} (resp.~\cite{GH99,GH05}) from $\absub{\bigOp}$, by instantiating $\bigtOp$ to $\inchoicetop$ (resp.\ $\outchoicetop$). \begin{example} Consider the session types from~\eqref{ex:intro-example-subs}; we have $T_1 \absub{\,\inchoicetop} U_1$, $U_1 \absub{\,\outchoicetop} T_1$, $T_2 \absub{\,\inchoicetop} U_2$, and $U_2 \absub{\,\outchoicetop} T_2$. \end{example} Hereafter, we will write $\subtype$ (resp.\ $\supertype$) for the pre-order $\absub{\,\inchoicetop}$ (resp.\ $\absub{\,\outchoicetop}$). \section{Characteristic formulae for subtyping}\label{sec:mucal-char-formu} We give the core construction of this paper: a function that, given a (closed) session type $T$, returns a modal $\mu$-calculus formula~\cite{Kozen83} that characterises either all the supertypes of $T$ or all its subtypes. Technically, we ``translate'' a session type $T$ into a modal $\mu$-calculus formula $\phi$, so that $\phi$ characterises all the supertypes of $T$ (resp.\ all its subtypes). Doing so, checking whether $T$ is a subtype (resp.\ supertype) of $U$ can be reduced to checking whether $U$ is a model of $\phi$, i.e., whether $U \models \phi$ holds. The constructions presented here follow the theory first established in~\cite{steffen89}, which gives a characteristic formulae approach for (bi-)simulation-like relations over finite-state processes, notably for CCS processes. \subsection{Modal $\mu$-calculus}\label{sub:mucal} In order to encode subtyping for session types as a model checking problem, it is enough to consider the fragment of the modal $\mu$-calculus below: \[ \phi \; \coloneqq \; \truek \;\mid\; \falsek \;\mid\; \phi \land \phi \;\mid\; \phi \lor \phi \;\mid\; \mmbox{{\Op \! a}}{\phi} \;\mid\; \mmdiamond{{\Op \! a}}{\phi} \;\mid\; \mmnu{\var{x}} \phi \;\mid\; \var{x} \] Modal operators $\mmbox{{\Op \! a}}{}$ and $\mmdiamond{{\Op \!
a}}{}$ have precedence over Boolean binary operators $\land$ and $\lor$; the greatest fixpoint operator $\mmnuND{\var{x}}$ has the lowest precedence (and its scope extends as far to the right as possible). Let $\mathcal{F}$ be the set of all (contractive) modal $\mu$-calculus formulae and $\FORMULA_{\mathit{c}} \subseteq \mathcal{F}$ be the set of all closed formulae. Given a set of actions $A \subseteq \mathcal{A}$, we write $\compset{A}$ for $\mathcal{A} \setminus A$, and $\mmbox{A}{\phi}$ for $\bigwedge_{{\Op \! a} \in A}\mmbox{{\Op \! a}}{\phi}$. The $n^{th}$ approximation of a fixpoint formula is defined as follows: \[ \approxi{\mmnu{\var{x}} \phi}{0} \; \defi \; \truek \qquad\qquad\quad \approxi{\mmnu{\var{x}} \phi}{n} \; \defi \; \phi \subs{\approxi{\mmnu{\var{x}} \phi}{n-1}}{\var{x}} \qquad \text{if } n > 0 \] A \emph{closed} formula $\phi$ is interpreted on the labelled transition system induced by a session type $T$. The satisfaction relation $\models$ between session types and formulae is inductively defined as follows: \[ \begin{array} { l@{\quad \mathit{iff} \quad} l@{\quad} l@{\quad \mathit{iff} \quad} l} \multicolumn{2}{l}{T \models \truek \vphantom{ T \semarrow{{\Op \! a}} } } \\ T \models \phi_1 \!\land\! \phi_2 & T \models \phi_1 \text{ and } T \models \phi_2 \\ T \models \phi_1 \!\lor\! \phi_2 & T \models \phi_1 \text{ or } T \models \phi_2 \vphantom{ T \semarrow{{\Op \! a}} } \\ T \models \mmbox{{\Op \! a}}{\phi} & \forall T' \in \TYPES_{\mathit{c}} \, : \, \text{if } T \semarrow{{\Op \! a}} T' \text{ then } T' \models \phi \\ T \models \mmdiamond{{\Op \! a}}{\phi} & \exists T' \in \TYPES_{\mathit{c}} \, : \, T \semarrow{{\Op \! a}} T' \text{ and } T' \models \phi \\ T \models \mmnu{\var{x}} \phi & \forall n \geq 0 \, : \, T \models \approxi{ \mmnu{\var{x}} \phi }{n} \vphantom{ T \semarrow{{\Op \! a}} } \\ \end{array} \] Intuitively, $\truek$ holds for every $T$ (while $\falsek$ never holds).
Formula $\phi_1 \land \phi_2$ (resp.\ $\phi_1 \lor \phi_2$) holds if both components (resp.\ at least one component) of the formula hold in $T$. The construct $\mmbox{{\Op \! a}}{\phi}$ is a \emph{modal} operator that is satisfied if for each $\msg{{\Op \! a}}$-derivative $T'$ of $T$, the formula $\phi$ holds in $T'$. The dual modality is $\mmdiamond{{\Op \! a}}{\phi}$ which holds if there is an $\msg{{\Op \! a}}$-derivative $T'$ of $T$ such that $\phi$ holds in $T'$. Construct $\mmnu{\var{x}} \phi$ is the \emph{greatest} fixpoint operator (binding $\var{x}$ in $\phi$). \subsection{Characteristic formulae} \label{subsec:CF} We now construct a $\mu$-calculus formula from a (closed) session type, parameterised wrt.\ a constructor $\bigtOp$. This construction is somewhat reminiscent of the \emph{characteristic functional} of~\cite{steffen89}. \begin{mydef}[Characteristic formulae] \label{def:char-formula} The characteristic formula of $T \in \TYPES_{\mathit{c}}$ on $\bigtOp$ is given by the function $\charforname : \TYPES_{\mathit{c}} \times \{ \inchoicetop , \outchoicetop \} \rightarrow \FORMULA_{\mathit{c}}$, defined as: \[ \Anyform{T}{\bigtOp} \defi \begin{cases} \bigwedge_{i \in I} \, \mmdiamond{\opfun{\bigOp}{a_i}}{\Anyform{T_i}{\bigtOp}} & \text{if } T = \choice \\ \bigwedge_{i \in I} \, \mmbox{\opfun{\bigOp}{a_i}}{\Anyform{T_i}{\bigtOp}} & \text{if } T = \cochoice \\ \quad \land \, \bigvee_{i \in I} \, \mmdiamond{\opfun{\bigOp}{a_i}}{\truek} \, \land \, \mmbox{ \compset{ \{ \opfun{\bigOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \\ \mmbox{\mathcal{A}}{\falsek} & \text{if } T = \mathtt{end} \\ \mmnu{\var{x}} \Anyform{T'}{\bigtOp} & \text{if } T = \rec{\var{x}} T' \\ \var{x} & \text{if } T = \var{x} \xqedhere{59pt}{\qed} \end{cases} \] \renewcommand{\qed}{} \end{mydef} Given $T \in \TYPES_{\mathit{c}}$, $\Anyform{T}{\inchoicetop}$ is a $\mu$-calculus formula that characterises all the \emph{supertypes} of $T$; while $\Anyform{T}{\outchoicetop}$ characterises all its
\emph{subtypes}. For the sake of clarity, we comment on Definition~\ref{def:char-formula} assuming that $\bigtOp$ is set to $\inchoicetop$. The first case of the definition makes every branch \emph{mandatory}. If $T = \inchoice$, then every internal choice branch that $T$ can select must also be offered by a supertype, and the relation must hold after each selection. The second case makes every branch \emph{optional} but requires at least one branch to be implemented. If $T = \outchoice$, then ($i$) for each of the $\rcv{a_i}$-branch offered by a supertype, the relation must hold in its $\rcv{a_i}$-derivative, ($ii$) a supertype must offer at least one of the $\rcv{a_i}$ branches, and ($iii$) a supertype cannot offer anything else but the $\rcv{a_i}$ branches. If $T = \mathtt{end}$, then a supertype cannot offer any behaviour (recall that $\falsek$ does not hold for any type). Recursive types are mapped to greatest fixpoint constructions. Lemma~\ref{lem:compo} below states the compositionality of the construction, while Theorem~\ref{thm:main-theorem}, our main result, reduces subtyping checking to a model checking problem. A consequence of Theorem~\ref{thm:main-theorem} is that the characteristic formula of a session type precisely specifies the set of its subtypes or supertypes. \begin{restatable}{lemma}{lemcompo} \label{lem:compo} $\Anyform{T \subs{U}{\var{x}}}{\bigtOp} = \Anyform{T}{\bigtOp} \subs{ \Anyform{U}{\bigtOp} }{\var{x}}$ \end{restatable} \begin{proof} By structural induction, see Appendix~\ref{proof:lemcompo}. \end{proof} \begin{restatable}{theorem}{thmmaintheorem} \label{thm:main-theorem} $\forall T, U \in \TYPES_{\mathit{c}} \, : \, T \absub{\bigOp} U \iff U \models \Anyform{T}{\bigtOp}$ \end{restatable} \begin{proof} The proof essentially follows the techniques of~\cite{steffen89}, see Appendix~\ref{proof:thmmaintheorem}. 
\end{proof} \begin{corollary}\label{cor:TsubU} The following holds: \[ \begin{array}{c@{\qquad\quad}c} \begin{array}{l@{\;\,}l} (a) & T \subtype U \iff U \models \subform{T} \\ (b) & U \supertype T \iff T \models \SUPform{U} \end{array} & \begin{array}{l@{\;\,}l} (c) & U \models \subform{T} \iff T \models \SUPform{U} \end{array} \end{array} \] \end{corollary} \begin{proof} By Theorem~\ref{thm:main-theorem} and $\subtype = \absub{\,\inchoicetop}$, $\supertype = \absub{\,\outchoicetop}$, $\subtype = \supertype^{-1}$, and $\absub{\,\inchoicetop} = (\absub{\,\outchoicetop} )^{-1}$ \end{proof} \begin{proposition}\label{prop:char-complexity} For all $T, U \in \TYPES_{\mathit{c}}$, deciding whether or not $U \models \Anyform{T}{\bigtOp}$ holds can be done in time complexity of $\bigo{\lvert T \rvert \times \lvert U \rvert}$, in the worst case; where $\lvert T \rvert$ stands for the number of states in the LTS induced by $T$. \end{proposition} \begin{proof} Follows from~\cite{cs91}, since the size of $\Anyform{T}{\bigtOp}$ increases linearly with $\lvert T \rvert$. \end{proof} \begin{example}\label{ex:char-formula} Consider session types $T_1$ and $U_1$ from~\eqref{ex:intro-example-u-1} and~\eqref{ex:intro-example-subs} and fix $\mathcal{A} = \{ \rcv{request}, \snd{ok}, \snd{ko} \}$. 
Following Definition~\ref{def:char-formula}, we obtain: \[ \begin{array}{rcll} \subform{T_1} & = & \multicolumn{2}{l}{ \mmbox{\rcv{request}}{ \mmdiamond{\snd{ok}}{ \mmbox{\mathcal{A}}{\falsek} } } \;\, \land \;\, \mmdiamond{\rcv{request}}{\truek} \;\, \land \;\, \mmbox{ \neg \{ \rcv{request} \}}{\falsek} } \\ \SUPform{U_1} & = & \mmnu{\var{x}} \mmdiamond{\rcv{request}}{} \big( & \left( \mmbox{\snd{ok}}{ \mmbox{\mathcal{A}}{\falsek} } \; \land \;\, \mmbox{\snd{ko}}{ \var{x} } \right) \\ && & \land \; \, \left( \mmdiamond{\snd{ok}}{ \truek } \lor \mmdiamond{\snd{ko}}{ \truek } \right) \;\, \land \;\, \mmbox{ \neg \{ \snd{ok}, \snd{ko} \}}{\falsek} \big) \end{array} \] We have $U_1 \models \subform{T_1}$ and $T_1 \models \SUPform{U_1}$, as expected (recall that $T_1 \subtype U_1$). \end{example} \section{Safety and duality in session types}\label{sec:safety} A key ingredient of session type theory is the notion of \emph{duality} between types. In this section, we study the relation between duality of session types, characteristic formulae, and safety (i.e., error freedom). In particular, building on recent work~\cite{CDY2014} which studies the preciseness of subtyping for session types, we show how characteristic formulae can be used to guarantee safety. \label{subsec:safety} A system (of session types) is a pair of session types $T$ and $U$ that interact with each other by synchronising over messages. We write $T \spar U$ for a system consisting of $T$ and $U$ and let $S$ range over systems of session types. \begin{mydef}[Synchronous semantics]\label{def:synch-semantics} The \emph{synchronous} semantics of a \emph{system} of session types $T \spar U$ is given by the rule below, in conjunction with the rules of Figure~\ref{fig:lts-types}. \[ \inference {s-com} { T \semarrow{\anydir{a}} T' & U \semarrow{\coanydir{a}} U' } { T \spar U \semarrow{} T' \spar U' } \] We write $\semarrow{}^{\ast}$ for the reflexive transitive closure of $\semarrow{}$.
\end{mydef} Definition~\ref{def:synch-semantics} says that two types interact whenever they fire dual operations. \begin{example} Consider the following execution of system $T_1 \spar U_2$, from~\eqref{ex:intro-example-subs}: \begin{equation}\label{eq:good-exec} \begin{array}{ccll} T_1 \spar U_2 & \; = \; & \rcv{request} . \snd{ok} . \mathtt{end} \; \spar \; \rec{\var{x}} \snd{request} . \{ \ldots \} \\ & \; \semarrow{\; \; \;} \; & \snd{ok} . \mathtt{end} \; \spar \; \{ \rcv{ok} . \mathtt{end} \; \outchoicetop \, \rcv{ko} . \rec{\var{x}} \snd{request} . \{ \ldots \} \} \; \semarrow{\; \; \;} \; & \mathtt{end} \; \spar \; \mathtt{end} \end{array} \end{equation} \end{example} \begin{mydef}[Error~\cite{CDY2014} and safety] A system $T_1 \spar T_2$ is an \emph{error} if either: \begin{enumerate}[label=(\emph{\alph*})] \item \label{en:error-same} $T_1 = \choice$ and $T_2 = \choiceSet{j}{J}{U}$, with $\bigtOp$ fixed; \item \label{en:error-miss} $T_h = \inchoice$ and $T_g = \outchoiceSet{j}{J}{U}$; and $\exists i \in I \, : \, \forall j \in J \, : \, a_i \neq a_j$, with $h \neq g \in \{1,2\}$; or \item \label{en:error-end} $T_h = \mathtt{end}$ and $T_g = \choice$, with $h \neq g \in \{1,2\}$. \end{enumerate} We say that $S = T \spar U$ is \emph{safe} if for all $S' \, : \, S \semarrow{}^{\ast} S'$, $S'$ is not an error. \end{mydef} A system of the form~\ref{en:error-same} is an error since both types are attempting to send (resp.\ receive) messages at the same time. An error of type~\ref{en:error-miss} indicates that some of the messages cannot be received by one of the types. An error of type~\ref{en:error-end} indicates a system where one of the types has terminated while the other still expects to send or receive messages.
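Under the same hypothetical tuple encoding used in the earlier sketches (`+`/`&` tags for internal/external choice, `!`/`?` prefixes on labels — our assumption, not the paper's notation), the synchronous semantics and the error conditions above yield a direct, if naive, executable safety check that explores all reachable configurations of a system:

```python
def subst(t, x, s):
    """t[s/x] on the tuple encoding; bound variables pairwise distinct."""
    tag = t[0]
    if tag == "end":
        return t
    if tag == "var":
        return s if t[1] == x else t
    if tag == "rec":
        return t if t[1] == x else ("rec", t[1], subst(t[2], x, s))
    return (tag, {a: subst(u, x, s) for a, u in t[1].items()})

def unfold(t):
    while t[0] == "rec":
        t = subst(t[2], t[1], t)
    return t

def co(a):
    """Dual action: !a <-> ?a."""
    return ("?" if a[0] == "!" else "!") + a[1:]

def is_error(t, u):
    """Error conditions (a)-(c) on unfolded closed types."""
    if t[0] == u[0] != "end":
        return True                              # (a) same polarity
    for x, y in ((t, u), (u, t)):
        if x[0] == "end" and y[0] != "end":
            return True                          # (c) one side has ended
        if x[0] == "+" and y[0] == "&":
            if any(co(a) not in y[1] for a in x[1]):
                return True                      # (b) unreceivable message
    return False

def safe(t, u):
    """True iff no reachable configuration of t | u is an error."""
    seen, todo = set(), [(t, u)]
    while todo:
        t, u = (unfold(x) for x in todo.pop())
        if (repr(t), repr(u)) in seen:
            continue
        seen.add((repr(t), repr(u)))
        if is_error(t, u):
            return False
        for x, y, swap in ((t, u, False), (u, t, True)):
            if x[0] == "+" and y[0] == "&":      # rule [s-com]
                for a, x2 in x[1].items():
                    y2 = y[1][co(a)]             # present, else error (b)
                    todo.append((y2, x2) if swap else (x2, y2))
    return True

END = ("end",)
T1 = ("&", {"?request": ("+", {"!ok": END})})
U1 = ("rec", "x",
      ("&", {"?request": ("+", {"!ok": END, "!ko": ("var", "x")})}))
U2 = ("rec", "x",
      ("+", {"!request": ("&", {"?ok": END, "?ko": ("var", "x")})}))
```

For instance, `safe(T1, U2)` holds (the execution of the example above reaches `end | end`), whereas `safe(T1, U1)` fails immediately with an error of kind (a): both sides wait to receive.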
\begin{mydef}[Duality]\label{def:duality} The dual of a formula $\phi \in \mathcal{F}$, written $\dual{\phi}$ (resp.\ of a session type $T \in \mathcal{T}$, written $\dual{T}$), is defined recursively as follows: \[ \begin{array}{c@{\quad}c} \dual{\phi} \defi \begin{cases} \dual{\phi_1} \land \dual{\phi_2} & \text{if } \phi = \phi_1 \land \phi_2 \\ \dual{\phi_1} \lor \dual{\phi_2} & \text{if } \phi = \phi_1 \lor \phi_2 \\ \mmbox{\coanydir{a}}{\dual{\phi'}} & \text{if } \phi = \mmbox{{\Op \! a}}{\phi'} \\ \mmdiamond{\coanydir{a}}{\dual{\phi'}} & \text{if } \phi = \mmdiamond{{\Op \! a}}{\phi'} \\ \mmnu{\var{x}} \dual{\phi'} & \text{if } \phi = \mmnu{\var{x}} \phi' \\ \phi & \text{if } \phi = \truek, \falsek, \text{ or } \var{x} \xqedhere{196pt}{\qed} \end{cases} & \dual{T} \defi \begin{cases} \cocochoiceSetNoIdx{i}{I}{\dual{T_i}} & \text{if } T = \choice \\ \rec{\var{x}} \dual{T'} & \text{if } T = \rec{\var{x}} T' \\ \var{x} & \text{if } T = \var{x} \\ \mathtt{end} & \text{if } T = \mathtt{end} \end{cases} \end{array} \] \renewcommand{\qed}{} \end{mydef} In Definition~\ref{def:duality}, notice that the dual of a formula only renames labels. \begin{lemma} For all $T \in \TYPES_{\mathit{c}}$ and $\phi \in \FORMULA_{\mathit{c}}$, $T \models \phi \iff \dual{T} \models \dual{\phi}$. \end{lemma} \begin{proof} Direct from the definitions of $\dual{T}$ and $\dual{\phi}$ (labels are renamed uniformly). \end{proof} \begin{restatable}{theorem}{thmcharformduality} \label{thm:char-form-duality} For all $T \in \mathcal{T} \, : \, \dual{\Anyform{T}{\bigtOp}} = \Anyform{\dual{T}}{\dual{\bigtOp}}$. \end{restatable} \begin{proof} By structural induction on $T$, see Appendix~\ref{proof:thmcharformduality}. \end{proof} Theorem~\ref{thm:safety} follows straightforwardly from~\cite{CDY2014} and allows us to obtain a sound and complete model-checking based condition for safety, cf.\ Theorem~\ref{thm:safety-statements}.
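As an executable aside, the type-level duality of Definition~\ref{def:duality} transcribes directly onto the hypothetical tuple encoding used in the earlier sketches: flip the choice constructor and the polarity of each label. One can then observe that duality is an involution and that it sends $U_1$ of~\eqref{ex:intro-example-u-1} to $U_2$ of~\eqref{ex:intro-example-u-2}.

```python
def dual(t):
    """Dual of a session type (Definition 8): swap (+)/& and !/?."""
    tag = t[0]
    if tag in ("end", "var"):
        return t
    if tag == "rec":
        return ("rec", t[1], dual(t[2]))
    flip = "&" if tag == "+" else "+"
    return (flip, {("?" if a[0] == "!" else "!") + a[1:]: dual(u)
                   for a, u in t[1].items()})

# U1 = rec x. ?request.{ !ok.end (+) !ko.x }  and its dual U2
END = ("end",)
U1 = ("rec", "x",
      ("&", {"?request": ("+", {"!ok": END, "!ko": ("var", "x")})}))
U2 = ("rec", "x",
      ("+", {"!request": ("&", {"?ok": END, "?ko": ("var", "x")})}))
```

Indeed `dual(U1) == U2` and `dual(dual(U1)) == U1`, mirroring the idempotence of the duality operator.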
\begin{restatable}[Safety]{theorem}{thmsafety} \label{thm:safety} $T \spar U$ is safe $\iff$ $(T \subtype \dual{U} \lor \dual{T} \subtype U)$. \end{restatable} \begin{proof} Direction $(\Longrightarrow)$ follows from~\cite[Table 7]{CDY2014} and direction $(\Longleftarrow)$ is by coinduction on the derivations of $T \subtype \dual{U}$ and $\dual{T} \subtype U$. See Appendix~\ref{app:safety} for details. \end{proof} \noindent Finally, we obtain: \begin{theorem}\label{thm:safety-statements} The following statements are equivalent: $\; (a) {\;\,} T \spar U \text{ is safe }$ \[ \begin{array}{l@{\;\,}l@{\qquad\qquad}l@{\;\,}l} (b) & \dual{U} \models \Anyform{T}{\inchoicetop} \lor U \models \Anyform{\dual{T}}{\inchoicetop} & (d) & U \models \Anyform{\dual{T}}{\outchoicetop} \lor \dual{U} \models \Anyform{T}{\outchoicetop} \\ (c) & T \models \Anyform{\dual{U}}{\outchoicetop} \lor \dual{T} \models \Anyform{{U}}{\outchoicetop} & (e) & \dual{T} \models \Anyform{{U}}{\inchoicetop} \lor {T} \models \Anyform{\dual{U}}{\inchoicetop} \end{array} \] \end{theorem} \begin{proof} By direct applications of Theorem~\ref{thm:safety}, then Corollary~\ref{cor:TsubU} and Theorem~\ref{thm:char-form-duality}. \end{proof} \section{Alternative algorithms for subtyping}\label{sec:algos} In order to compare the cost of checking the subtyping relation via characteristic formulae to other approaches, we present two other algorithms: the original algorithm as given by Gay and Hole in~\cite{GH05} and an adaptation of Kozen, Palsberg, and Schwartzbach's algorithm~\cite{KPS95} for recursive subtyping for the $\lambda$-calculus. \subsection{Gay and Hole's algorithm} \label{sub:gay-hole-algo} \begin{figure} \caption{Algorithmic subtyping rules~\cite{GH05}} \label{fig:gay-hole-algo} \end{figure} The inference rules of Gay and Hole's algorithm are given in Figure~\ref{fig:gay-hole-algo} (adapted to our setting).
The rules essentially follow those of Definition~\ref{def:ab-subtype} but deal explicitly with recursion. They use judgments $\judge{\Gamma}{T}{U}$ in which $T$ and $U$ are (closed) session types and $\Gamma$ is a sequence of assumed instances of the subtyping relation, i.e., $\Gamma = T_1 \subtype_{c} U_1 , {\scriptstyle \ldots}, T_k \subtype_{c} U_k$, saying that each pair $T_i \subtype_{c} U_i$ has been visited. To guarantee termination, rule $\inferrule{Assump}$ should always be used if it is applicable. \begin{theorem}[Correspondence{~\cite[Corollary 2]{GH05}}] \label{thm:gay-hole-algo} $T \subtype U$ if and only if $\judge{\emptyset}{T}{U}$ is derivable from the rules in Figure~\ref{fig:gay-hole-algo}. \end{theorem} \noindent Proposition~\ref{prop:gay-complexity}, a contribution of this paper, states the algorithm's complexity. \begin{proposition}\label{prop:gay-complexity} For all $T, U \in \TYPES_{\mathit{c}}$, the problem of deciding whether or not $\judge{\emptyset}{T}{U}$ is derivable has an $\bigo{n^{2^n}}$ time complexity, in the worst case; where $n$ is the number of nodes in the parsing tree of $T$ or $U$ (whichever is bigger). \end{proposition} \begin{proof} Assume the bigger session type is $T$ and its size is $n$ (the number of nodes in its parsing tree). Observe that the algorithm in Figure~\ref{fig:gay-hole-algo} needs to visit every node of $T$ and relies on explicit unfolding of recursive types. Given a type of size $n$, its unfolding is of size $\bigo{n^2}$, in the worst case. Hence, we have a chain $\bigo{n} + \bigo{n^2} + \bigo{n^4} + \ldots$, or $\bigo{\sum_{1 \leq i \leq k} n^{2^i}}$, where $k$ is a bound on the number of derivations needed for the algorithm to terminate. According to~\cite[Lemma 10]{GH05}, the number of derivations is bounded by the number of sub-terms of $T$, which is $\bigo{n}$. Thus, we obtain a worst case time complexity of $\bigo{n^{2^n}}$.
\end{proof} \subsection{Kozen, Palsberg, and Schwartzbach's algorithm} \label{sub:kozen-et-al} Considering that the results of~\cite{KPS95} ``\emph{generalise to an arbitrary signature of type constructors (\ldots)}'', we adapt Kozen et al.'s algorithm, originally designed for subtyping recursive types in the $\lambda$-calculus. Intuitively, the algorithm reduces the problem of subtyping to checking the language emptiness of an automaton given by the product of two (session) types. The intuition of the theory behind the algorithm is that ``\emph{two types are ordered if no common path detects a counterexample}''. We give the details of our instantiation below. The set of type constructors over $\mathcal{A}$, written $\mathfrak{C}_\mathcal{A}$, is defined as follows: \[ \mathfrak{C}_\mathcal{A} \defi \{ \mathtt{end} \} \cup \{ \choicecons{\inchoicetop}{A} \, \mid \, \emptyset \subset A \subseteq \mathcal{A} \} \cup \{ \choicecons{\outchoicetop}{A} \, \mid \, \emptyset \subset A \subseteq \mathcal{A} \} \] \begin{mydef}[Term automata]\label{def:term-automaton} A term automaton over $\mathcal{A}$ is a tuple $ \mathcal{M} = (Q, \, \mathfrak{C}_\mathcal{A}, \, q_0, \, \delta, \, \ell) $ where \begin{itemize} \item $Q$ is a (finite) set of states, \item $q_0 \in Q$ is the initial state, \item $\delta : Q \times \mathcal{A} \rightarrow Q$ is a (partial) function (the \emph{transition function}), and \item $\ell : Q \rightarrow \mathfrak{C}_\mathcal{A}$ is a (total) labelling function \end{itemize} such that for any $q \in Q$, if $\ell(q) \in \{\choicecons{\inchoicetop}{A} , \choicecons{\outchoicetop}{A} \} $, then $\delta(q, {\Op \! a})$ is defined for all ${\Op \! a} \in A$; and for any $q \in Q$ such that $\ell(q) = \mathtt{end}$, $\delta(q, {\Op \! a})$ is undefined for all ${\Op \! a} \in \mathcal{A}$. We decorate $Q$, $\delta$, etc.\ with a superscript, e.g., $\mathcal{M}$, where necessary. 
\end{mydef} We assume that session types have been ``translated'' to term automata; the transformation is straightforward (see~\cite{DY13} for a similar transformation). Given a session type $T \in \TYPES_{\mathit{c}}$, we write $\autof{T}$ for its corresponding term automaton. \begin{mydef}[Subtyping]\label{def:kozen-sub} $\sqsubseteq$ is the smallest binary relation on $\mathfrak{C}_\mathcal{A}$ such that: \[ \mathtt{end} \sqsubseteq \mathtt{end} \qquad \choicecons{\inchoicetop}{A} \sqsubseteq \choicecons{\inchoicetop}{B} \iff A \subseteq B \qquad \choicecons{\outchoicetop}{A} \sqsubseteq \choicecons{\outchoicetop}{B} \iff B \subseteq A \qedhere \] \end{mydef} Definition~\ref{def:kozen-sub} essentially maps the rules of Definition~\ref{def:ab-subtype} to type constructors. The order $\sqsubseteq$ is used in the product automaton to identify accepting states; see below. \begin{mydef}[Product automaton]\label{def:product-automaton} Given two term automata $\automaton$ and $\mathcal{N}$ over $\mathcal{A}$, their product automaton $\autoprod{\automaton}{\mathcal{N}} = (P, \, p_0, \, \Delta, \, F)$ is such that \begin{itemize} \item $P = Q^{\automaton} \times Q^{\mathcal{N}}$ are the states of $\autoprod{\automaton}{\mathcal{N}}$, \item $p_0 = (q_0^{\automaton}, q_0^{\mathcal{N}})$ is the initial state, \item $\Delta : P \times \mathcal{A} \rightarrow P$ is the partial function which for $q_1 \in Q^{\automaton}$ and $q_2 \in Q^{\mathcal{N}}$ gives \[ \Delta( ( q_1, q_2 ), {\Op \! a} ) = ( \delta^{\automaton}(q_1, {\Op \! a} ) , \delta^{\mathcal{N}}(q_2, {\Op \! a} ) ) \] \item $F \subseteq P$ is the set of \emph{accepting} states: $ F = \{ \, ( q_1, q_2 ) \, \mid \, \ell^{\automaton}(q_1) \nsqsubseteq \ell^{\mathcal{N}}(q_2) \, \} $ \end{itemize} Note that $\Delta( ( q_1, q_2 ), {\Op \! a} )$ is defined iff $\delta^{\automaton}(q_1, {\Op \! a})$ and $\delta^{\mathcal{N}}(q_2, {\Op \! a})$ are defined.
\end{mydef} Following~\cite{KPS95}, we obtain Theorem~\ref{def:kozen-correspondence}. \begin{theorem}\label{def:kozen-correspondence} Let $T, U \in \TYPES_{\mathit{c}}$, $ T \subtype U$ iff the language of $\autoprod{\autof{T}}{\autof{U}}$ is empty. \end{theorem} Theorem~\ref{def:kozen-correspondence} essentially says that $T \subtype U$ iff one cannot find a ``common path'' in $T$ and $U$ that leads to nodes whose labels are not related by $\sqsubseteq$, i.e., one cannot find a counterexample for them \emph{not} being in the subtyping relation. \newcommand{\;\,\,}{\;\,\,} \begin{example} Below we show the constructions for $T_1$~\eqref{ex:intro-example-u-1} and $U_1$~\eqref{ex:intro-example-subs}. \[ \begin{array}{c@{\;\,\,}c@{\;\,\,}c@{\;\,\,}c} \begin{tikzpicture}[mycfsm] \node[state, fill=gray!15] (q0) {$\choicecons{\outchoicetop}{\{ \rcv{request} \}}$}; \node[state, below of=q0] (q1) {$\choicecons{\inchoicetop}{ \{ \snd{ok} \} }$}; \node[state, below of=q1] (q2) {$\mathtt{end}$}; \node[below of=q2,yshift=0.5cm,font=\tiny] {$\autof{T_1}$}; \path (q0) edge node [right] {$\rcv{request}$} (q1) (q1) edge node [left] {$\snd{ok}$} (q2) ; \end{tikzpicture} & \begin{tikzpicture}[mycfsm] \node[state, fill=gray!15] (q0) {$\choicecons{\outchoicetop}{\{ \rcv{request} \}}$}; \node[state, below of=q0] (q1) {$\choicecons{\inchoicetop}{ \{ \snd{ok} , \snd{ko} \} }$}; \node[state, below of=q1] (q2) {$\mathtt{end}$}; \node[below of=q2,yshift=0.5cm,font=\tiny] {$\autof{U_1}$}; \path (q0) edge [bend right] node [left] {$\rcv{request}$} (q1) (q1) edge node [left] {$\snd{ok}$} (q2) (q1) edge [bend right] node [right] {$\snd{ko}$} (q0) ; \end{tikzpicture} & \begin{tikzpicture}[mycfsm] \node[state, fill=gray!15] (q0) {$ \choicecons{\outchoicetop}{\{ \rcv{request} \}} \sqsubseteq \choicecons{\outchoicetop}{\{ \rcv{request} \}} $}; \node[state, below of=q0] (q1) {$ \choicecons{\inchoicetop}{ \{ \snd{ok} \} } \sqsubseteq \choicecons{\inchoicetop}{ \{ \snd{ok} , \snd{ko} \} } $}; 
\node[state, below of=q1] (q2) {$ \mathtt{end} \sqsubseteq \mathtt{end} $}; \node[below of=q2,yshift=0.5cm,font=\tiny] {$\autoprod{\autof{T_1}}{\autof{U_1}}$ }; \path (q0) edge node [left] {$\rcv{request}$} (q1) (q1) edge node [left] {$\snd{ok}$} (q2) ; \end{tikzpicture} & \begin{tikzpicture}[mycfsm] \node[state, fill=gray!15] (q0) {$ \choicecons{\outchoicetop}{\{ \rcv{request} \}} \sqsubseteq \choicecons{\outchoicetop}{\{ \rcv{request} \}} $}; \node[state, below of=q0, accepting] (q1) {$ \choicecons{\inchoicetop}{ \{ \snd{ok} , \snd{ko} \} } \nsqsubseteq \choicecons{\inchoicetop}{ \{ \snd{ok} \} } $}; \node[state, below of=q1] (q2) {$ \mathtt{end} \sqsubseteq \mathtt{end} $}; \node[below of=q2,yshift=0.5cm,font=\tiny] {$ \autoprod{\autof{U_1}}{\autof{T_1}}$}; \path (q0) edge node [left] {$\rcv{request}$} (q1) (q1) edge node [left] {$\snd{ok}$} (q2) ; \end{tikzpicture} \end{array} \] Where initial states are shaded and accepting states are denoted by a double line. Note that the language of $\autoprod{\autof{T_1}}{\autof{U_1}}$ is empty (no accepting states). \end{example} \begin{proposition}\label{prop:kozen-complexity} For all $T, U \in \TYPES_{\mathit{c}}$, the problem of deciding whether or not the language of $\autoprod{\autof{T}}{\autof{U}}$ is empty has a worst case complexity of $\bigo{\lvert T \rvert \times \lvert U \rvert}$; where $\lvert T \rvert$ stands for the number of states in the term automaton $\autof{T}$. \end{proposition} \begin{proof} Follows from the fact that the algorithm in~\cite{KPS95} has a complexity of $\bigo{n^2}$, see~\cite[Theorem 18]{KPS95}. This complexity result applies also to our instantiation, assuming that checking membership of $\sqsubseteq$ is relatively inexpensive, i.e., $\lvert A \rvert \ll \lvert Q^\mathcal{M} \rvert$ for each $q$ such that $\ell^\mathcal{M}(q) \in \{\choicecons{\inchoicetop}{A} , \choicecons{\outchoicetop}{A} \}$. 
\end{proof} \section{Experimental evaluation} \label{sec:tool} Proposition~\ref{prop:gay-complexity} states that Gay and Hole's classical algorithm has exponential complexity, while the other approaches have quadratic complexity (Propositions~\ref{prop:char-complexity} and~\ref{prop:kozen-complexity}). The rest of this section presents several experiments that give a better perspective of the \emph{practical} cost of these approaches. \begin{figure} \caption{Benchmarks (1)} \label{fig:plots-1} \end{figure} \subsection{Implementation overview and metrics} We have implemented three different approaches to checking whether two given session types are in the subtyping relation given in Definition~\ref{def:ab-subtype}. The tool~\cite{tool}, written in Haskell, consists of three main parts: ($i$) A module that translates session types to the mCRL2 specification language~\cite{groote2014modeling} and generates a characteristic ($\mu$-calculus) formula (cf.\ Definition~\ref{def:char-formula}); ($ii$) A module implementing the algorithm of~\cite{GH05} (see Section~\ref{sub:gay-hole-algo}), which relies on the Haskell $\mathtt{bound}$ library to make session type unfolding as efficient as possible; ($iii$) A module implementing our adaptation of Kozen et al.'s algorithm~\cite{KPS95} (see Section~\ref{sub:kozen-et-al}). Additionally, we have developed an accessory tool which generates arbitrary session types using Haskell's QuickCheck library~\cite{quickcheck}. The tool invokes the mCRL2 toolset~\cite{mcrl2} (release version {\tt 201409.1}) to check the validity of a $\mu$-calculus formula on a given model. We experimented with invoking mCRL2 using several parameters and concluded that the default parameters gave us the best performance overall. Following discussions with mCRL2 developers, we have notably experimented with a parameter that pre-processes the $\mu$-calculus formula to ``insert dummy fixpoints in modal operators''.
This parameter gave us better performance in some cases, but dramatic losses for ``super-recursive'' session types. Instead, adding ``dummy fixpoints'' while generating the characteristic formulae gave us the best results overall.\footnote{This optimisation was first suggested on the mCRL2 mailing list.} The tool is thus based on a slight modification of Definition~\ref{def:char-formula} where a modal operator $\mmbox{{\Op \! a}}{\phi}$ becomes $\mmbox{{\Op \! a}}{\mmnu{\var{t}}\phi}$ (with $\var{t}$ fresh and unused) and similarly for $\mmdiamond{{\Op \! a}}{\phi}$. Note that this modification does not change the semantics of the generated formulae. We use the following functions to measure the size of a session type. \[ \resizebox{\textwidth}{!}{$ \begin{array}{ll} \nummsg{T} \, \defi & \unfoldP{T}{X} \, \defi \\ \quad \left\{ \! \begin{array}{ll} 0 & \text{if } T = \mathtt{end} \; \text{or} \; T = \var{x} \\ \nummsg{T'} & \text{if } T = \rec{x} T' \\ \lvert I \rvert \!+\! \sum_{i \in I} \nummsg{T_i} & \text{if } T = {\choice} \end{array} \right. \; & \quad \left\{ \! \begin{array}{ll} 0 & \text{if } T = \mathtt{end} \; \text{or} \; T = \var{x} \\ (1 \!+\! \varocc{T'}{\var{x}}) \!\times\! \unfold{T'} & \text{if } T = \rec{x} T' \\ \lvert I \rvert \!+\! \sum_{i \in I} \unfold{T_i} & \text{if } T = {\choice} \end{array} \right. \end{array} $} \] Function $\nummsg{T}$ returns the \emph{number of messages} in $T$. Letting $\varocc{T}{\var{x}}$ be the number of times variable $\var{x}$ appears \emph{free} in session type $T$, function $\unfold{T}$ returns the number of messages in the unfolding of $T$. Function $\unfold{T}$ takes into account the structure of a type wrt.\ recursive definitions and calls (by unfolding every recursion variable once).
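As an illustrative sketch (in Python, with hypothetical AST constructors \texttt{End}, \texttt{Var}, \texttt{Rec}, \texttt{Choice} rather than the tool's Haskell types), the two metrics can be computed by direct recursion on the structure of a type, mirroring the definitions above:

```python
# Illustrative sketch of the size metrics defined above; the AST and names
# (End, Var, Rec, Choice, varocc, nummsg, unfold_size) are ours, not the tool's.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class End: pass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Rec:
    name: str
    body: 'object'

@dataclass(frozen=True)
class Choice:
    out: bool
    branches: Tuple[Tuple[str, 'object'], ...]

def varocc(t, x):
    """Number of free occurrences of variable x in t."""
    if isinstance(t, Var):
        return 1 if t.name == x else 0
    if isinstance(t, Rec):
        return 0 if t.name == x else varocc(t.body, x)
    if isinstance(t, Choice):
        return sum(varocc(u, x) for _, u in t.branches)
    return 0

def nummsg(t):
    """Number of messages in t: 0 for end/variables, |I| plus the
    branches' counts for a choice, the body's count for a recursion."""
    if isinstance(t, Rec):
        return nummsg(t.body)
    if isinstance(t, Choice):
        return len(t.branches) + sum(nummsg(u) for _, u in t.branches)
    return 0

def unfold_size(t):
    """Number of messages after unfolding each recursion variable once:
    a recursion multiplies its body's count by (1 + free occurrences)."""
    if isinstance(t, Rec):
        return (1 + varocc(t.body, t.name)) * unfold_size(t.body)
    if isinstance(t, Choice):
        return len(t.branches) + sum(unfold_size(u) for _, u in t.branches)
    return 0
```

For a recursive two-branch choice whose body mentions its recursion variable once, the message count is $2$ while the unfolded count is $(1+1) \times 2 = 4$: one unfolding doubles the messages.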
\begin{figure} \caption{Benchmarks (2)} \label{fig:plots-2} \end{figure} \subsection{Benchmark results} The first set of benchmarks compares the performance of the three approaches when the two given types are identical, i.e., we measure the time it takes for an algorithm to check whether $T \subtype T$ holds. The second set of benchmarks considers types that are ``unfolded'', so that the two types have different sizes. Note that checking whether two equal types are in the subtyping relation is one of the most costly cases of subtyping since every branch of a choice must be visited. Our results below show the performance of four algorithms: ($i$) our Haskell implementation of Gay and Hole's algorithm (GH), ($ii$) our implementation of Kozen, Palsberg, and Schwartzbach's algorithm (KPS), ($iii$) an invocation of mCRL2 to check whether $U \models \subform{T}$ holds, and ($iv$) an invocation of mCRL2 to check whether $T \models \SUPform{U}$ holds. All the benchmarks were conducted on a 3.40GHz Intel i7 computer with 16GB of RAM. Unless specified otherwise, the tests have been executed with a timeout set to $2$ hours ($7200$ seconds). A gap appears in the plots whenever an algorithm reached the timeout. Times ($y$-axis) are plotted on a \emph{logarithmic} scale; the scale used for the size of types ($x$-axis) is specified below each plot. {\bf Arbitrary session types} Plots (a) and (b) in Figure~\ref{fig:plots-1} show how the algorithms perform with arbitrary session types (randomly generated by our tool). Plot (a) shows clearly that the execution time of KPS, $T \models \SUPform{T}$, and $T \models \subform{T}$ mostly depends on $\nummsg{T}$, while plot (b) shows that GH is mostly affected by the number of messages in the unfolding of a type ($\unfold{T}$). Unsurprisingly, GH performs better for smaller session types, but starts reaching the timeout when $\nummsg{T} \approx 700$.
The other three algorithms have roughly similar performance, with the model checking based ones performing slightly better for large session types. Note that both $T \models \SUPform{T}$ and $T \models \subform{T}$ have roughly the same execution time. {\bf Non-recursive arbitrary session types} Plot (c) in Figure~\ref{fig:plots-1} shows how the algorithms perform with arbitrary types that do \emph{not} feature any recursive definition (randomly generated by our tool), i.e., the types are of the form: \[ \textstyle T \coloneqq \mathtt{end} \;\mid\; \inchoice \;\mid\; \outchoice \] The plot shows that GH performs much better than the other three algorithms (terminating in under $1$s for each invocation). Indeed, this set of benchmarks is the best-case scenario for GH: there is no recursion, hence no need to unfold types. Observe that the model checking based algorithms perform better than KPS for large session types. Again, $T \models \SUPform{T}$ and $T \models \subform{T}$ behave similarly. {\bf Handcrafted session types} Plots (d) and (e) in Figure~\ref{fig:plots-2} show how the algorithms deal with ``super-recursive'' types, i.e., types of the form: \[ \textstyle T \coloneqq \recND{x}_1 . {\Op \! a}_1 . \ldots \recND{x}_k . {\Op \! a}_k \left\{ \; \bigOp_{1 \leq i \leq k} {\Op \! a}_i . \{ \bigOp_{1 \leq j \leq k} {\Op \! a}_j . \var{x}_j \} \; \right\} \] where $\nummsg{T} = k(k+2)$ for each $T$. Plot (d) shows the results of experiments with $\bigtOp$ set to $\inchoicetop$ and $\Op$ to $\sendop$, while $\bigtOp$ is set to $\outchoicetop$ and $\Op$ to $\rcvop$ in plot (e). The exponential time complexity of GH appears clearly in both plots: GH starts reaching the timeout when $\nummsg{T} = 80$ ($k=8$). However, the other three algorithms deal well with larger session types of this form.
Interestingly, due to the nature of these session types (consisting of either only \emph{internal} choices or only \emph{external} choices), the two model checking based algorithms perform slightly differently. This is explained by Definition~\ref{def:char-formula}, where the formula generated with $\SUPform{T}$ for an internal choice is larger than for an external choice, and vice versa for $\subform{T}$. Observe that $T \models \subform{T}$ (resp.\ $T \models \SUPform{T}$) performs better than KPS for large session types in plot (d) (resp.\ plot (e)). {\bf Unfolded types} \newcommand{{V}}{{V}} The last set of benchmarks evaluates the performances of the four algorithms to check whether $T = \rec{\var{x}} {V} \; \subtype \; \rec{\var{x}} \left( {V} \subs{{V}}{\var{x}} \right) = U$ holds, where $\var{x}$ is fixed and ${V}$ (randomly generated) is of the form: \[ \textstyle {V} \coloneqq \inchoiceSet{i}{I}{{V}} \;\mid\; \outchoiceSet{i}{I}{{V}} \;\mid\; \var{x} \] Plot (f) in Figure~\ref{fig:plots-2} shows the results of our experiments (we have set the timeout to $6$ hours for these tests). Observe that $U \models \subform{T}$ starts reaching the timeout quickly. In this case, the model (i.e., $U$) is generally much larger than the formula (i.e., $\subform{T}$). After discussing with the mCRL2 team, this discrepancy seems to originate from internal optimisations of the model checker that can be diminished (or exacerbated) by tweaking the parameters of the tool-set. The other three algorithms have similar performance. Note that the good performance of GH in this case can be explained by the fact that there is only one recursion variable in these types; hence the size of their unfolding does not grow very fast. \section{Related work and conclusions}\label{sec:conc} \label{sec:related} \paragraph{\bf Related work} Subtyping for recursive types has been studied for many years.
Amadio and Cardelli~\cite{AC93} introduced the first subtyping algorithm for recursive types for the $\lambda$-calculus. Kozen et al.\ gave a quadratic subtyping algorithm in~\cite{KPS95}, which we have adapted for session types, cf.\ Section~\ref{sub:kozen-et-al}. A good introduction to the theory and history of the field is in~\cite{GLP02}. Pierce and Sangiorgi~\cite{PS96} introduced subtyping for IO types in the $\pi$-calculus, which later became a foundation for the algorithm of Gay and Hole, who first introduced subtyping for session types in the $\pi$-calculus in~\cite{GH05}. The paper~\cite{DemangeonH11} studied an abstract encoding between linear types and session types, with a focus on subtyping. Chen et al.~\cite{CDY2014} studied the notion of \emph{preciseness} of subtyping relations for session types. The present work is the first to study the algorithmic aspect of the problem. Characteristic formulae for finite processes were first studied in~\cite{GS86}, then in~\cite{steffen89} for finite-state processes. Since then the theory has been studied extensively~\cite{si94,ai97,ai07,cs91,FS05,MOlm98,ails12} for most of van Glabbeek's spectrum~\cite{Glabbeek90} and in different settings (e.g., time~\cite{AIPP00} and probabilistic~\cite{SZ12}). See~\cite{ails12,ai07} for a detailed historical account of the field. This is the first time characteristic formulae are applied to the field of session types. Recent work~\cite{ails12} proposes a general framework to obtain characteristic formula constructions for simulation-like relations ``for free''. We chose to follow~\cite{steffen89} as it is a better fit for session types, allowing for a straightforward inductive construction of a characteristic formula. Moreover,~\cite{steffen89} uses the standard $\mu$-calculus, which allowed us to integrate our theory with an existing model checker.
\paragraph{\bf Conclusions} In this paper, we gave a first connection between session types and model checking, through a characteristic formulae approach based on the $\mu$-calculus. We gave three new algorithms for subtyping: two are based on model checking and one is an instantiation of an algorithm for the $\lambda$-calculus~\cite{KPS95}. All of them have quadratic complexity in the worst case and behave well in practice. Our approach can easily be: ($i$) adapted to types for the $\lambda$-calculus (see Appendix~\ref{sec:lambda}) and ($ii$) extended to session types that carry other (\emph{closed}) session types, e.g., see~\cite{GH05,CDY2014}, by simply applying the algorithm recursively on the carried types. For instance, to check $ \snd{a\langle \rcv{c} \outchoicetop \rcv{d} \rangle } \; \subtype \; \snd{a\langle \rcv{c} \rangle } \, \inchoicetop \, \snd{b \langle \mathtt{end} \rangle } $ one can check the subtyping for the outer-most types, while building constraints, i.e., $\{ \rcv{\!c} \, \outchoicetop \rcv{d} \, \subtype \, \rcv{c} \}$, to be checked later on, by re-applying the algorithm. The present work paves the way for new connections between session types and modal fixpoint logic or model checking theories. It is a basis for upcoming connections between model checking and classical problems of session types, such as the asynchronous subtyping of~\cite{CDY2014} and multiparty compatibility checking~\cite{LTY15, DY13}. We are also considering applying model checking approaches to session types with probabilistic, logical~\cite{BHTY10}, or time~\cite{BLY15,BYY14} annotations. Finally, we remark that~\cite{CDY2014} also establishes that subtyping (cf.\ Definition~\ref{def:ab-subtype}) is \emph{sound} (but not complete) wrt.\ the \emph{asynchronous} semantics of session types, which models programs that communicate through FIFO buffers.
Thus, our new conditions (items $(b)$--$(e)$ of Theorem~\ref{thm:safety-statements}) also imply safety $(a)$ in the asynchronous setting. \paragraph{\bf Acknowledgements} We would like to thank Luca Aceto, Laura Bocchi, and Alceste Scalas for their invaluable comments on earlier versions of this work. This work is partially supported by UK EPSRC projects EP/K034413/1, EP/K011715/1, and EP/L00058X/1; and by EU 7FP project under grant agreement 612985 (UPSCALE). \appendix \section{Appendix: Proofs}\label{app:proofs} \subsection{Compositionality} \lemcompo* \begin{proof}\label{proof:lemcompo} By induction on the structure of $T$. \begin{enumerate} \item If $T = \mathtt{end}$, then \begin{itemize} \item $T \subs{U}{\var{x}} = \mathtt{end}$ and $\Anyform{T \subs{U}{\var{x}}}{\bigtOp} = \mmbox{\mathcal{A}}{\falsek}$, and \item $\Anyform{T}{\bigtOp} = \mmbox{\mathcal{A}}{\falsek} = \mmbox{\mathcal{A}}{\falsek} \subs{ \Anyform{U}{\bigtOp} }{\var{x}}$. \end{itemize} \item If $T = \var{x}$, then \begin{itemize} \item $T \subs{U}{\var{x}} = U$, hence $\Anyform{T \subs{U}{\var{x}}}{\bigtOp} = \Anyform{U}{\bigtOp}$, and \item $\Anyform{T}{\bigtOp} = \var{x}$, hence $\Anyform{T}{\bigtOp} \subs{ \Anyform{U}{\bigtOp} }{\var{x}} = \Anyform{U}{\bigtOp}$. \end{itemize} \item If $T = \var{y} ( \neq \var{x})$, then \begin{itemize} \item $\Anyform{\var{y} \subs{U}{\var{x}}}{\bigtOp} = \Anyform{\var{y}}{\bigtOp} = \var{y}$, and \item $\Anyform{\var{y}}{\bigtOp} \subs{ \Anyform{U}{\bigtOp} }{\var{x}} = \var{y} \subs{ \Anyform{U}{\bigtOp} }{\var{x}} = \var{y}$.
\end{itemize} \item If $T = \choice$, then \begin{align*} \Anyform{T \subs{U}{\var{x}}}{\bigtOp} & = \Anyform{ \choiceSetNoIdx{i}{I}{ T_i \subs{U}{\var{x}} }}{\bigtOp} \\ & = \bigwedge_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\Anyform{T_i \subs{U}{\var{x}} }{\bigtOp}} \\ \text{\tiny\it (I.H.)} \quad & = \bigwedge_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{ \left(\Anyform{T_i }{\bigtOp} \subs{ \Anyform{U}{\bigtOp}}{\var{x}} \right)} \\ & = \left( \bigwedge_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\Anyform{T_i }{\bigtOp}} \right) \subs{ \Anyform{U}{\bigtOp}}{\var{x}} \\ &= \Anyform{T}{\bigtOp} \subs{ \Anyform{U}{\bigtOp} }{\var{x}} \end{align*} \item If $T = \cochoice$, then \begin{align*} \Anyform{T \subs{U}{\var{x}}}{\bigtOp} & = \Anyform{ \cochoiceSetNoIdx{i}{I}{ T_i \subs{U}{\var{x}} }}{\bigtOp} \\ & = \bigwedge_{i \in I} \mmbox{\opfun{\bigtOp}{a_i}}{\Anyform{ T_i \subs{U}{\var{x}} }{\bigtOp}} \, \land \, \\ & \qquad \bigvee_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\truek} \, \land \, \mmbox{ \compset{ \{ \opfun{\bigtOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \\ \text{\tiny\it (I.H.)} \quad & = \bigwedge_{i \in I} \mmbox{\opfun{\bigtOp}{a_i}}{ \left( \Anyform{ T_i }{\bigtOp} \subs{\Anyform{U}{\bigtOp}}{\var{x}} \right) } \, \land \, \\ & \qquad \bigvee_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\truek} \, \land \, \mmbox{ \compset{ \{ \opfun{\bigtOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \\ & = \left( \bigwedge_{i \in I} \mmbox{\opfun{\bigtOp}{a_i}}{ \Anyform{ T_i }{\bigtOp} } \, \land \, \right. \\ & \qquad \left. 
\bigvee_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\truek} \, \land \, \mmbox{ \compset{ \{ \opfun{\bigtOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \right) \subs{\Anyform{U}{\bigtOp}}{\var{x}} \\ & = \Anyform{T}{\bigtOp} \subs{ \Anyform{U}{\bigtOp} }{\var{x}} \end{align*} \item If $ T = \rec{\var{y}} T'$ ($\var{x} \neq \var{y}$), we have \begin{align*} \Anyform{ \rec{\var{y}} T' \subs{U}{\var{x}}}{\bigtOp} & = \mmnu{\var{y}} \Anyform{T' \subs{U}{\var{x}} }{\bigtOp} \\ \text{\tiny\it (I.H.)} \quad & = \mmnu{\var{y}} \Anyform{T'}{\bigtOp} \subs{\Anyform{U}{\bigtOp}}{\var{x}} \\ & = \Anyform{T}{\bigtOp} \subs{ \Anyform{U}{\bigtOp} }{\var{x}} \end{align*} \end{enumerate} \end{proof} \subsection{Extensions and approximations} The proofs in this section follow closely the proof techniques in~\cite{steffen89}. \begin{mydef}[Extended subtyping] Let $T, U \in \mathcal{T}$, $\phi \in \mathcal{F}$, and $\vec{\var{x}} = (\var{x}_1 , \ldots , \var{x}_n)$ be a vector containing all the free variables in $T$, $U$, or $\phi$. We define the \emph{extended subtyping} $\subtype_e$ and the \emph{extended satisfaction relation}, $\models_e$, by \begin{enumerate} \item $T \subtype_e U \iff \forall \vec{V} \in \mathcal{T}^n \, : \, T\subs{\vec{V}}{\vec{\var{x}}} \subtype U\subs{\vec{V}}{\vec{\var{x}}} $ \item $ T \models_e \phi \iff \forall \vec{V} \in \mathcal{T}^n \forall \vec{\psi} \in \mathcal{F}^n \, : \, \vec{V} \models \vec{\psi} \implies T\subs{\vec{V}}{\vec{\var{x}}} \models \phi \subs{\vec{\psi}}{\vec{\var{x}}} $ \end{enumerate} where $ \vec{V} \models \vec{\psi}$ is understood component wise. \end{mydef} \begin{mydef}[Subtyping approximations]\label{def:subtype-approx} Let $T, U \in \mathcal{T}$ and $\vec{\var{x}} = (\var{x}_1 , \ldots , \var{x}_n)$ be a vector containing all the free variables in $T$ or $U$. 
The extended $k$-limited subtyping, $\subtype_{e,k}$, is defined inductively on $k$ as follows: $T \subtype_{e,0} U$ always holds; if $k \geq 1$, then $T \subtype_{e,k} U$ holds iff for all $\vec{V} \in \mathcal{T}^n$, $ T\subs{\vec{V}}{\vec{\var{x}}} \subtype_{e,k} U\subs{\vec{V}}{\vec{\var{x}}} $ can be derived from the following rules: \[ \begin{array}{c} \inference{S-out} { I \subseteq J & \forall i \in I \, : \, T_i \absub{\bigOp}_{e,k-1} U_i } { \choice \absub{\bigOp}_{e,k} \choiceSet{j}{J}{U} } \qquad \inference{S-in} { J \subseteq I & \forall j \in J \, : \, T_j \absub{\bigOp}_{e,k-1} U_j } { \cochoice \absub{\bigOp}_{e,k} \cochoiceSet{j}{J}{U} } \\[1pc] \inference{S-end} {} {\mathtt{end} \absub{\bigOp}_{e,k} \mathtt{end}} \end{array} \] Recall that we are assuming an equi-recursive view of types. \end{mydef} \begin{restatable}{lemma}{lemextsubtype} \label{lem:ext-subtype} $ T \absub{\bigOp}_e U \iff \forall k \, : \, T \absub{\bigOp}_{e,k} U$ \end{restatable} \begin{proof} The ($\Rightarrow$) direction is straightforward, while the converse follows from the fact that the session types we consider have only a finite number of states. \end{proof} \begin{mydef}[Semantics approximations] Let $T \in \mathcal{T}$ and $\vec{\var{x}} = (\var{x}_1 , \ldots , \var{x}_n)$ be a vector containing all the free variables in $T$. The extended $k$-limited satisfaction relation $\models_{e,k}$ is defined inductively on $k$ as follows: $T \models_{e,0} \phi$ always holds; if $k \geq 1$, then $\models_{e,k}$ is given by: \[ \begin{array}{l@{\quad \mathit{iff} \quad}l} \multicolumn{1}{l}{T \models_{e,k} \truek} \\ T \models_{e,k} \phi_1 \land \phi_2 & T \models_{e,k} \phi_1 \text{ and } T \models_{e,k} \phi_2 \\ T \models_{e,k} \phi_1 \lor \phi_2 & T \models_{e,k} \phi_1 \text{ or } T \models_{e,k} \phi_2 \\ T \models_{e,k} \mmbox{{\Op \! a}}{\phi} & \forall \vec{V} \in \mathcal{T}^n \; \forall T' \, : \, \text{if } T\subs{\vec{V}}{\vec{\var{x}}} \semarrow{{\Op \! a}} T' \text{ then } T' \models_{e,k-1} \phi \\ T \models_{e,k} \mmdiamond{{\Op \! a}}{\phi} & \forall \vec{V} \in \mathcal{T}^n \; \exists T' \, : \, T\subs{\vec{V}}{\vec{\var{x}}} \semarrow{{\Op \! a}} T' \text{ and } T' \models_{e,k-1} \phi \\ T \models_{e,k} \mmnu{\var{x}} \phi & \forall n \, : \, T \models_{e,k} \approxi{ \mmnu{\var{x}} \phi }{n} \ensuremath{$\qedhere$} \end{array} \] \end{mydef} \begin{restatable}{lemma}{lemextmodels} \label{lem:ext-models} $ T \models_e \phi \iff \forall k \geq 0 \, : \, T \models_{e,k} \phi$ \end{restatable} \begin{proof} The $(\Rightarrow)$ direction is straightforward, while the $(\Leftarrow)$ direction follows from the fact that a session type induces a finite LTS. \end{proof} \begin{restatable}[Fixpoint properties]{lemma}{lemfixpointprop} \label{lem:fix-point-prop} Let $T \in \mathcal{T}$ and $\phi \in \mathcal{F}$; then we have: \begin{enumerate} \item \label{enum:fix-mu} $T \models_{e,k} \mmnu{\var{x}} \phi \iff T \models_{e,k} \phi \subs{\mmnu{\var{x}} \phi}{\var{x}}$ \item \label{enum:fix-rec} $\rec{\var{x}} T \models_{e,k} \phi \iff T\subs{\rec{\var{x}}T}{\var{x}} \models_{e,k} \phi$ \item \label{enum:fix-tr} $ \rec{\var{x}} T \; \absub{\bigOp}_{e,k} \; T \subs{\rec{\var{x}} T}{\var{x}} \; \absub{\bigOp}_{e,k} \; \rec{\var{x}} T $ \end{enumerate} \end{restatable} \begin{proof} The first property is a direct consequence of the definition of $\models_{e,k}$, while the last two properties follow from the equi-recursive view of types. \end{proof} \subsection{Main results} \thmmaintheorem* \begin{proof}\label{proof:thmmaintheorem} Direct consequence of Lemma~\ref{lem:main-lemma}.
\end{proof} \begin{restatable}[Main lemma]{lemma}{lemmainlemma} \label{lem:main-lemma} $\forall T, U \in \mathcal{T} \, : \, T \absub{\bigOp}_{e} U \iff U \models_e \Anyform{T}{\bigtOp}$ \end{restatable} \begin{proof} According to Lemmas~\ref{lem:ext-subtype} and~\ref{lem:ext-models}, it is enough to show that \begin{equation}\label{eq:main-equi} \forall k \geq 0 \, : \, \forall U, T \in \mathcal{T} \, : \, T \absub{\bigOp}_{e,k} U \; \iff \; U \models_{e,k} \Anyform{T}{\bigtOp} \end{equation} We show this by induction on $k$. If $k = 0$, the result holds trivially; let us show that it also holds for $k \geq 1$. We distinguish four cases according to the structure of $T$. \begin{enumerate} \item If $T = \var{x}$, then we must have $U = \var{x}$, by definition of $\absub{\bigOp}_e$ and $\models_e$. \item If $T = \rec{\var{x}} T'$, then by Lemma~\ref{lem:fix-point-prop}, we have \begin{enumerate} \item $ U \models_{e,k} \Anyform{T}{\bigtOp} \iff U \models_{e,k} \Anyform{T'}{\bigtOp} \subs{\Anyform{T}{\bigtOp}}{\var{x}} $ \item $ T \; \absub{\bigtOp}_{e,k} \; T' \subs{T}{\var{x}} \; \absub{\bigtOp}_{e,k} \; T $ \end{enumerate} Applying Lemma~\ref{lem:compo}, it is enough to show that: \[ \forall T, U \in \mathcal{T} \, : \, T' \subs{\rec{\var{x}} T'}{\var{x}} \absub{\bigtOp}_{e,k} U \; \iff \; U \models_{e,k} \Anyform{T' \subs{\rec{\var{x}} T'}{\var{x}}}{\bigtOp} \] Hence, since we have assumed that the types are guarded, we only have to deal with the cases where $T = \choice$, $T = \cochoice$, and $T = \mathtt{end}$. On the other hand, considering both sides of the equivalence~\eqref{eq:main-equi}, we notice that $U$ cannot be a variable.
Thus, let us assume that $U = \rec{\var{x}} U'$, by Lemma~\ref{lem:fix-point-prop}, we have \begin{enumerate} \item $ U \models_{e,k} \Anyform{T}{\bigtOp} \iff U' \subs{U}{\var{x}} \models_{e,k} \Anyform{T}{\bigtOp} $ \item $ U \; \absub{\bigtOp}_{e,k} \; U' \subs{U}{\var{x}} \; \absub{\bigtOp}_{e,k} \; U $ \end{enumerate} Hence, applying Lemma~\ref{lem:compo} again, this case reduces to the cases where $U$ is of the form: $\choiceSet{j}{J}{U}$, $\cochoiceSet{j}{J}{U}$, or $\mathtt{end}$. \item $T = \mathtt{end}$ \begin{itemize} \item $(\Rightarrow)$ Assume $\mathtt{end} = T \absub{\bigtOp}_{e,k} U$, then by Definition~\ref{def:subtype-approx}, we have $U = \mathtt{end}$. By Definition~\ref{def:char-formula}, we have $\Anyform{\mathtt{end}}{\bigtOp} = \mmbox{\mathcal{A}}{\falsek}$, and we have $\mathtt{end} \models_{e,k} \mmbox{\mathcal{A}}{\falsek}$ since $ U = \mathtt{end} \!\!\nrightarrow$. \item $(\Leftarrow)$ Assume $U \models_{e,k} \Anyform{\mathtt{end}}{\bigtOp}$. By Definition~\ref{def:char-formula}, we have $U \models_{e,k} \mmbox{\mathcal{A}}{\falsek}$, which holds iff $U \!\!\nrightarrow$, hence we must have $U = \mathtt{end}$. Finally, by Definition~\ref{def:subtype-approx}, we have $\mathtt{end} \absub{\bigOp}_{e,k} \mathtt{end}$. \end{itemize} \item $T = \choice$ \begin{itemize} \item $(\Rightarrow)$ Assume $\choice \absub{\bigtOp}_{e,k} U$. By Definition~\ref{def:subtype-approx}, $ U = \choiceSet{j}{J}{U}$ with $I \subseteq J$ (note that $\emptyset \neq I$ by assumption) and $\forall i \in I \, : \, T_i \absub{\bigtOp}_{e,k-1} U_i$. Hence, $\forall i \in I \, : \, U \semarrow{\anydir{a_i}} U_i$, and by induction hypothesis, we have $U_i \models_{e,k-1} \Anyform{T_i}{\bigtOp}$, for all $i \in I$. By Definition~\ref{def:char-formula}, we have $ \Anyform{T}{\bigtOp} = \bigwedge_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\Anyform{T_i}{\bigtOp}} $. 
Thus we have to show that for all $i \in I$, $U \semarrow{\opfun{\bigtOp}{a_i}} U_i$ and $U_i \models_{e,k-1} \Anyform{T_i}{\bigtOp}$, which follows from the above.
\item $(\Leftarrow)$ Assume $U \models_{e,k} \Anyform{\choice}{\bigtOp}$. From Definition~\ref{def:char-formula}, we have
\[
\Anyform{T}{\bigtOp} = \bigwedge_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\Anyform{T_i}{\bigtOp}}
\]
Hence, for all $i \in I$, $U \semarrow{\opfun{\bigtOp}{a_i}} U_i$ and $U_i \models_{e,k-1} \Anyform{T_i}{\bigtOp}$. Hence, we must have $U = \choiceSet{j}{J}{U}$ with $I \subseteq J$, and by the induction hypothesis, this implies that $T_i \absub{\bigtOp}_{e,k-1} U_i$ for all $i \in I$.
\end{itemize}
\item $T = \cochoice$
\begin{itemize}
\item $(\Rightarrow)$ Assume $\cochoice \absub{\bigtOp}_{e,k} U$. By Definition~\ref{def:subtype-approx}, $ U = \cochoiceSet{j}{J}{U}$, with $J \subseteq I$ and $\forall j \in J \, : \, T_j \absub{\bigtOp}_{e,k-1} U_j$. Hence, by the induction hypothesis, we have $U_j \models_{e,k-1} \Anyform{T_j}{\bigtOp}$, for all $j \in J$. By Definition~\ref{def:char-formula}, we have
\begin{equation}\label{eq:formula-cochoice-1}
{\Anyform{T}{\bigtOp}} \; = \; \bigwedge_{i \in I} \mmbox{\opfun{\bigtOp}{a_i}}{\Anyform{T_i}{\bigtOp}} \; \land \; \bigvee_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\truek} \; \land \; \mmbox{ \compset{ \{ \opfun{\bigtOp}{a_i} \, \mid \, i \in I\} }}{\falsek}
\end{equation}
We must show that $U \models_{e,k} {\Anyform{T}{\bigtOp}} $. Since $J \subseteq I$, we have that $\forall i \in I \, : \, T \semarrow{\opfun{\bigtOp}{a_i}} T_i \implies U \semarrow{\opfun{\bigtOp}{a_i}} U_i$, hence the first conjunct of~\eqref{eq:formula-cochoice-1} holds (using the induction hypothesis, cf.\ above). The second conjunct of~\eqref{eq:formula-cochoice-1} holds by the assumption that $ \emptyset \neq J$.
Finally, the third conjunct of~\eqref{eq:formula-cochoice-1} is false only if $U \semarrow{\opfun{\bigtOp}{a_n}}$ with $n \notin I$, which contradicts $J \subseteq I$. \item $(\Leftarrow)$ Assume $U \models_{e,k} \Anyform{\cochoice}{\bigtOp}$. From Definition~\ref{def:char-formula}, we have \begin{equation}\label{eq:formula-cochoice-2} {\Anyform{T}{\bigtOp}} \; = \; \bigwedge_{i \in I} \mmbox{\opfun{\bigtOp}{a_i}}{\Anyform{T_i}{\bigtOp}} \; \land \; \bigvee_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\truek} \; \land \; \mmbox{ \compset{ \{ \opfun{\bigtOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \end{equation} Hence, we must have $U = \cochoiceSet{j}{J}{U}$. It follows straightforwardly that $\emptyset \neq J \subseteq I$. Finally, the fact that for all $j \in J \, : \, T_j \absub{\bigtOp}_{e,k-1} U_j$, follows from the induction hypothesis. \qedhere \end{itemize} \end{enumerate} \end{proof} \subsection{Duality and safety in session types}\label{app:safety} \thmcharformduality* \begin{proof} \label{proof:thmcharformduality} By straightforward induction on the structure of $T$. \begin{enumerate} \item The result follows trivially if $T = \mathtt{end}$ or $T = \var{x}$. \item If $T = \rec{\var{x}} T'$, then we have $\dual{\Anyform{\rec{\var{x}} T'}{\bigtOp}} = \mmnu{\var{x}} \dual{ \Anyform{T'}{\bigtOp}}$, and $ \Anyform{\dual{T}}{\dual{\bigtOp}} = \mmnu{\var{x}} \Anyform{\dual{T'}}{\dual{\bigtOp}} $. The result follows by induction hypothesis. 
\item If $T = \choice$, then we have \begin{align*} \dual{\Anyform{T}{\bigtOp}} & = \dual { \bigwedge_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\Anyform{T_i}{\bigtOp}} } \\ & = \bigwedge_{i \in I} \mmdiamond{\opfun{\bigcoOp}{a_i}}{ \dual { \Anyform{T_i}{\bigtOp} } } \\ \text{\tiny\it (I.H.)} \quad & = \bigwedge_{i \in I} \mmdiamond{\opfun{\bigcoOp}{a_i}}{ \Anyform{\dual{T_i}}{\dual{\bigtOp}} } = \Anyform{\dual{T}}{\dual{\bigtOp}} \end{align*} \item If $T = \cochoice$, then we have \begin{align*} \dual{\Anyform{T}{\bigtOp}} & = \dual{ \bigwedge_{i \in I} \mmbox{\opfun{\bigtOp}{a_i}}{ { \Anyform{T_i}{\bigtOp}} } \; \land \; \bigvee_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\truek} \; \land \; \mmbox{ \compset{ \{ \opfun{\bigtOp}{a_i} \, \mid \, i \in I\} }}{\falsek} } \\ & = \bigwedge_{i \in I} \mmbox{\opfun{\bigcoOp}{a_i}}{ \dual{ \Anyform{T_i}{\bigtOp}} } \; \land \; \bigvee_{i \in I} \mmdiamond{\opfun{\bigcoOp}{a_i}}{\truek} \; \land \; \mmbox{ \compset{ \{ \opfun{\bigcoOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \\ \text{\tiny\it (I.H.)} \quad & = \bigwedge_{i \in I} \mmbox{\opfun{\bigcoOp}{a_i}}{ \Anyform{ \dual{T_i}}{ \dual{\bigtOp}} } \; \land \; \bigvee_{i \in I} \mmdiamond{\opfun{\bigcoOp}{a_i}}{\truek} \; \land \; \mmbox{ \compset{ \{ \opfun{\bigcoOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \\ & = \Anyform{\dual{T}}{\dual{\bigtOp}} \end{align*} \end{enumerate} \end{proof} \thmsafety* \begin{proof} $(\Longleftarrow)$ We prove that if $T \subtype \dual{U}$ then $T \spar U$ is safe by coinduction on the derivation of $T \subtype \dual{U}$ (recall that $\subtype$ stands for $\absub{\,\inchoicetop}$). \\[1mm] {\bf Case [S-end]} Obvious since $T =\dual{U}=\mathtt{end}$ and $T \spar U \not \semarrow{}$. \\[1mm] {\bf Case [S-$\bigtOp$]} Suppose $T = \inchoice$. Then $\dual{U}=\inchoiceSet{j}{J}{\dual{U}}$ such that $I \subseteq J$ and $T_i \subtype \dual{U}_i$ for all $i \in I$. 
For all $a_i$ such that $i \in I$, $T \semarrow{!a_i} T_i$ implies $U \semarrow{?a_i} U_i$. Hence by [S-COM], we have $T \spar U\semarrow{} T_i \spar U_i$. Then by coinduction hypothesis, $T_i \spar U_i$ is safe. \\[1mm] {\bf Case [S-$\dual{\bigtOp}$]} Similar to the above case.\\[1mm] $(\Longrightarrow)$ We prove $(\neg (T \subtype \dual{U}) \land \neg (\dual{T} \subtype U))$ implies $T \spar U$ has an error. Since the error rule coincides with the negation rules of subtyping in~\cite[Table 7]{CDY2014}, we conclude this direction. \end{proof} \newcommand{\mathtt{bot}}{\mathtt{bot}} \newcommand{\mathtt{top}}{\mathtt{top}} \newcommand{\tsemarrow}[1]{\xhookrightarrow{\, #1 \,}} \newcommand{\delta}{\delta} \newcommand{\lamsub}[2]{{{\Lambda}}(#1,#2)} \newcommand{\mathcal{T}_R}{\mathcal{T}_R} \newcommand{\trec}[1]{\mathtt{rec}\, {#1} . } \newcommand{\tmmnu}[1]{\nu {#1} .\,} \section{Appendix: Recursive types for the $\lambda$-calculus} \label{sec:lambda} \subsection{Recursive types and subtyping} We consider recursive types for the $\lambda$-calculus below: \[ t \coloneqq \mathtt{top} \;\mid\; \mathtt{bot} \;\mid\; t_0 \rightarrow t_1 \;\mid\; \trec{v} t \;\mid\; v \] Let $\mathcal{T}_R$ be the set of all closed recursive types. A type $t$ induces an LTS according to the rules below: \[ \begin{array}{c@{\qquad\qquad}c} \inference{top} {} {\mathtt{top} \tsemarrow{\mathtt{top}} \mathtt{top} } & \inference{bot} {} {\mathtt{bot} \tsemarrow{\mathtt{bot}} \mathtt{top}} \\[0.5cm] \inference{arrow} {i \in \{0,1\}} {t_0 \rightarrow t_1 \; \tsemarrow{i} \; t_i } & \inference {rec} {t \subs{\trec{v} t}{v} \tsemarrow{a} t'} {\trec{v} t \tsemarrow{a} t'} \end{array} \] where we let $a$ range over $\{ 0, 1, \mathtt{bot}, \mathtt{top} \}$. 
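As a small worked example, derived directly from the rules above: taking $t = \trec{v} (\mathtt{bot} \rightarrow v)$, rule [rec] gives $t$ exactly the transitions of its unfolding $\mathtt{bot} \rightarrow t$, namely
\[
t \tsemarrow{0} \mathtt{bot} \qquad \text{and} \qquad t \tsemarrow{1} t
\]
while, by rule [bot], $\mathtt{bot} \tsemarrow{\mathtt{bot}} \mathtt{top}$.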
\begin{definition}[Subtyping for recursive types] $\subtype \subseteq \mathcal{T}_R \times \mathcal{T}_R$ is the \emph{largest} relation that contains the rules: \[ \coinference{S-bot} {t \in \mathcal{T}_R} {\mathtt{bot} \subtype t} \qquad \coinference{S-top} {t \in \mathcal{T}_R} {t \subtype \mathtt{top}} \qquad \coinference{S-arrow} { t'_0 \subtype t_0 & t_1 \subtype t'_1 } {t_0 \rightarrow t_1 \subtype t'_0 \rightarrow t'_1} \] Recall that we are assuming an equi-recursive view of types. The double line in the rules indicates that the rules should be interpreted \emph{coinductively}. \end{definition} \subsection{Characteristic formulae for recursive types} We assume the same fragment of the modal $\mu$-calculus as in Section~\ref{sub:mucal} but for ($i$) omitting the direction $\Op$ on labels, i.e., we consider modalities: $\mmbox{a}{\phi}$ and $\mmdiamond{a}{\phi}$; and ($ii$) using $v$ to range over recursion variables. Let $\delta \in \{ \mathtt{top} , \mathtt{bot} \}$, and $\dual{\mathtt{bot}} = \mathtt{top}$, $\dual{\mathtt{top}} = \mathtt{bot}$. \[ \lamsub{t}{\delta} \defi \begin{cases} \mmdiamond{\delta}\truek & \text{if } t = \delta \\ \truek & \text{if } t = \dual{\delta} \\ \mmdiamond{0} \, \lamsub{t_0}{ \dual{\delta} } \; \land \; \mmdiamond{1} \, \lamsub{t_1}{\delta} & \text{if } t = t_0 \rightarrow t_1 \\ \tmmnu{v} \lamsub{t'}{\delta} & \text{if } t = \trec{v} t' \\ v & \text{if } t = v \end{cases} \] \begin{theorem} The following holds: \begin{itemize} \item $t \subtype t' \iff t' \models \lamsub{t}{\, \mathtt{top}}$ \item $t \subtype t' \iff t \; \models \lamsub{t'}{\, \mathtt{bot}}$ \end{itemize} \end{theorem} \begin{proof} We show only the $(\Leftarrow)$ direction here. \begin{enumerate} \item \label{en:lambda-top} We show $t \subtype t' \Leftarrow t' \models \lamsub{t}{\mathtt{top}}$ by induction on $t$. \begin{itemize} \item If $t = \mathtt{top}$, then $\lamsub{t}{\mathtt{top}} = \mmdiamond{\mathtt{top}}\truek$, hence $t' = \mathtt{top}$. 
\item If $t = \mathtt{bot}$, then $\lamsub{t}{\mathtt{top}} = \truek$ hence $t'$ can be any type, as expected. \item If $t = t_0 \rightarrow t_1$, then \[ \lamsub{t}{\mathtt{top}} = \mmdiamond{0} \, \lamsub{t_0}{ \mathtt{bot} } \; \land \; \mmdiamond{1} \, \lamsub{t_1}{\mathtt{top}} \] hence we must have $t' = t'_0 \rightarrow t'_1$ with $t'_0 \models \lamsub{t_0}{ \mathtt{bot} }$ (hence $t'_0 \subtype t_0$, by IH, see below) and $t'_1 \models \lamsub{t_1}{\mathtt{top}}$ (hence $t_1 \subtype t'_1$ by IH). \end{itemize} \item \label{en:lambda-bot} We show $t \subtype t' \Leftarrow t \models \lamsub{t'}{\mathtt{bot}}$ by induction on $t'$. \begin{itemize} \item If $t' = \mathtt{bot}$, then $\lamsub{t'}{\mathtt{bot}} = \mmdiamond{\mathtt{bot}}\truek$ and $t = \mathtt{bot}$. \item if $t' = \mathtt{top}$, then $\lamsub{t'}{\mathtt{bot}} = \truek$ and $t$ can be any type, as expected. \item If $t' = t'_0 \rightarrow t'_1$, then \[ \lamsub{t'}{\mathtt{bot}} = \mmdiamond{0} \, \lamsub{t'_0}{ \mathtt{top} } \; \land \; \mmdiamond{1} \, \lamsub{t'_1}{\mathtt{bot}} \] hence we must have $t = t_0 \rightarrow t_1$ with $t_0 \models \lamsub{t'_0}{ \mathtt{top} }$ (hence $t'_0 \subtype t_0$, by IH, see above) and $t_1 \models \lamsub{t'_1}{\mathtt{bot}}$ (hence $t_1 \subtype t'_1$ by IH). \end{itemize} \end{enumerate} The other direction is similar to the above, while the recursive step is similar to the proof of Theorem~\ref{thm:main-theorem}. \end{proof} \end{document}
Novel Prosopis juliflora leaf ethanolic extract as natural antimicrobial agent against food spoiling microorganisms

Iman Saleh & Mohammed H. Abu-Dieyeh

Scientific Reports volume 11, Article number: 7871 (2021)

Fresh produce spoilage is a worldwide concern that accompanies the global increase in food demand. The adverse human health and environmental effects of commercial spoilage control agents are a major public concern. In this study, Prosopis juliflora leaf and fruit extracts had their antimicrobial activities evaluated against the growth of selected bacteria and yeast, and against mycelial growth and conidial germination of selected mycotoxin-producing fungi. P. juliflora water-soluble leaf ethanolic (PJ-WS-LE) extract, with its novel extraction method, showed the strongest antibacterial activity.
Antimicrobial tests showed total inhibition of Botrytis cinerea, Alternaria alternata, Bacillus subtilis, Staphylococcus aureus and Candida albicans, with MICs ranging between 0.125 and 1 mg/ml. The percent inhibition of mycelial growth (PIMG) of the extract was also determined against seven other fungal strains, with the highest value against Geotrichum candidum (66.2%). Even the least affected fungal strains showed alterations in their hyphae and spores after exposure to PJ-WS-LE extract when observed using a scanning electron microscope (SEM); alterations include exfoliated flakes, pores, vacuolation and applanation. A small-scale controlled fruit bioassay showed high efficacy of the extract in protecting inoculated cherry tomato samples from B. cinerea and A. alternata infections. In conclusion, PJ-WS-LE extract is a feasible, natural antifungal agent that can replace common anti-spoiling chemicals.

Microbial infections of fruits and vegetables are caused by fungal and bacterial pathogens (spoilage microorganisms) that can be harmful to human and animal health and can lead to significant financial losses for both manufacturers and consumers1. Food sustainability is a major concern worldwide, as the growing world population has led to a tremendous increase in food demand2. Chemicals such as antifungal and antimicrobial agents are used for spoilage control of fruits and vegetables, and while these chemical agents help increase the preservation of fresh produce, they also pose a potential threat to human health and cause environmental damage. As a result, various countries around the world are setting a maximum residue limit (MRL) of chemicals on the skin of their exported fruits and vegetables3. In addition, spoilage microorganisms are becoming resistant to the commonly used antimicrobial agents4. This pushes scientists to further explore natural products to replace chemical applications in the field and at post-harvest levels5.
Plant-based raw materials are susceptible to microbial spoilage by fungi and bacteria due to their high nutrient density. Fruits become more prone to fungal contamination during the ripening process, when they become rich in carbohydrates and less acidic, creating a highly suitable environment for food spoilage fungi to thrive6. Advanced fungal contamination causes visible symptoms on the plants, such as detectable changes in color and/or texture, hyphal growth and others. Spoiling fungi mainly affect citrus fruits, berries, pome fruits, stone fruits, and tropical and solanaceous fruits7. The most common postharvest diseases are caused by fungal species belonging to the following genera: Penicillium, Botrytis, Monilinia, Rhizopus, Alternaria, Aspergillus, Fusarium, Geotrichum, Gloeosporium, and Mucor. Species of Aspergillus, Penicillium, Fusarium and other genera are mycotoxin producers8. Infection of fruits and vegetables by mycotoxin-producing fungi poses a major health risk to consumers, as mycotoxins accumulating in the body can be mutagenic and carcinogenic and can damage different organs9. Thorough research on spoilage bio-controllers has revealed many promising results. Bacteriocins, phytochemicals from plant extracts, natural oils and other biological products have been screened for their antimicrobial properties, yet very few of these tested agents have been patented and made commercially available to replace the chemical agents. Plant extracts are being explored in many studies as natural bio-controllers of food spoilage organisms. Phytochemicals in higher plants are valuable drugs known to cure several diseases10. Numerous studies have recently shown the antifungal effect of higher plant extracts, which serve as fungitoxic agents against spore germination and/or mycelial growth11. Among the tested plants, Prosopis species in general, particularly P.
juliflora, have been screened for their antioxidant, anti-inflammatory, antibacterial, antifungal, and anti-tumor effects12,13. Some of these studies examined extracts of different parts of the plant while others analyzed purified phytochemicals, yet the studies are not interconnected. Chemical analysis of P. juliflora leaf ethanolic extract has shown antioxidant activity, with the presence of alkaloids, tannins, flavonoids, coumarins and anthraquinone glycoside14. Aqueous extracts of the same leaves have also been evaluated and show a similar composition, with higher antioxidant activity in some studies15. A successful antifungal product should control pre-existing infections and leave residues that prevent subsequent infections and delay sporulation of the existing spores, thereby reducing economic losses9,16,17. In the case of fungi, applying an affordable, natural, and non-hazardous formulation would prevent visible spoilage as well as inhibit the growth of a fungus before the formation of dangerous mycotoxins, providing fresh produce that is safer for consumption. This research explores the effect of a novel leaf extract of P. juliflora as a natural, eco-friendly, and effective antifungal agent to control microbial food spoilage. Coming up with a naturally based anti-spoilage formulation will boost the economy of many countries, enhance food security measures, and solve a worldwide agricultural problem without posing a threat to the environment.

Molecular identification of food spoiling fungal isolates

DNA extraction followed by PCR using ITS primers and PCR product sequencing helped in identifying the fungal isolates at the species level. NCBI-BLAST identification results are shown in Table 1. Table 1 PCR product BLAST results indicating the species of the fungal isolates used.

In-vitro antimicrobial effect of PJ-WS-LE extract

Agar diffusion method

Ethanolic and aqueous extracts of P.
juliflora leaves and fruits were screened for their antifungal and antibacterial activity. Freshly blended leaves and fruits were also tested for their effectiveness. Preliminary screening showed that the ethanolic extract of the leaves had the best antimicrobial effect, which led to its further testing throughout this study. The leaf ethanolic extract was dissolved in two solvents: (a) 4% dimethyl sulfoxide (DMSO) and (b) sterile distilled water; both solutions showed high antifungal activity. However, a negative control batch of PDA plates with 4% DMSO confirmed the toxic effect of the solvent discussed in the literature by showing high antifungal activity (Fig. 1). Among the five tested fungi, DMSO alone showed a PIMG higher than 50% against A. alternata and C. gloeosporioides. Therefore, only the PJ-WS-LE solution was used for the rest of the investigations, since dissolving in distilled water ensures that the antimicrobial activity is related to the active phytochemicals only. PDA plates with 20 mg/ml of PJ-WS-LE extract showed 100% inhibition of the growth of B. cinerea and A. alternata, which are two major food spoiling fungi. High PIMG values were also shown against G. candidum (66.2%), C. gloeosporioides (60.3%) and F. oxysporum (59.2%). A t-test showed a significant difference between the mycelium diameters of the controls and the experimental plates for all fungal strains (P < 0.05) except for A. niger (P = 0.279). Pure PDA plates were used as a negative control, and PDA plates with clotrimazole were used as positive control plates with zero fungal growth. All trials were conducted with four replications for each fungal species; the average mycelium diameter ± standard error (SE) of each fungus in both experimental and control trials is shown in Fig. 2. Percent inhibition of mycelial growth (PIMG) of 4% DMSO PDA plates against five fungal strains incubated at 25 °C for 5 days.
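The PIMG values reported above can be reproduced from raw plate measurements. A minimal sketch in Python, assuming the standard formula PIMG = 100 × (C − T)/C for control and treated colony diameters (the diameters below are hypothetical, for illustration only, not the study's raw data):

```python
def pimg(control_mm, treated_mm):
    """Percent inhibition of mycelial growth: 100 * (C - T) / C,
    where C and T are the colony diameters (mm) on the control
    and extract-treated plates, respectively."""
    if control_mm <= 0:
        raise ValueError("control diameter must be positive")
    return 100.0 * (control_mm - treated_mm) / control_mm

# Hypothetical measurements, not the study's raw data:
print(pimg(80.0, 27.0))  # partial inhibition: 66.25
print(pimg(80.0, 0.0))   # total inhibition: 100.0
```

A PIMG of 100% thus corresponds to no mycelial growth at all on the treated plate.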
Average mycelium diameter (mm) ± SE (N = 4) of seven fungal species in the presence and absence of PJ-WS-LE extract after 5 days of incubation at 25 °C. a,b Treatment columns with different letters have values significantly different, as shown by a t-test of the data for each fungal species.

Pour plate method

P. citrinum was 100% inhibited upon soaking its spores for 1.5 h in 20 mg/ml of the PJ-WS-LE extract followed by pouring them into PDA plates of the same PJ-WS-LE extract concentration. As for P. chrysogenum, the fungal strain was not affected by the extract. The disk diffusion method was used to characterize the antimicrobial effect of the crude extracts on four bacterial isolates: Escherichia coli, Proteus mirabilis, S. aureus, and B. subtilis. PJ-WS-LE extract showed the best efficacy; its various concentrations showed good inhibition of the four tested strains, as summarized in Fig. 3. All bacteria showed a dose-dependent response to the extract, with the largest inhibition zone at 50 mg/ml of PJ-WS-LE extract. Diameter of the inhibition zone (with standard deviation bars, N = 3) of different concentrations of PJ-WS-LE extract against four bacterial strains using the disk diffusion method upon incubation at 37 °C for 24 h. a–e Within each bacterial species, treatment columns with different letters have values significantly different, as shown by Tukey's test at p ≤ 0.05. The antibiogram showed that the bacterial strains tested were multi-drug resistant (Table 2). All disk diffusion experiments were conducted in triplicate. Table 2 Antibiogram of four bacterial strains to ampicillin (AMP), amoxicillin (AMX), bacitracin (B), carbenicillin (CB), and cephalothin (CR) (S: susceptible, R: resistant, I: intermediate).

Anti-yeast effect

The effect of PJ-WS-LE extract against C. albicans was tested using the disk diffusion method.
Dose-dependent inhibition of the yeast strain was very clear, with the highest diameter of inhibition (18.43 mm) at 50 mg/ml of the extract, which is very close to the diameter of the inhibition zone of the positive control (22.5 mm).

Extraction yield

Among the tested extracts, PJ-WS-LE extract was the most effective as an antimicrobial agent; therefore, the extraction yield was calculated each time a new extract was prepared. The average extraction yield was around 11%.

PJ-WS-LE extract stability

The agar diffusion method was conducted using the seven previously tested fungal strains. Results confirmed the stability of the active antifungal compounds of the 6-month-old crude extract preserved at 4 °C both in powder form and in aqueous solution. Figure 4 shows the PIMG of the seven fungal strains using 6-month-old extract compared to freshly prepared PJ-WS-LE extract. A t-test showed no significant difference between the three experimental batches (p > 0.05). Percent inhibition of mycelial growth of seven fungal strains by 20 mg/ml of 6-month-old PJ-WS-LE extract preserved as dry powder and as aqueous solution, compared to fresh extract.

Determination of minimum inhibitory concentration

Table 3 shows the different MICs of the PJ-WS-LE extract against all tested fungal, bacterial, and yeast isolates. Results were read after 72 h of incubation. Spores that germinated at all extract concentrations had their MICs recorded as > 50 mg/ml. Besides the three fungal species whose growth was totally inhibited on PDA plates (B. cinerea, A. alternata, and P. citrinum), C. cladosporioides and G. candidum had their spore germination totally inhibited in the presence of the extract. Interestingly, A. niger had an MIC of 2.5 mg/ml at 48 h, yet spores grew in all wells at 72 h, which means that the extract slowed down the fungal growth. Table 3 Minimum inhibitory concentrations of PJ-WS-LE extract against different microorganisms.
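The MIC read-out reported in Table 3 follows the usual dilution-series logic: the MIC is the lowest tested concentration showing no visible growth, reported as greater than the highest concentration when growth occurs in every well. A generic Python sketch (not the authors' exact protocol; it assumes growth, if any, is confined to the lower concentrations, and the read-out values are hypothetical):

```python
def mic(growth):
    """Return the MIC from a {concentration_mg_per_ml: visible_growth} map.

    Assumes a standard dilution series in which growth, if any,
    occurs only at the lower concentrations.
    """
    inhibitory = [c for c, grew in sorted(growth.items()) if not grew]
    if not inhibitory:
        # Growth in every well: MIC exceeds the highest tested concentration.
        return "> {} mg/ml".format(max(growth))
    return "{} mg/ml".format(min(inhibitory))

# Hypothetical doubling-dilution read-outs:
print(mic({0.125: True, 0.25: True, 0.5: False, 1.0: False}))  # 0.5 mg/ml
print(mic({25.0: True, 50.0: True}))                           # > 50.0 mg/ml
```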
As for bacterial strains, it is worth mentioning that the extract was able to completely kill the gram-positive strains in liquid culture even at very low concentrations, while the gram-negative strains showed cellular activity at all extract concentrations even though they were partially inhibited on agar.

Crude extract effect on fungal spore germination

Spore suspensions exposed to various concentrations of PJ-WS-LE extract for 24 h with shaking at 25 °C were evaluated for germination percentage. Results are summarized in Table 4. Percentages of spore germination at the start of the experiment (before incubation) were also calculated and subtracted from the final percentages of spores germinating in the presence of the leaf extract, to rule out spores that had already germinated on the PDA plates. This explains the negative values for A. alternata and G. candidum. The extract concentration that led to the maximum inhibition of spore germination was 8 mg/ml for all fungal strains that showed total inhibition in the 96-well plate experiment, except for C. cladosporioides. C. gloeosporioides and P. chrysogenum showed dose-dependent spore germination inhibition when exposed to the leaf extract. The most resistant fungal species was F. oxysporum, whose spore germination decreased by only 8.9% in the presence of 8 mg/ml of the extract. Finally, microscopic observation showed that even the germinated spores of the most resistant strains exhibited stress-like symptoms. Table 4 Percentage of germinating spores after 24 h of exposure to different concentrations of PJ-WS-LE extract at 25 °C with shaking (150 rpm).

Mode of action of fungal or bacterial inhibition

When fungal mycelia that had failed to grow in the presence of the plant extract were transferred to fresh PDA plates, the results after 7 days showed average diameters of A. alternata and B. cinerea much lower for recovered disks than for the negative control.
Some of the replicate disks could not re-grow, which led to a high standard error (Fig. 5). Average diameter ± SE of fungal plugs transferred from 20 mg/ml PJ-WS-LE extract plates to clean PDA plates, compared to direct growth of fungal plugs on clean PDA plates. a,b Treatment columns with different letters have values significantly different, as shown by a t-test of the data for each fungal species. When fungal spores and bacteria were incubated in the presence of PJ-WS-LE extract (8 mg/ml) for 48 h and then spread on clean media, results showed that the mode of inhibition of A. alternata, B. cinerea and C. albicans is fungicidal, while the mode of inhibition of G. candidum and C. cladosporioides is fungistatic. As for the effect of the leaf extract on S. aureus and B. subtilis, results showed a bacteriostatic mode of action.

The effect of PJ-WS-LE extract on the microscopic morphology of the studied fungal species' hyphae and spores

SEM was used to evaluate the effect of PJ-WS-LE extract on the hyphae and spores of nine fungal species, especially those whose growth and germination were not completely inhibited by the extract. SEM evaluation showed damage to the external morphology of all tested fungal species. A. niger was not entirely inhibited by the extract, as shown in previous assays. However, the typical smooth surface of the hyphal structure seen in the non-treated control samples of A. niger (Fig. 6a) was not maintained when it was exposed to 8 mg/ml of PJ-WS-LE extract for 24 h. The extract caused the hyphae of this fungal species to lose smoothness; exposed hyphae were more applanate, with exfoliated flakes and small pores in some places (Fig. 6b). At higher magnification (25.000×), A. niger spores showed surface damage and small pores (arrow) (Fig. 7b) that were not seen in the control (Fig. 7a). Control (non-treated) samples of P. citrinum and P.
chrysogenum showed neat filamentous tubular hyphae, while treated samples showed severely fractured hyphal structures with vacuolation that might indicate leakage of essential intracellular components (data not shown). Treated Penicillium spores were shrunken compared to the control (Fig. 7c), with craters of different sizes (arrows) (Fig. 7d). C. cladosporioides is one of the fungi whose growth was inhibited by the leaf ethanolic extract in vitro; this was also clearly indicated by the large holes in the treated spore membranes shown by SEM (Fig. 7f); the non-treated C. cladosporioides sample showed smooth spores (Fig. 7e). Treated B. cinerea mycelium lost its tubular shape (Fig. 6c) and showed degenerative changes including applanation and the formation of exfoliated flakes, while its spores, at higher magnification (10.000×), were totally collapsed (Fig. 6d). Microscopic observation of this fungus supports the total growth inhibition observed in previous experiments. SEM images of the hyphae of control fungi and fungi treated with 8 mg/ml PJ-WS-LE extract. A. niger: non-treated (a) and treated (b) (5.000×). B. cinerea: non-treated (c) and treated (d) (5.000×). G. candidum: non-treated (e) and treated (f) (5.000×). C. gloeosporioides: non-treated (g) and treated (h) (10.000×). A. alternata: non-treated (i) and treated (j) (5.000×). SEM images of the spores of control fungi and fungi treated with 8 mg/ml PJ-WS-LE extract. A. niger: non-treated (a) and treated (b) (25.000×). P. citrinum: non-treated (c) and treated (d) (25.000×). C. cladosporioides: non-treated (e) and treated (f) (20.000×). G. candidum: non-treated (g) and treated (h) (10.000×). F. oxysporum: non-treated (i) and treated (j). C. gloeosporioides: non-treated (k) and treated (l) (10.000×). Untreated samples of G. candidum showed neat tubular hyphae (Fig. 6e) that were distorted and flattened when treated with the extract (Fig. 6f). The unique tubular spores of G. candidum (Fig.
7g) were collapsed when treated with the antifungal extract (Fig. 7h). Spore collapse might indicate loss of intracellular components, as noted earlier, and therefore cell death. The growth of F. oxysporum was not totally inhibited by the crude extract in vitro, yet SEM images show severe damage to both mycelium and spores. Treated F. oxysporum showed various degrees of distorted and fractured mycelium and spores, in addition to vacuolation and pores in the case of treated spores (Fig. 7j) compared to normal (Fig. 7i). The treated C. gloeosporioides samples showed distorted mycelium with unusual surface bulges and applanation in some areas (Fig. 6h), as well as damaged spores with rugged and fractured surfaces with holes (arrow) (Fig. 7l). Figures 6g and 7k show the normal shapes of C. gloeosporioides mycelium and spores for comparison. Finally, A. alternata is among the tested fungi most affected by PJ-WS-LE extract. Treated samples showed collapsed spores with large vacuolation, while their mycelia lost their smoothness (Fig. 6i) and were shriveled and distorted with large vacuolation, which also indicates a loss of intracellular components that leads to death (Fig. 6j).

Effects of PJ-WS-LE extract on grey mold and Alternaria rot development in artificially inoculated cherry tomatoes

None of the negative control samples showed fungal growth within the two-week timeline of the experiment, which indicates the efficacy of the pre-sterilization treatment of the samples. Most of the non-treated control samples showed fungal growth, while only one treated sample out of the 24 samples inoculated with A. alternata showed fungal growth. Similar results were observed in the case of B. cinerea. The infection rate in the B. cinerea control batches was 100%, with a 4.16% infection rate in the samples treated with PJ-WS-LE extract. As for the A. alternata control samples, the infection rate was 90%, while only 4.16% of the treated samples showed infection.
PJ-WS-LE extract showed both curative and preventive effects on both wounded and intact samples, which indicates very promising results for future market use. Figure 8 shows the percentage of infected samples for each treatment batch. It is also worth noting that treated cherry tomato samples did not show any signs of phytotoxicity upon PJ-WS-LE extract application. Fourteen-day-old samples exhibited no change in color or smell, which indicates quality preservation. Percent infection rate of wounded and non-wounded cherry tomato samples inoculated with A. alternata and B. cinerea, without PJ-WS-LE extract treatment and with PJ-WS-LE extract treatment applied to evaluate curative and preventive effects.

Different parts of P. juliflora have been evaluated for their antibacterial activity in various studies, yet those studies had gaps, including the use of organic solvents in extract preparation and the lack of control batches for all tests. Disk diffusion results showed that the diameters of the inhibition zones of the gram-positive bacteria (S. aureus and B. subtilis) around PJ-WS-LE extract were larger than those caused by commonly used antimicrobials, including ampicillin, amoxicillin, bacitracin and carbenicillin. Singh and Verma18 examined the effect of 100 mg/ml of ethanolic extract of the leaves, pods and flowers of P. juliflora against different bacterial strains. The disk diffusion method showed that the inhibition zone diameters of E. coli and S. aureus treated with 100 mg/ml of leaf ethanolic extract were 12.81 ± 0.45 mm and 12.72 ± 0.67 mm, respectively. These numbers are close to our results, in which the inhibition zone diameters of E. coli and S. aureus treated with 50 mg/ml of PJ-WS-LE extract were 9.13 ± 0.20 mm and 12.85 ± 0.01 mm, respectively. Note that half the amount of the extract prepared in this study was as effective as Singh and Verma's extract in the case of S. aureus. Different concentrations of P.
juliflora leaf ethanolic extract were tested for their antimicrobial activity against different bacterial strains including S. epidermidis, S. aureus, Streptococcus spp., Micrococcus luteus, B. subtilis, Salmonella typhimurium, Klebsiella pneumoniae, E. coli, and Pseudomonas spp. in two different studies. Results showed growth inhibition activity. A comparison between the inhibition zone diameters obtained in previous studies and the diameters obtained in our study is shown in Table 5. S. aureus was more strongly inhibited by our extract, while E. coli and B. subtilis showed values similar to those of Sathiya and Muthuchelian's study; Thakur et al. used higher extract concentrations. It is important to note that the novel extraction method used in this study differs from those of the other two studies, in which the leaf extract was dissolved either in ethanol or in DMSO, solvents that may themselves have antimicrobial effects regardless of the phytochemicals extracted19,20.

Table 5 Diameter of inhibition zone (mm) of bacterial growth of P. juliflora leaf ethanolic extract as obtained in three different studies.

A study by Osuru et al.21 analyzed the aqueous extract of P. juliflora leaves as a possible mouthwash solution; results showed that this aqueous extract inhibited the growth of Enterococcus faecalis, S. aureus, Prevotella intermedia, Porphyromonas gingivalis and Aggregatibacter actinomycetemcomitans. Note that the inhibition zone diameter of S. aureus was found to be 10.16 ± 0.28 mm, which is still lower than the inhibition zone diameter obtained with the extraction method used in our study (11.06 ± 0.01 mm); note that both extraction methods involved water as a final solvent of the extracts21. A recent study examined the enhanced effect of P. juliflora methanolic extract combined with silver nanoparticles as an antimicrobial agent against strains of Candida sp. and Staphylococcus sp. Leaves of P.
juliflora were dried, ground, and soaked in methanol for 24 h, and the extract was filtered and mixed with a solution of silver nitrate (AgNO3). Upon color change, the precipitate was washed, dried, and used for further analysis. Results showed that 1 µg/ml of this nano-powder was capable of totally inhibiting the growth of MRSA, C. albicans and C. tropicalis22. Note that in our study, both S. aureus and C. albicans were completely inhibited by our extract with an MIC of 500 µg/ml. The combined effect of P. juliflora leaf aqueous extract with silver nanoparticles encapsulated in chitosan showed a larger inhibition zone diameter (22 mm) against E. coli than the current study (9.13 mm); this is attributed to the combined antimicrobial effect of chitosan and the synthesized nanoparticles23. Zinc oxide nanoparticles derived from P. juliflora leaf aqueous extract also had a successful antimicrobial effect against E. coli and B. subtilis, with inhibition zone diameters of 23 mm and 19 mm, respectively, at concentrations of 100 µg/ml24. These findings show that plant extracts enhanced with nanoparticles or coating materials lead to higher antibacterial efficacy. However, the nanoparticle-enhanced studies lacked control batches testing the antimicrobial effect of the metal nanoparticles themselves. Unlike PJ-WS-LE extract, whose phytochemicals were dissolved in water, these materials require further analysis of their possible health risks to ensure that they are safe for the environment if they are to be used in the field or as a postharvest controller. Various preliminary studies also showed the antifungal effect of various extracts of P. juliflora. A study by Raghavendra et al.25 showed that the aqueous leaf extract of P. juliflora significantly decreased the mycelial growth of A. alternata isolated from tobacco leaves, with a PIMG of 71.59% at a concentration of 24%. This number is comparable to our results, in which the PIMG of A.
alternata was found to be 80.57% after exposure to 2 mg/ml of PJ-WS-LE extract and 100% after exposure to 20 mg/ml of the extract. On the other hand, a study that examined the effect of the methanolic extract of P. juliflora leaves on the soil-borne pathogenic fungus Sclerotium rolfsii did not find any significant antifungal activity against this species26. Bazie et al.27 tested the antifungal effect of leaf extracts from different plant species against Colletotrichum musae, the causative agent of postharvest banana anthracnose, and found that the methanolic extract of P. juliflora leaves showed the best fungal growth inhibition, followed by Acacia albida. A 2009 study that examined the antifungal effect of leaf extracts from multiple plant species against different Fusarium species found that a fresh solution of P. juliflora leaves resulted in a PIMG of 80.25% against a species of F. oxysporum, compared to 59.2% obtained in our study. Note that the extraction methods, including the final solvent and the concentrations used, differ between the two studies11. A more recent study showed that fresh extract from the maceration of P. juliflora leaves had low antifungal efficacy against Alternaria solani, with a PIMG of 27.14% at a 10% concentration, while our extraction method showed very high antifungal efficacy (100%) against a strain of Alternaria (A. alternata)28. Many studies have aimed to find the MIC of P. juliflora crude extracts, enriched extracts, and purified compounds against a scattered group of microorganisms; however, none was comprehensive enough to test the efficacy of a particular extract against a range of spoilage organisms. Santos et al.29 subjected the ethanolic extract of P. juliflora pods to acid–base treatments to obtain alkaloid-enriched extracts.
The ethanolic extract in that study was suspended in acetic acid solution, and then the aqueous phase was extracted with chloroform or ethyl acetate at different pH levels. The basic chloroformic extract showed the best antimicrobial efficacy as well as a high alkaloid concentration. The MICs of our PJ-WS-LE extract against S. aureus and B. subtilis were 0.5 mg/ml and 0.125 mg/ml, respectively, while the MICs obtained using the alkaloid-enriched extract, particularly the basic chloroformic extract, were 0.05 mg/ml for S. aureus and above 0.1 mg/ml for B. subtilis. It is worth noting that crude extracts could be safer for human health and for the environment than concentrated phytochemicals, which might exhibit toxicity. An older study by Singh and Verma18 found that the MIC of the alkaloid-rich fraction of the ethanolic extract of P. juliflora leaves on strains of S. aureus and B. cereus was 0.05 mg/ml, compared to our MICs of 0.5 mg/ml and 0.125 mg/ml for similar strains of S. aureus and B. subtilis. The lower MICs in Singh and Verma's research can be explained by the concentrated active phytochemicals in the alkaloid-rich fraction. Both previously discussed studies showed lower MICs than our study because the use of concentrated active compounds normally results in lower MICs than the use of crude extracts. However, if crude extracts continue to demonstrate successful inhibitory activity against microorganisms, they will be of commercial interest, as their feasible extraction methods can be carried out in developing nations at a low cost. In addition, crude extracts are likely to have lower toxicity than concentrated pure phytochemicals30.
The investigation of another species of Prosopis showed that the aqueous ethanolic extract of the stem bark of Prosopis chilensis, which is rich in tannins, has significant antimicrobial activity against Micrococcus luteus, Bacillus subtilis, Bacillus cereus, Staphylococcus aureus, Streptococcus pneumoniae, and Cryptococcus albidus. The MICs of the stem bark extract against B. subtilis and S. aureus were 0.16 mg/ml and 0.62 mg/ml, respectively, which are very close to the MICs of our PJ-WS-LE extract (0.125 mg/ml and 0.5 mg/ml)31. The similarity between these results suggests the presence of common active phytochemicals in the two extracts. Alkaloid extracts of P. juliflora showed complete inhibition of A. alternata growth at a concentration of 1 mg/ml, while common fungicides on the market are typically used at a concentration of 2 mg/ml. This number is equal to the MIC of our crude extract against A. alternata25. The spore germination experiment showed no significant difference in the spore germination of C. cladosporioides in the presence or absence of the extract, which was not consistent with the previous experimental results. It is very possible that shaking allowed the fungus to escape the extract's inhibition mechanism. It is also worth noting that A. niger showed a very low level of spore germination (4.67%) in the presence of 8 mg/ml of the extract within 24 h. This was consistent with previous results showing that A. niger growth is delayed when exposed to the extract, and those results might change if the incubation time is increased. Our research showed that PJ-WS-LE extract did not completely inhibit the fungal growth and spore germination of A. niger. However, the smooth hyphal network of this Aspergillus strain was not maintained when it was exposed to 8 mg/ml of the extract for 24 h.
Similarly, the treatment of Aspergillus ochraceus with 3-carene, a component of Melaleuca alternifolia oil, did not fully inhibit fungal growth, yet it damaged the external morphology of hyphae and spores as observed using SEM32. Such damage might play a role in decreasing fungal pathogenesis in-vivo. Treatment of B. cinerea with PJ-WS-LE extract resulted in completely collapsed spores and degenerative changes in the mycelium that could be due to a decrease in exopolysaccharide (EPS) formation in the outer membrane. Similar results were observed when B. cinerea was treated with phenazine-1-carboxylic acid (PCA) produced by the Pseudomonas aeruginosa LV strain. Hyphae of the treated fungi lost smoothness and formed unusual surface bulges33. The growth of F. oxysporum was not fully inhibited by our crude extract in-vitro, yet SEM images showed severe damage in the mycelium and spores of both tested strains. Similarly, SEM imagery of Fusarium sporotrichioides treated with Mentha piperita essential oil showed distorted and shrunken mycelia compared to the control34. Another study conducted on an economically important pathogenic strain, Fusarium verticillioides, also showed slender, shrunken, and winding hyphae that lost their linearity, with some depressions on the surface35. Various antifungal agents cause morphological changes similar to those caused by PJ-WS-LE extract. When the two pathogenic fungi Microsporum gypseum and Trichophyton mentagrophytes were treated with the lyophilisate of granular gland secretion from Duttaphrynus melanostictus frogs, the fungal cells showed cellular deformations and pores. Hyphae of M. gypseum showed shrinkage, while those of T. mentagrophytes showed shrinkage and pores36. Recently, the treatment of Villosiclava virens, an emerging pathogen of rice panicles, with the essential oils of 18 plants showed promising results with cinnamon bark oil, cinnamon oil and trans-cinnamaldehyde. Scanning electron microscopic imaging of V.
virens treated with the vapor of one of the effective essential oils exhibited degenerative changes in hyphal morphology, including exfoliated flakes, applanation, vacuolation and shriveling. Treatment by direct contact with the essential oils caused more severe exfoliated-flake damage, with collapse and blistering37. The morphological changes caused by PJ-WS-LE extract are indicators of stressed fungi, which might lose their pathogenicity against fresh produce with time. The in-vivo preservative effect of PJ-WS-LE extract on cherry tomatoes was comparable to the effect of volatile organic compounds (VOCs) from the low-fermenting yeast Lachancea thermotolerans on artificial F. oxysporum infection. That assay showed that 76% of the control cherry tomatoes were infected, while none of the treated batches showed any infection38. Zeidan et al.'s results are similar to those of PJ-WS-LE extract, which showed 100% B. cinerea infection and 90% A. alternata infection in the control batches, with only 4.16% of the treated samples showing infection for each of the evaluated fungal strains. However, applying a liquid solution such as PJ-WS-LE extract in the field or as a post-harvest disease controller could be more practical. PJ-WS-LE extract has a novel, safe and affordable extraction method that makes it a promising anti-spoilage agent; the growth of major spoilage organisms was totally inhibited in its presence. All tested spoilage organisms showed stress-like symptoms under the SEM when exposed to low concentrations of the extract. The in-vivo assay showed successful protection of inoculated fruits against fungal spoilage and hyphal development. Future studies on this extract would include large-scale in-vivo analysis and fractionation work to identify the active phytochemicals; such additional studies are needed to further prove the extract's efficacy for future promotion as a replacement for the anti-spoilage chemicals used in the market.
Prosopis juliflora samples collection and extracts preparation

Prosopis juliflora leaves and fruits were collected from healthy mature plants in the fields of Qatar University Campus after obtaining the proper permission. All methods were carried out in accordance with relevant guidelines and regulations. In Qatar the plant is considered an exotic and invasive species. Leaves and fruits were washed thoroughly with tap water followed by sterilized distilled water and left to dry in a sterilized oven at 45 °C. Dried leaves and fruits were ground to a powder and then stored at 4 °C until extract preparation39. Twenty grams of leaf or fruit powder was mixed with 200 ml of sterile distilled water for the aqueous extract or 70% ethanol for the ethanolic extract; the bottles were then transferred to a water bath at 45 °C with shaking (50 cycles/min) for 48 h. Note that in aqueous extract preparation the bottles were pre-heated to 70 °C for 3 h before incubating at 45 °C. The extracted liquid was centrifuged at 4500 rpm for 5 min40. The supernatant was filtered using Whatman filter papers and poured into 150 mm glass Petri plates, which were transferred to a pre-sterilized oven at 45 °C to evaporate the solvent. The dried (powdery or gummy) material was aseptically scraped off, weighed and stored at 4 °C. A second elution with 70% ethanol was also prepared16,26. Ethanolic extracts were dissolved in: (a) 4% DMSO, to completely dissolve the extracted material into a homogeneous solution; (b) sterile distilled water, to prepare a suspension of 150 mg/ml. Tubes of the suspension were centrifuged, and the water-soluble supernatant was taken and stored at 4 °C to be used in the analysis. The water-insoluble pellet was incubated at 45 °C to dry. The pellet was then weighed, and the new concentration of the solution was re-calculated. It was noticed that the mass of the pellet was around 9.5% of the initial total mass of the ethanolic extracted powder, which makes the final concentration of P.
juliflora water-soluble leaf ethanolic (PJ-WS-LE) extract around 125 mg/ml. All suspensions were sterilized using a 0.2 µm syringe filter. Figure 9 represents the steps of preparation of PJ-WS-LE extract. The extraction yield of PJ-WS-LE extract was also determined as follows41: $$ {\text{Extraction yield}} = \frac{{{\text{Mass of extracted powder}} \times 100}}{{\text{Mass of leaves powder used}}}. $$ Flow diagram of PJ-WS-LE extract preparation.

Food spoiling microbial isolates

The antimicrobial in-vitro efficacy of P. juliflora water-soluble leaf ethanolic (PJ-WS-LE) extract was tested against bacterial, fungal and yeast strains that are known to cause food contamination. Bacterial and yeast strains were taken from the laboratory collection (previously isolated and identified, or purchased) currently stored at the Department of Biological and Environmental Sciences, Qatar University. The chosen microbial types included E. coli ATCC 11775, Bacillus, S. aureus ATCC BAA976, P. mirabilis and C. albicans. The fungal strains used were isolated and microscopically identified in a previous study (Table 6)42.

Table 6 Fungal strains used in the in-vitro analysis study.

The fungal strains had their DNA extracted using the DNeasy Plant Mini Kit (QIAGEN, Germany) according to the manufacturer's protocol. The Internal Transcribed Spacer (ITS) regions of fungal ribosomal DNA (rDNA) were amplified by PCR using ITS1 and ITS4 primers. PCR products were purified using the Invitrogen Quick PCR Purification Kit (Thermo Fisher Scientific, US) as indicated by the manufacturer. Sanger sequencing was used to sequence the PCR products. BioEdit software was used to read the sequencing results. Sequences of DNA fragments were used in fungal species identification using NCBI-BLAST43.

Agar diffusion method for antifungal activity evaluation

The antifungal effects of the prepared extracts were tested against A. alternata, B. cinerea, F. oxysporum, A. niger, C. gloeosporioides, C.
cladosporioides and G. candidum. Potato dextrose agar (PDA) (Difco-USA) plates containing 20 mg/ml of aqueous or ethanolic extracts of P. juliflora leaves or fruits were prepared by adding the appropriate volume of the previously prepared stock solutions to 50 °C PDA medium26; both ethanolic extract stock solutions, dissolved in sterile distilled water and in 4% DMSO, were tested. Once the media solidified, a 6 mm agar plug was aseptically removed from the center of each Petri dish. Plugs of the same size from four-day-old fungal cultures were transferred aseptically to the experimental plates. The experiment was run in four replications. Plates with 1 ml of Clotrimazole per plate (10 mg/ml) were used as a positive control. Pure PDA plates and PDA plates with 4% DMSO were also inoculated and used as negative controls. All plates were incubated at 25 °C for 5 days. Percent inhibition of mycelial growth (PIMG) was then calculated using the formula below. Mean mycelial growth diameter was determined by measuring the diameter of the infected area in two perpendicular directions33. $$ {\text{PIMG}} = \frac{{\left( {{\text{dc}} - {\text{dt}}} \right) \times 100}}{{{\text{dc}}}}. $$ dc and dt are the average mycelium diameters in the negative control plate and in the treated plate, respectively5.

Antibacterial activity assay

The disk diffusion method was used to characterize the antimicrobial effect of aqueous and ethanolic extracts of P. juliflora leaves and fruits on bacterial (E. coli, S. aureus, B. subtilis, and P. mirabilis) and yeast (C. albicans) isolates. A microbial lawn of each isolate was spread on Mueller–Hinton agar (HiMedia-India). Sterile paper disks (Thermo Scientific Oxoid Blank Antimicrobial Susceptibility Disks) of 5 mm diameter were inoculated with 20 µl of different concentrations of each extract (1, 5, 10, 20, and 50 mg/ml). Dry disks were then transferred with sterile forceps onto the inoculated agar plates.
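The PIMG formula can be expressed directly in code; the diameters below are illustrative values, not measurements from this study:

```python
def pimg(dc: float, dt: float) -> float:
    """Percent inhibition of mycelial growth.

    dc: mean mycelium diameter (mm) on the negative-control plate
    dt: mean mycelium diameter (mm) on the treated plate
    Each mean is taken over two perpendicular measurements per plate.
    """
    return (dc - dt) * 100 / dc

# Illustrative diameters (mm)
print(pimg(60.0, 24.0))  # 60.0 -> 60% inhibition
print(pimg(60.0, 0.0))   # 100.0 -> complete inhibition
```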
Disks with sterile distilled water were used as negative controls and antibiotic disks were used as positive controls. The antibiogram of the bacterial isolates was also determined to compare the effect of PJ-WS-LE extract to commonly used antibiotics (ampicillin (AMP), amoxicillin (AMX), bacitracin (B), carbenicillin (CB), and cephalothin (CR)) (Oxoid-USA). Plates were incubated at 37 °C for 24 h. The experiment was conducted in triplicate44.

Preliminary results evaluation

Of the leaf and fruit aqueous and ethanolic extracts, only those with good preliminary results in the antifungal and antibacterial assays were selected for further testing. In addition, only a solvent that showed no antimicrobial activity on its own was further used. The pour plate method was used to test the effect of PJ-WS-LE extract against Penicillium strains. A haemocytometer was used to prepare a spore suspension of 3 × 10⁴ spores/ml; a serial dilution was then prepared to reach a suspension of 30 spores/ml, which was used for inoculation. One milliliter of the final suspension was first incubated for 90 min at 25 °C in 20 mg/ml of PJ-WS-LE extract to increase the contact time between the spores and the extract. Spores were then inoculated onto PDA plates containing 20 mg/ml of PJ-WS-LE extract. Percent colony growth inhibition was later calculated. $$ {\text{Percent colony growth inhibition}} = \frac{{\left( {{\text{Nc}} - {\text{Nt}}} \right) \times 100}}{{{\text{Nc}}}}. $$ Nc and Nt are the number of colonies growing in the control plate and in the treated plate, respectively. Powder and liquid extracts were stored at 4 °C for 6 months. Their antifungal activity against the previously tested seven fungal strains was re-evaluated using the agar diffusion method to determine active phytochemical stability. Minimum inhibitory concentrations (MIC) of PJ-WS-LE extract against all previously tested microorganisms were determined using 96-well plates.
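The spore dilution series and the colony-based inhibition formula can be sketched as follows (the colony counts are hypothetical):

```python
def serial_dilution_steps(start: float, target: float, factor: float = 10.0) -> int:
    """Number of 1:factor serial dilutions needed to reduce
    `start` spores/ml to `target` spores/ml."""
    steps = 0
    conc = start
    while conc > target:
        conc /= factor
        steps += 1
    return steps

def percent_colony_inhibition(nc: int, nt: int) -> float:
    """Nc, Nt: colony counts on the control and treated plates."""
    return (nc - nt) * 100 / nc

# 3 × 10^4 spores/ml down to 30 spores/ml takes three 1:10 dilutions
print(serial_dilution_steps(3e4, 30))     # 3
print(percent_colony_inhibition(30, 3))   # 90.0 (hypothetical counts)
```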
Each well was inoculated with 100 μl of nutrient broth (Difco-USA) for bacterial MIC determination or 100 μl of potato dextrose broth (PDB) (HIMEDIA-India) for fungal and yeast MIC determination. Twenty different concentrations of PJ-WS-LE extract were evaluated (from 50 to 0.125 mg/ml). Wells were inoculated with the various microorganism suspensions; spore suspensions of 10⁴ spores/ml were used. All concentrations were tested in four replications. Clear medium was used as a negative control. Wells with no microbial suspensions were also used as control trials. Fungicide (1% Clotrimazole) and antibiotic (7.5 mg/ml Ampicillin) were used as positive controls for fungal and bacterial testing, respectively. Resazurin (HIMEDIA-India) was added to all wells to monitor bacterial and fungal growth. Resazurin is a blue dye that becomes pink and fluorescent upon cellular activity45. Resazurin stock solution was prepared by dissolving 0.27 g of Resazurin powder in 40 ml of sterile distilled water. The minimum inhibitory concentration (MIC) of PJ-WS-LE extract for the various microorganisms was observed and recorded. The MIC is the lowest concentration that does not show a change of Resazurin color within the incubation time46. Well plates were incubated at 25 °C and results were recorded after a 72 h incubation period. Spore suspensions of the nine molecularly identified fungal strains were inoculated into tubes of different concentrations of PJ-WS-LE extract around the fungal MIC (2, 4, and 8 mg/ml). Negative control (no extract) tubes were also prepared. The experiment was conducted in triplicate following the experimental design in Table 7.

Table 7 Spores germination experimental design.

Test tubes were incubated at 25 °C with 150 rpm shaking for 18 h. Slides were prepared from each test tube, and conidia were stained using cotton blue (6 µl from the test tube + 6 µl cotton blue).
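The resazurin-based MIC read-out reduces to finding the lowest concentration with no colour change; a sketch with hypothetical wells (the concentration series here is illustrative, not the twenty-point series used in the study):

```python
def read_mic(wells: dict[float, bool]) -> float:
    """wells maps extract concentration (mg/ml) -> growth observed
    (True = resazurin turned pink). Returns the lowest concentration
    at which no growth occurred, i.e. the MIC."""
    inhibited = [conc for conc, grew in wells.items() if not grew]
    if not inhibited:
        raise ValueError("no inhibitory concentration in the tested range")
    return min(inhibited)

# Hypothetical plate row: growth at 1 mg/ml and below, none at >= 2 mg/ml
row = {8.0: False, 4.0: False, 2.0: False, 1.0: True, 0.5: True}
print(read_mic(row))  # 2.0
```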
The number of germinated conidia was counted out of 100 random conidia on three slides for each test tube, and averages were calculated. A spore is considered germinated if the length of the germination tube is at least half the length of the spore itself. Slides were also prepared from the negative control tubes before incubation to find the starting number of originally germinated spores. Spore germination percentages and percent inhibition of the extract were calculated as follows: $$ {\text{Percent inhibition }}\left( {\text{\% }} \right) = \frac{{\left( {{\text{Gc}} - {\text{Gt}}} \right) \times 100}}{{{\text{Gc}}}}. $$ Gc and Gt are the germination rates in the control and treated cultures, respectively37,47. Microorganisms that were totally inhibited by the PJ-WS-LE extract had their mode of growth inhibition determined. PDA agar plugs were removed from the middle of clean PDA plates (without extracts), and plugs of fungal mycelia whose growth had been inhibited by the extract were transferred to them. Plates were incubated at 25 °C for 7 days. Positive growth of the sub-culture shows fungistatic activity, while negative growth demonstrates a fungicidal effect5. The mode of inhibition of PJ-WS-LE extract on bacterial and fungal spores was also determined. Spore suspensions (10⁴ spores/ml) incubated in PJ-WS-LE extract (8 mg/ml) for 48 h were used to inoculate (100 µl each) clean PDA plates. Control tubes without plant extract were also used to inoculate PDA plates for comparison. This inoculation was conducted in four replications. Plates were incubated at 25 °C for 5 days. As for bacteria, 1 ml of overnight culture of each strain was inoculated into nutrient broth tubes containing 10 ml of 8 mg/ml PJ-WS-LE extract; tubes were incubated for 48 h at 37 °C. One hundred microliters of each tube was then inoculated onto clean nutrient agar (NA, Scharlau) plates.
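Germination scoring and the percent-inhibition formula can be combined in a short sketch; the conidia counts below are hypothetical, with the treated values chosen to mirror the ~4.67% A. niger germination reported above:

```python
def germination_percent(counts: list[int], scored_per_slide: int = 100) -> float:
    """Mean germination % over replicate slides. A conidium is scored as
    germinated when its germ tube is at least half the spore length."""
    return sum(c * 100 / scored_per_slide for c in counts) / len(counts)

def percent_inhibition(gc: float, gt: float) -> float:
    """Gc, Gt: germination rates in the control and treated cultures."""
    return (gc - gt) * 100 / gc

control = germination_percent([92, 88, 90])  # 90.0 (hypothetical)
treated = germination_percent([5, 4, 5])     # ~4.67
print(round(percent_inhibition(control, treated), 1))  # 94.8
```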
Control tubes without plant extract were also used to inoculate nutrient agar plates for comparison. This experiment was also conducted in four replications. All plates were incubated at 37 °C for 48 h. To check for any damage to cell structures, the hyphae and spores of A. niger, B. cinerea, G. candidum, C. gloeosporioides, A. alternata, P. citrinum, C. cladosporioides and F. oxysporum were treated with PJ-WS-LE extract and examined using a scanning electron microscope (SEM). Two test tubes for each of the eight fungal species evaluated were prepared as in Table 8.

Table 8 Pre-setting of germination tubes used in SEM observation.

Tubes were incubated at 25 °C with 150 rpm shaking for 24 h. Tubes were then centrifuged (5000 rpm) and the pellet was washed twice with PBS, pH 7.4. Cells were fixed by resuspending the pellet in a solution of 2.5% glutaraldehyde + 3.6% formalin and incubated at 4 °C for 18 h. Tubes were then centrifuged (5000 rpm) and washed three times with PBS. Pellets were dehydrated in a serial ethanol dilution (25, 50, 70, 80, 90, and 100%) for 30 min per dilution. Samples were then smeared on silver holders in thin films and left to air dry before gold coating using an Agar Sputter Coater. The SEM observations were made using a Nova NanoSEM 45032,37.

Effect of PJ-WS-LE extract against A. alternata and B. cinerea induced infection in cherry tomatoes

Organic locally-grown cherry tomato samples were purchased from the market and washed with disinfecting soap and water followed by sterile distilled water. Clean cherry tomato samples were dried using heat-sterilized towels and categorized based on their weight ranges. Eighty-four samples were divided into 14 treatment categories of six samples each; samples in different categories had similar shapes and weight ranges.
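The division into 14 categories can be enumerated as a bookkeeping check; the category names follow the design described in the text (wounding status crossed with negative control, fungal control, curative, and preventive treatments):

```python
from itertools import product

wounding = ["wounded", "non-wounded"]
treatments = [
    "negative control",        # sterile, no inoculation
    "B. cinerea control",      # spores only
    "A. alternata control",    # spores only
    "B. cinerea curative",     # spores first, extract 2 h later
    "B. cinerea preventive",   # extract first, spores 2 h later
    "A. alternata curative",
    "A. alternata preventive",
]

categories = list(product(wounding, treatments))
print(len(categories))       # 14 treatment categories
print(len(categories) * 6)   # 84 samples at six per category
```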
Half of the cherry tomato samples were wounded; wounds were made by inserting a sterile syringe tip 5 mm deep at two opposite positions near the calyx of each sample. Two negative control treatment categories were made of sterile wounded and non-wounded samples without any treatment. The B. cinerea control and A. alternata control groups were samples treated with 15 μl of spore suspension alone on the tomato calyx in wounded and non-wounded treatment categories. The spore suspension of B. cinerea was prepared at 10⁴ spores/ml using a haemocytometer, while the A. alternata spore suspension was prepared at 10⁷ spores/ml. To test the curative effect of the extract, 15 μl of the spore suspension was applied on the calyx first and left to dry for 2 h, after which the calyx was sprayed with a solution of 8 mg/ml of PJ-WS-LE extract. To test the preventive effect of the extract, the samples were first sprayed with a solution of 8 mg/ml of PJ-WS-LE extract and left for 2 h to dry, and then 15 μl of the spore suspension was applied. Samples were all kept in tightly closed sterile paper boxes at room temperature, and their quality and infection rate were monitored daily. The experiment ended after 2 weeks. Descriptive statistics were obtained using the SPSS program (IBM SPSS Statistics 24.0). One-way ANOVA was used to calculate the significance of the effect of extract concentration at p ≤ 0.05, and Tukey's test was used to separate the means at p ≤ 0.05. All assays were performed in four replications, and results are presented in graphs showing means with standard error bars37. Gil, J. G., López, J. & Henao-Rojas, J. Causes of Hass avocado fruit rejection in preharvest, harvest, and packinghouse: Economic losses and associated variables. Agronomy 10, 1–13. https://doi.org/10.3390/agronomy10010008 (2020). Cassman, K. G. & Grassini, P. A global perspective on sustainable intensification research. Nat. Sustain. 3, 262–268.
\begin{document} \title{$L$-invariants of Hilbert modular forms} \author{Bingyong Xie \\ \small Department of Mathematics, East China Normal University, Shanghai, China \\ \small [email protected]} \date{} \maketitle \begin{abstract} In this paper we show that under a certain condition the Fontaine--Mazur $L$-invariant for a Hilbert eigenform coincides with its Teitelbaum type $L$-invariant, and thus prove a conjecture of Chida, Mok and Park. \end{abstract} \section*{Introduction} In the remarkable paper \cite{GS} Greenberg and Stevens proved a formula for the derivative at $s=1$ of the $p$-adic $L$-function of an elliptic curve $E$ over $\BQ$ (i.e. a modular form of weight $2$) when $p$ is a prime of split multiplicative reduction, thereby proving the exceptional zero conjecture proposed by Mazur, Tate, and Teitelbaum \cite{MTT}. An important quantity in this formula is the $L$-invariant, namely $\CL(E)=\log_p(q_E)/v_p(q_E)$, where $q_E\in p \BZ_p$ is the Tate period of $E$. For higher weight modular forms $f$, a number of different candidates for the $L$-invariant $\CL(f)$ have been proposed since \cite{MTT}. These include: (1) Fontaine-Mazur's $L$-invariant $\CL_{FM}$ using $p$-adic Hodge theory, (2) Teitelbaum's $L$-invariant $\CL_T$ built from the theory of $p$-adic uniformization of Shimura curves, (3) an invariant $\CL_C$ based on Coleman's theory of $p$-adic integration on modular curves, (4) an invariant $\CL_{DO}$ due to Darmon and Orton using ``modular-form valued distributions'', (5) Breuil's $L$-invariant $\CL_B$ coming from $p$-adic Langlands theory. \noindent All of these invariants are now known to be equal \cite{BDI, Breuil, CI, IoSp}. The reader is invited to consult Colmez's paper \cite{Cz2005} for a historical account.
The exceptional zero conjecture for higher weight modular forms has been proved by Stevens using $\CL_C$ \cite{Stevens}, by Kato--Kurihara--Tsuji using $\CL_{FM}$ (unpublished), by Orton using $\CL_{DO}$ \cite{Orton}, by Emerton using $\CL_B$ \cite{Em} and by Bertolini--Darmon--Iovita using $\CL_T$ \cite{BDI}. In \cite{Mok} Mok addressed special cases of the exceptional zero conjecture in the setting of Hilbert modular forms. In \cite{CMP} Chida, Mok and Park introduced the Teitelbaum type $L$-invariant for Hilbert modular forms, and conjectured that the Teitelbaum type $L$-invariant coincides with the Fontaine-Mazur $L$-invariant. We state this conjecture precisely below. Fix a prime number $p$. Let $F$ be a totally real field, $g=[F:\BQ]$ and $\fp$ a prime ideal of $F$ above $p$. Let $f_\infty$ be a Hilbert eigen newform with even weight $(k_1, \cdots, k_g, w)$ and level exactly divisible by $\fp$ (i.e. divisible by $\fp$ but not by $\fp^2$). Here ``even weight'' means that $k_1,\cdots ,k_g, w$ are all even. On the one hand, by Carayol's result \cite{Car2} we can attach to $f_\infty$ a $p$-adic representation of $G_{F}=\mathrm{Gal}(\overline{F}/F)$. This Galois representation (restricted to $G_{F_\fp}$) is semistable and thus we can attach to it a Fontaine-Mazur $L$-invariant $\CL_{FM}(f_\infty)$. On the other hand, Chida, Mok and Park attached to an automorphic form $\mathbf{f}$ on a totally definite quaternion algebra over $F$ (of the same weight $(k_1, \cdots, k_g,w)$) a Teitelbaum type $L$-invariant $\CL_T(\mathbf{f})$ under the following assumption \begin{equation*} \hskip -100pt (\text{CMP}) \hskip 100pt \mathbf{f} \text{ is new at } \fp \text{ and } U_\fp \mathbf{f} = \CN \fp ^{w/2} \mathbf{f}. \end{equation*} Both $\CL_{FM}(f_\infty)$ and $\CL_T(\mathbf{f})$ are vector valued. See Section \ref{ss:FM-L} and Section \ref{ss:Teitelbaum} for their precise definitions.
\begin{conj} If $f_\infty$ and $\mathbf{f}$ are associated to each other by the Jacquet-Langlands correspondence, then $\CL_{FM}(f_\infty)=\CL_T(\mathbf{f})$. \end{conj} Our main result is the following \begin{thm} \label{thm:main} $($=Theorem \ref{thm:main-text}$)$ Assume that $F$ satisfies the following condition: $$\text{ there is no place other than } \fp \text{ above }p. $$ \noindent Let $f_\infty$ and $\mathbf{f}$ be as above. Then $\CL_{FM}(f_\infty)=\CL_T(\mathbf{f})$. \end{thm} We sketch the proof of Theorem \ref{thm:main}. Our method is similar to that in \cite{IoSp}. The Galois representation attached to $f_\infty$ comes from the \'etale cohomology $H^1_{\mathrm{et}}$ of some local system on a Shimura curve. The technical part of our paper is the computation of the filtered $\varphi_q$-isocrystal attached to this local system. On one hand, Coleman and Iovita \cite{CI} provided a precise description of the monodromy operator, which is helpful for computing Fontaine-Mazur's $L$-invariant. On the other hand, the Teitelbaum type $L$-invariant is closely related to the de Rham cohomology of the filtered $\varphi_q$-isocrystal by Coleman integration and Schneider integration. Our precise description of the filtered $\varphi_q$-isocrystal allows us to compute Fontaine-Mazur's $L$-invariant and the Teitelbaum type $L$-invariant. Finally, analyzing the relation among the monodromy operator, Coleman integration and Schneider integration finishes the proof. When $F$ has more than one place (say $r$ places) above $p$, our method of computing filtered $\varphi_q$-isocrystals is not valid. To make it work, one might have to consider the Shimura variety studied by Rapoport and Zink \cite[Chapter 6]{RZ} (which is of dimension $r$) instead of the Shimura curve. Coleman and Iovita's result is valid only for curves, and so can not be applied directly. We plan to address this problem in a future work. Our paper is organized as follows. 
Fontaine-Mazur's $L$-invariant is introduced in Section \ref{sec:FM-L-inv}. Coleman and Iovita's result is recalled in Section \ref{sec:CoIo}. Section \ref{sec:univ-spec-mod} is devoted to computing the filtered $\varphi_q$-isocrystal attached to the universal special formal module. We introduce various Shimura curves, and study their $p$-adic uniformizations following Rapoport and Zink, respectively in Section \ref{sec:sh-curves} and Section \ref{sec:p-unif}. In Section \ref{sec:comp-iso} we use the result of Section \ref{sec:univ-spec-mod} to determine the filtered $\varphi_q$-isocrystals attached to various local systems on Shimura curves. In Section \ref{sec:cover-hodge} we recall the theory of de Rham cohomology of certain local systems, and in Section \ref{sec:Teit-L-inv} we recall Chida, Mok and Park's construction of the Teitelbaum type $L$-invariant. Finally, in Section \ref{sec:compare} we combine the results of Section \ref{sec:CoIo}, Section \ref{sec:comp-iso} and Section \ref{sec:cover-hodge} to prove our main theorem. \section*{Notations} For two $\BQ$-algebras $A$ and $B$, write $A\otimes B$ for $A\otimes_{\BQ}B$. For a ring $R$ let $R^\times$ denote the multiplicative group of invertible elements in $R$. We will identify a linear algebraic group over $\BQ$ with its group of $\BQ$-valued points. Let $F$ be a totally real number field, $g=[F:\BQ]$. Let $p$ be a fixed prime. Suppose that $p$ does not split in $F$, i.e. there exists exactly one place of $F$ above $p$, denoted by $\fp$. If $q$ is a power of $p$, we use $v_p(q)$ to denote $\log_pq$. Let $\BA_f$ denote $\BQ\otimes_\BZ \widehat{\BZ}$ and let $\BA_f^p$ denote $\BQ\otimes_\BZ (\prod_{\ell\neq p}\BZ_\ell)$. Similarly for any number field $E$ let $\BA_{E,f}$ denote $E\otimes_\BZ\widehat{\BZ}$, the group of finite ad\`eles of $E$. Fix an algebraic closure of $F_\fp$, denoted by $\overline{F_\fp}$, and let $\BC_p$ be the completion of $\overline{F_\fp}$ with respect to the $p$-adic topology.
In this way we have fixed an embedding $ F_\fp \hookrightarrow \BC_p$. The Galois group $G_{F_\fp}=\Gal(\overline{F_\fp}/F_\fp)$ can be naturally identified with the group of continuous $F_\fp$-automorphisms of $\BC_p$. \section{Fontaine-Mazur invariant} \label{sec:FM-L-inv} \subsection{Monodromy modules and Fontaine-Mazur $L$-invariant} \label{sec:mono-fontaine-mazur} Let $F_{\fp 0}$ be the maximal absolutely unramified subfield of $F_\fp$. Let $q$ be the cardinality of the residue field of $F_\fp$. Let $\RB_\cris, \RB_{\mathrm{st}}$ and $ \RB_\dR$ be Fontaine's period rings \cite{Fon}. As is well known, there are operators $\varphi$ and $N$ on $\RB_{\mathrm{st}}$, and a descending $\BZ$-filtration on $\RB_\dR$; $\RB_\cris$ is a $\varphi$-stable subring of $\RB_{\mathrm{st}}$ and $N$ vanishes on $\RB_\cris$. Put $\RB_{{\mathrm{st}}, F_\fp}:=\RB_{\mathrm{st}}\otimes_{F_{\fp 0}} F_\fp$; $\RB_{{\mathrm{st}}, F_\fp}$ can be considered as a subring of $\RB_\dR$. We extend the operators $\varphi_{q}=\varphi^{v_p(q)}$ and $N$ $F_\fp$-linearly to $\RB_{{\mathrm{st}}, F_\fp}$. Let $K$ be either a finite unramified extension of $F_\fp$ or the completion of the maximal unramified extension of $F_\fp$ in $\BC_p$. Write $G_K$ for the group of continuous automorphisms of $\BC_p$ fixing elements of $K$. By our assumption on $K$ we have $$ (\RB_{\cris, F_\fp})^{G_K} = (\RB_{{\mathrm{st}}, F_\fp})^{G_K} = (\RB_{\dR})^{G_K}=K. $$ Let $L$ be a finite extension of $\BQ_p$. For an $L$-linear representation $V$ of $G_{K}$, we put $$D_{{\mathrm{st}}, F_\fp}(V):=(V\otimes_{\BQ_p}\RB_{{\mathrm{st}}, F_\fp})^{G_{K}}.$$
If $V$ is semistable, then $D_{{\mathrm{st}}, F_\fp}(V)$ is a filtered $(\varphi_q, N)$-module: the $(\varphi_q,N)$-module structure is induced from the operators $\varphi_q=1_V \otimes \varphi_q$ and $N=1_V\otimes N$ on $V \otimes_{\BQ_p} \RB_{{\mathrm{st}}, F_\fp}$; the filtration comes from that on $V\otimes_{\BQ_p}\RB_\dR$. Note that $\varphi_q$ and $N$ are $L\otimes_{\BQ_p}K$-linear. If $L$ splits $F_\fp$, then $L\otimes_{\BQ_p}K$ is isomorphic to $\bigoplus_\sigma L\otimes_{\sigma,F_\fp}K$, where $\sigma$ runs through all embeddings of $F_\fp$ into $L$. Here the subscript $\sigma$ under $\otimes$ indicates that $F_\fp$ is considered as a subfield of $L$ via $\sigma$. Let $e_\sigma$ be the unity of the subring $L\otimes_{\sigma, F_\fp}K$. We shall need the notion of monodromy modules. This notion is introduced in \cite{Ma}. However we will use the slightly different definition given in \cite{IoSp}. Let $T$ be a finite-dimensional commutative semisimple $\BQ_p$-algebra. A $T$-object $D$ in the category of filtered $(\varphi_q,N)$-modules, is called a {\it $2$-dimensional monodromy $T$-module}, if the following hold: $\bullet$ $D$ is a free $T_{F_\fp}$-module of rank $2$ ($T_{F_\fp}=T\otimes_{\BQ_p}F_\fp$), $\bullet$ the sequence $D\xrightarrow{N} D \xrightarrow{N} D$ is exact, $\bullet$ there exists an integer $j_0$ such that ${\mathrm{Fil}}^{j_0} D$ is a free $T_{F_\fp}$-submodule of rank $1$ and ${\mathrm{Fil}}^{j_0} D \cap \ker(N)=0$. \begin{lem} $($\cite[Lemma 2.3]{IoSp}$)$ If $D$ is a monodromy $T$-module, then there exists a decomposition $D=D^{(1)}\oplus D^{(2)}$ where $D^{(1)}$ and $D^{(2)}$ are $\varphi_q$-stable free rank one $T_{F_\fp}$-submodules such that $N:D\rightarrow D$ induces an isomorphism $N|_{D^{(2)}}: D^{(2)}\xrightarrow{\sim} D^{(1)}$. \end{lem} Let $D$ be a monodromy $T$-module and let $j_0$ be as above. 
The {\it Fontaine-Mazur $L$-invariant} of $D$ is defined to be the unique element in $T_{F_\fp}$, denoted by $\CL_{FM}(D)$, such that $x-\CL_{FM}(D)N(x)\in {\mathrm{Fil}}^{j_0} D$ for every $x\in D^{(2)}$. What we are interested in is the case when $T$ is an $L$-algebra, where $L$ is a field splitting $F_\fp$. Note that we have a decomposition of $ T_{F_\fp} $: \begin{equation} \label{eq:decom} T_{F_\fp} \xrightarrow{\sim} \bigoplus_\sigma T_\sigma, \hskip 10pt t\otimes a\mapsto (\sigma(a)t)_\sigma, \end{equation} where $\sigma$ runs through all embeddings of $F_\fp$ in $L$. The index $\sigma$ in $T_\sigma$ indicates that $T$ is considered as an $F_\fp$-algebra via $\sigma$. Then we have a decomposition $D\simeq \bigoplus_\sigma D_\sigma$, where $D_\sigma=e_\sigma \cdot D$. Each $D_\sigma$ is stable under $\varphi_q$ and $N$. Note that, for every $j$, ${\mathrm{Fil}}^j D$ is a $T_{F_\fp}$-submodule. Thus the filtration on $D$ restricts to a filtration on $D_\sigma$ for each $\sigma$, and satisfies ${\mathrm{Fil}}^j D = \bigoplus_\sigma {\mathrm{Fil}}^j D_\sigma$ for all $j\in \BZ$. Using the decomposition (\ref{eq:decom}) we may write $\CL_{FM}(D)$ in the form $(\CL_{FM,\sigma}(D))_\sigma$. It is easy to see that $\CL_{FM,\sigma}(D)$ is the unique element in $T$ such that $x-\CL_{FM,\sigma}(D)N(x)\in {\mathrm{Fil}}^{j_0} D_\sigma$ for every $x\in D^{(2)}_\sigma$. We also call $(\CL_{FM,\sigma}(D))_\sigma$, a vector with values in $T$, the {\it Fontaine-Mazur $L$-invariant} of $D$. \subsection{Fontaine-Mazur $L$-invariant for Hilbert modular forms} \label{ss:FM-L} Let $\{\tau_1,\cdots, \tau_g\}$ be the set of real embeddings $F\hookrightarrow \BR$. Fix a multiweight $\mathrm{k}=(k_1,\cdots, k_g,w)\in \BN^{g+1}$ satisfying $k_i\geq 2$ and $k_i\equiv w\hskip 6pt \mathrm{mod} \hskip 3pt 2$.
Let $\pi=\otimes_v \pi_v$ be a cuspidal automorphic representation of $\GL(2,\BA_F)$ such that for each infinite place $\tau_i$, the $\tau_i$-component $\pi_{\tau_i}$ is a holomorphic discrete series representation $D_{k_i}$. Let $\fn$ be the level of $\pi$. Carayol \cite{Car2} attached to such an automorphic representation (under a further condition) an $\ell$-adic Galois representation, which will be recalled below. Let $L$ be a sufficiently large number field of finite degree over $\BQ$ such that the finite part $\pi^\infty=\otimes_{\fp\nmid \infty}\pi_\fp$ of $\pi$ admits an $L$-structure $\pi_L^\infty$. The fixed part $(\pi^\infty_L)^{K_1(\fn)}$ is of dimension $1$ and generated by an eigenform $f_\infty$. In this case we write $\pi_{f_\infty}$ for $\pi$. The local Langlands correspondence associates to every irreducible admissible representation $\pi$ of $\GL(2, F_\fp)$ defined over $L$ a $2$-dimensional $L$-rational Frobenius-semisimple representation $\sigma(\pi)$ of the Weil-Deligne group $WD(\overline{F}_\fp/F_\fp)$. Let $\check{\sigma}(\pi)$ denote the dual of $\sigma(\pi)$. For an $\ell$-adic representation $\rho$ of $\Gal(\overline{F}/F)$, let $\rho_\fp$ denote its restriction to $\Gal(\overline{F}_\fp/F_\fp)$, $'\hskip -3pt \rho_\fp$ the Weil-Deligne representation attached to $\rho_\fp$ and $'\hskip -3pt\rho_\fp^{\text{F-ss}}$ the Frobenius-semisimplification of $'\hskip -3pt\rho_\fp$. \begin{thm} \label{thm:car} \cite{Car2} Let $f_\infty$ be an eigenform of multiweight $k$ satisfying the following condition: \begin{quote} If $g=[F:\BQ]$ is even, then there exists a finite place $\fq$ such that the $\fq$-factor $\pi_{f_\infty, \fq}$ lies in the discrete series. 
\end{quote} Then for any prime number $\ell$ and a finite place $\lambda$ of $L$ above $\ell$, there exists a $\lambda$-adic representation $\rho=\rho_{f_\infty,\lambda}: \Gal(\overline{F}/F)\rightarrow \GL_{L_\lambda}(V_{f_\infty,\lambda})$ satisfying the following property: For any finite place $\fp\nmid \ell$ there is an isomorphism $$ '\hskip -3pt \rho_{f_\infty,\lambda, \fp}^{\mathrm{F}\text{-}\mathrm{ss}} \simeq \check{\sigma}(\pi_{f_\infty,\fp}) \otimes_L L_\lambda $$ of representations of the Weil-Deligne group $WD(\overline{F}_\fp/F_\fp)$. \end{thm} Saito \cite{Saito} showed that when $\fp|\ell$, $\rho_{f_\infty,\lambda, \fp}$ is potentially semistable. Now we assume that $\ell=p$, $\fp$ is the prime ideal of $F$ above $p$, and $L$ contains $F$. Let $\mathfrak{P}$ be a prime ideal of $L$ above $\fp$. \begin{thm} $($=Theorem \ref{thm:semistable}$)$ \label{thm:mono-mod} Let $f_\infty$ be as in Theorem \ref{thm:car}, $\ell=p$ and $\lambda=\mathfrak{P}$. If $f_\infty$ is new at $\fp$ (when $[F:\BQ]$ is odd, we demand that $f_\infty$ is new at another prime ideal), then $\rho_{f_\infty, \mathfrak{P}, \fp}$ is a semistable $($non-crystalline$)$ representation of $\Gal(\overline{F}_\fp/F_\fp)$, and the filtered $(\varphi_q,N)$-module $D_{{\mathrm{st}}, F_\fp}(\rho_{f_\infty,\mathfrak{P},\fp})$ is a monodromy $L_\mathfrak{P}$-module. \end{thm} \begin{rem} The conditions in Theorem \ref{thm:car} and Theorem \ref{thm:mono-mod} are used to ensure that via the Jacquet-Langlands correspondence $f_\infty$ corresponds to a modular form on the Shimura curve $M$ associated to a quaternion algebra $B$ that splits at exactly one real place; in Theorem \ref{thm:mono-mod} the quaternion algebra $B$ is demanded to be ramified at $\fp$. See Section \ref{ss:shimura-curves} for the construction of $M$. \end{rem} Thus $D_{{\mathrm{st}}, F_\fp}(\rho_{f_\infty,\mathfrak{P},\fp})$ is associated with the Fontaine-Mazur $L$-invariant. 
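\begin{rem}
To orient the reader, we sketch the expected shape of this monodromy module; the normalization of $\varphi_q$ below is only illustrative, as it depends on the choice of conventions. Write $D=D_{{\mathrm{st}}, F_\fp}(\rho_{f_\infty,\mathfrak{P},\fp})$ and $T=L_\mathfrak{P}$. Using the decomposition $D=D^{(1)}\oplus D^{(2)}$ of Section \ref{sec:mono-fontaine-mazur}, one may choose a basis $e_1, e_2$ of $D$ over $T_{F_\fp}$ such that
$$ N(e_2)=e_1, \hskip 10pt N(e_1)=0, \hskip 10pt \varphi_q(e_1)=\alpha e_1, \hskip 10pt \varphi_q(e_2)=q\alpha e_2 $$
for some invertible $\alpha\in T_{F_\fp}$; the eigenvalue $q\alpha$ reflects the relation $N\varphi_q=q\varphi_q N$. The condition ${\mathrm{Fil}}^{j_0} D\cap \ker(N)=0$ then forces ${\mathrm{Fil}}^{j_0} D$ to be generated by a vector of the form $e_2+c\,e_1$ with $c\in T_{F_\fp}$, and unwinding the definition of Section \ref{sec:mono-fontaine-mazur} gives
$$ {\mathrm{Fil}}^{j_0} D = T_{F_\fp}\cdot\big(e_2-\CL_{FM}(D)e_1\big), $$
i.e. $\CL_{FM}(D)=-c$.
\end{rem}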
We define the {\it Fontaine-Mazur $L$-invariant} of $f_\infty$, denoted by $\CL_{FM}(f_\infty)$, to be that of $D_{{\mathrm{st}},F_\fp}(\rho_{f_\infty,\mathfrak{P},\fp})$. \section{Local systems and the associated filtered $\varphi_q$-isocrystals} \label{sec:CoIo} Let $X$ be a $p$-adic formal $\mathscr{O}_{F_\fp}$-scheme. Suppose that $X$ is analytically smooth over $\mathscr{O}_{F_\fp}$, i.e. the generic fiber $X^\mathrm{an}$ of $X$ is smooth. The filtered convergent $\varphi$-isocrystals attached to local systems are studied in \cite{Faltings, CI}. It is more convenient for us to compute the filtered convergent $\varphi_q$-isocrystals attached to the local systems that we will be interested in. From now on, we will omit ``convergent'' from the terminology. A filtered $\varphi_q$-isocrystal is a natural analogue of a filtered $\varphi$-isocrystal. To define it one needs the notion of $F_{\fp}$-enlargement. An {\it $F_{\fp}$-enlargement} of $X$ is a pair $(T, x_T)$ consisting of a flat formal $\mathscr{O}_{F_\fp}$-scheme $T$ and a morphism of formal $\mathscr{O}_{F_\fp}$-schemes $x_T: T_0\rightarrow X$, where $T_0$ is the reduced closed subscheme of $T$ defined by the ideal $\pi\mathscr{O}_T$. An {\it isocrystal} $\mathscr{E}$ on $X$ consists of the following data: $\bullet$ for every $F_\fp$-enlargement $(T,x_T)$ a coherent $\mathscr{O}_T\otimes_{\mathscr{O}_{F_\fp}}F_\fp$-module $\mathscr{E}_T$, $\bullet$ for every morphism of $F_\fp$-enlargements $g: (T', x_{T'})\rightarrow (T, x_T)$ an isomorphism of \hskip 12pt $\mathscr{O}_{T'}\otimes_{\mathscr{O}_{F_\fp}}F_\fp$-modules $\theta_g: g^* (\mathscr{E}_T)\rightarrow \mathscr{E}_{T'}$. \noindent The collection of isomorphisms $\{\theta_g\}$ is required to satisfy a certain cocycle condition. If $T$ is an $F_\fp$-enlargement of $X$, then $\mathscr{E}_T$ may be interpreted as a coherent sheaf $E_T^\mathrm{an}$ on the rigid space $T^\mathrm{an}$.
As $X$ is analytically smooth over $\mathscr{O}_{F_\fp}$, there is a natural integrable connection $$ \nabla_X: E_X^\mathrm{an}\rightarrow E_X^\mathrm{an}\otimes \Omega^1_{X^\mathrm{an}} . $$ Note that an isocrystal on $X$ depends only on $X_0$. Let $\varphi_q$ denote the absolute $q$-Frobenius of $X_0$. A {\it $\varphi_q$-isocrystal} on $X$ is an isocrystal $\mathscr{E}$ on $X$ together with an isomorphism of isocrystals $\varphi_q: \varphi_q^*\mathscr{E}\rightarrow \mathscr{E}$. A {\it filtered $\varphi_q$-isocrystal} on $X$ is a $\varphi_q$-isocrystal $\mathscr{E}$ with a descending $\BZ$-filtration on $E_X^\mathrm{an}$. The following well known result compares the de Rham cohomology of a filtered $\varphi_q$-isocrystal and the \'etale cohomology of the $\BQ_p$-local system associated to it. \begin{prop}\label{prop:semistable} \cite[Theorem 3.2]{Faltings} Suppose that $X$ is a semistable proper curve over $\mathscr{O}_{F_\fp}$. Let $\mathscr{E}$ be a filtered $\varphi_q$-isocrystal over $X$ and $\CE$ be a $\BQ_p$-local system over $X_{\overline{F}_\fp}$ that are attached to each other. Then the Galois representation $H^i_\mathrm{et}(X_{\overline{F}_\fp}, \CE)$ of $G_{F_\fp}$ is semistable, and the filtered $(\varphi_q,N)$-module $D_{{\mathrm{st}}, F_\fp}(H^i_{\mathrm{et}}(X_{\overline{F}_\fp}, \CE))$ is isomorphic to $H^i_{\mathrm{dR}}(X^{\mathrm{an}}, \mathscr{E})$. \end{prop} Coleman and Iovita \cite{CI} gave a precise description of the monodromy $N$ on $H^1_{\mathrm{dR}}(X^{\mathrm{an}}, \mathscr{E})$. Now let $X$ be a connected, smooth and proper curve over $F_\fp$ with a regular semistable model $\CX$ over $\mathscr{O}_{F_\fp}$ such that all irreducible components of its special fiber $\overline{\CX}$ are smooth. For a subset $U$ of $\overline{\CX}$ let $]U[$ denote the tube of $U$ in $X^\mathrm{an}$. We associate to $\overline{\CX}$ a graph $\mathrm{Gr}(\overline{\CX})$. 
Let $\mathrm{n}: \overline{\CX}^{\mathrm{n}}\rightarrow \overline{\CX}$ be the normalization of $\overline{\CX}$. The vertices $\mathrm{V}(\overline{\CX})$ of $\mathrm{Gr}(\overline{\CX})$ are the irreducible components of $\overline{\CX}$. For every vertex $v$ let $C_v$ be the irreducible component corresponding to $v$. The edges $\mathrm{E}(\overline{\CX})$ of $\mathrm{Gr}(\overline{\CX})$ are ordered pairs $\{x,y\}$ where $x$ and $y$ are two different liftings in $\overline{\CX}^\mathrm{n}$ of a singular point. Let $\tau$ be the involution on $\mathrm{E}(\overline{\CX})$ such that $\tau\{x,y\}=\{y,x\}$. Below, for a module $M$ on which $\tau$ acts, set $M^\pm=\{m\in M: \tau(m)=\pm m\}$. Let $\mathscr{E}$ be a filtered $\varphi_q$-isocrystal over $X$. For any $e=\{x,y\}\in\mathrm{E}(\overline{\CX})$ let $H^i_\dR(]e[, \mathscr{E})$ denote $H^i_\dR(]\mathrm{n}(x)[,\mathscr{E})$. Then $\tau$ exchanges $H^i_\dR(]e[,\mathscr{E})$ and $H^i_\dR(]\bar{e}[,\mathscr{E})$ where $\bar{e}=\{y,x\}$. Note that $\{]C_v[\}_{v\in \mathrm{V}(\overline{\CX})}$ is an admissible covering of $X^\mathrm{an}$. From the Mayer-Vietoris exact sequence with respect to this admissible covering we obtain the following short exact sequence $$\label{eq:exact-sq} \small \xymatrix{ 0 \ar[r] & (\bigoplus_{e\in \mathrm{E}(\overline{\CX})}H^0_\dR(]e[,\mathscr{E}))^-/ \bigoplus_{v\in \mathrm{V}(\overline{\CX})}H^0_\dR(]C_v[,\mathscr{E}) \ar[r]^{\hskip 70pt \iota} & H^1_\dR(X^\mathrm{an}, \mathscr{E}) \\ \ar[r] & \ker\Big(\bigoplus_{v\in \mathrm{V}(\overline{\CX})}H^1_\dR(]C_v[, \mathscr{E}) \rightarrow (\bigoplus_{e\in \mathrm{E}(\overline{\CX})} H^1_\dR(]e[,\mathscr{E}))^+\Big) \ar[r] & 0.}$$ For any $e\in \mathrm{E}(\overline{\CX}) $ there is a natural residue map ${\mathrm{Res}}_e: H^1_\dR(]e[, \mathscr{E})\rightarrow H^0_\dR(]e[, \mathscr{E})$ \cite[Section 4.1]{CI}. 
These residue maps induce a map $$ \bigoplus_{e\in\mathrm{E}(\overline{\CX})}{\mathrm{Res}}_e: \hskip 10pt \Big(\bigoplus_{e\in\mathrm{E}(\overline{\CX})} H^1_\dR(]e[,\mathscr{E})\Big)^+ \rightarrow \Big(\bigoplus_{e\in\mathrm{E}(\overline{\CX})} H^0_\dR(]e[,\mathscr{E})\Big)^-. $$ \begin{prop} \label{prop:monodromy} \cite[Theorem 2.6, Remark 2.7]{CI} The monodromy operator $N$ on $H^1_\dR(X^\mathrm{an}, \mathscr{E})$ coincides with the composition $$ \iota \circ (\bigoplus_{e\in \mathrm{E}(\overline{\CX})} {\mathrm{Res}}_e) \circ \Big(H^1_\dR(X^\mathrm{an}, \mathscr{E}) \rightarrow (\bigoplus_{e\in\mathrm{E}(\overline{\CX})} H^1_\dR(]e[,\mathscr{E}))^+\Big) $$ where $H^1_\dR(X^\mathrm{an}, \mathscr{E})\rightarrow \Big(\bigoplus_{e\in\mathrm{E}(\overline{\CX})} H^1_\dR(]e[,\mathscr{E})\Big)^+$ is the restriction map. \end{prop} \section{The universal special formal module} \label{sec:univ-spec-mod} \subsection{Special formal modules and Drinfeld's moduli theorem} Let $B_\fp$ be the quaternion algebra over $F_\fp$ with invariant $1/2$. So $B_\fp$ is isomorphic to $F_\fp^{(2)}[{\it \Pi}]$; ${\it \Pi}^2=\pi$ and ${\it \Pi} a=\bar{a} {\it \Pi}$ for all $a\in F^{(2)}_\fp$. Here, $\pi$ is a fixed uniformizer of $F_\fp$, $F_\fp^{(2)}$ is the unramified extension of $F_\fp$ of degree $2$, and $a\mapsto \bar{a}$ denotes the nontrivial $F_\fp$-automorphism of $F^{(2)}_\fp$. Let $\mathscr{O}_{B_\fp}$ be the ring of integers in $B_\fp$. Let $F_{\fp 0}$ be the maximal absolutely unramified subfield of $F_\fp$, $k$ the residue field of $F_\fp$, and $F_{\fp 0}^{(2)}$ the unramified extension of $F_{\fp 0}$ of degree $2$. Let $\mathscr{O}^\ur$ denote the maximal unramified extension of $\mathscr{O}_{F_\fp}$, $\widehat{\mathscr{O}^\ur}$ its $\pi$-adic completion. Fix an algebraic closure $\bar{k}$ of $k$. We identify $\widehat{\mathscr{O}^\ur}/\pi\widehat{\mathscr{O}^\ur}$ with $\bar{k}$. 
Then $\mathrm{W}(\bar{k})\otimes_{\mathscr{O}_{F_{\fp 0}}}\mathscr{O}_{F_\fp}\cong \widehat{\mathscr{O}^\ur}$. Let $\widehat{F_\fp^\ur}$ be the fraction field of $\widehat{\mathscr{O}^\ur}$. We use the notion of special formal $\mathscr{O}_{B_\fp}$-module from \cite{Drin}. First we fix a special formal $\mathscr{O}_{B_\fp}$-module over $\bar{k}$, $\Phi$, as in \cite[(3.54)]{RZ}. Let $\iota$ denote the natural embedding of $F_{\fp 0}$ into $\mathrm{W}(\bar{k})[1/p]$. Then all embeddings of $F_{\fp 0}$ into $\mathrm{W}(\bar{k})[1/p]$ are $\varphi^{j}\circ \iota$ ($0\leq j\leq v_p(q)-1$). We have the decomposition $$ \mathscr{O}_{B_\fp} \otimes_{\BZ_p} \mathrm{W}(\bar{k}) = \prod_{j=0}^{v_p(q)-1} \mathscr{O}_{B_\fp} \otimes_{\mathscr{O}_{F_{\fp 0}}, \varphi^{j}\circ \iota} \mathrm{W}(\bar{k}) . $$ Let $u\in \mathscr{O}_{B_\fp} \otimes_{\BZ_p} \mathrm{W}(\bar{k}) $ be the element whose $\varphi^j\circ \iota$-component with respect to this decomposition is $$ u_{\varphi^j\circ \iota} = \left\{\begin{array}{ll} {\it \Pi} \otimes 1 & \text{ if } j=0 , \\ 1 \otimes 1 & \text{ if } j=1, \dots, v_p(q)-1. \end{array}\right.$$ Let $\widetilde{\mathrm{F}}$ be the $1\otimes\varphi$-semilinear operator on $\mathscr{O}_{B_\fp}\otimes_{\BZ_p}\mathrm{W}(\bar{k})$ defined by $$\widetilde{\mathrm{F}}x = (1\otimes \varphi)x \cdot u, \hskip 10pt x \in \mathscr{O}_{B_\fp}\otimes_{\BZ_p}\mathrm{W}(\bar{k}). $$ Let $\widetilde{\mathrm{V}}$ be the $1\otimes \varphi^{-1}$-semilinear operator on $\mathscr{O}_{B_\fp}\otimes_{\BZ_p}\mathrm{W}(\bar{k})$ such that $\widetilde{\mathrm{F}}\widetilde{\mathrm{V}}=p$. Then $$ (\mathscr{O}_{B_\fp}\otimes_{\BZ_p}\mathrm{W}(\bar{k}), \widetilde{\mathrm{V}}, \widetilde{\mathrm{F}})$$ is a Dieudonne module over $\mathrm{W}(\bar{k})$ with an action of $\mathscr{O}_{B_\fp}$ by left multiplication. 
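One checks directly from the definition of $u$ that the $v_p(q)$-th iterate of $\widetilde{\mathrm{F}}$ has a simple closed form: the $\varphi^m$-twists $(1\otimes\varphi^m)(u)$, $0\leq m\leq v_p(q)-1$, have their unique ${\it \Pi}$-component at pairwise distinct factors of the above decomposition, so that $$ \widetilde{\mathrm{F}}^{\,v_p(q)}x=(1\otimes \varphi^{v_p(q)})(x)\cdot \prod_{m=0}^{v_p(q)-1}(1\otimes\varphi^{m})(u), \hskip 10pt \prod_{m=0}^{v_p(q)-1}(1\otimes\varphi^{m})(u)={\it \Pi}\otimes 1 . $$ In particular, $\widetilde{\mathrm{F}}^{\,v_p(q)}$ is $(1\otimes\varphi^{v_p(q)})$-semilinear and is given by right multiplication by ${\it \Pi}\otimes 1$.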
Let $\Phi$ be the special formal $\mathscr{O}_{B_\fp}$-module over $\bar{k}$ whose contravariant Dieudonne crystal is $(\mathscr{O}_{B_\fp}\otimes_{\BZ_p}\mathrm{W}(\bar{k}), \widetilde{\mathrm{V}}, \widetilde{\mathrm{F}})$. \footnote{The Dieudonne crystal in \cite[(3.54)]{RZ} is the covariant Dieudonne crystal of $\Phi$. The duality between our contravariant Dieudonne crystal and the covariant Dieudonne crystal is induced by the trace map $$<\cdot, \cdot>:\mathscr{O}_{B_\fp}\times \mathscr{O}_{B_\fp}\rightarrow \BZ_p, (x,y)\mapsto \tr_{F_\fp/\BQ_p} \Big(\delta_{F_\fp/\BQ_p}^{-1} \tr_{B_\fp/F_\fp} (x y^t)\Big), $$ where $\delta_{F_\fp/\BQ_p}$ is the different of $F_\fp$ over $\BQ_p$, $\tr_{B_\fp/F_\fp}$ is the reduced trace map, and $y\mapsto y^t$ is the involution of $B_\fp$ such that ${\it \Pi}^t={\it \Pi}$ and $a^t=\bar{a}$ if $a\in F_\fp^{(2)}$. Then we have $< b \cdot x , y>=<x, b^t \cdot y>$ for any $b\in \mathscr{O}_{B_\fp}$. } Let $\iota_{0}$ and $\iota_1$ be the extensions of $\iota$ to $F_{\fp 0}^{(2)}$. Then $$ \varphi^{j}\iota_0, \ \varphi^j \iota_1 \hskip 10pt (0\leq j\leq v_p(q)-1)$$ are all embeddings of $F_{\fp 0}^{(2)}$ into $\mathrm{W}(\bar{k})[1/p]$. We have $$ \mathscr{O}_{B_\fp} \otimes_{\BZ_p} \mathrm{W}(\bar{k}) = \prod_{j=0}^{v_p(q)-1} \mathscr{O}_{B_\fp} \otimes_{\mathscr{O}_{F^{(2)}_{\fp0}}, \varphi^j\circ \iota_0} \mathrm{W}(\bar{k}) \times \prod_{j=0}^{v_p(q)-1} \mathscr{O}_{B_\fp} \otimes_{\mathscr{O}_{F^{(2)}_{\fp0}}, \varphi^j\circ \iota_1} \mathrm{W}(\bar{k}), $$ where $\mathscr{O}_{B_\fp}$ is considered as an $\mathscr{O}_{F_\fp^{(2)}}$-module by the left multiplication. Let $X$ be the element of $\mathscr{O}_{B_\fp} \otimes_{\BZ_p} \mathrm{W}(\bar{k})$ whose $\varphi^j\circ \iota_0$-component ($0\leq j\leq v_p(q) -1$) is $1\otimes 1$ and whose $\varphi^j\circ \iota_1$-component ($0\leq j\leq v_p(q) -1$) is ${\it \Pi}\otimes 1$. 
Similarly, let $Y$ be the element whose $\varphi^j\circ \iota_0$-component ($0\leq j\leq v_p(q)-1$) is ${\it \Pi} \otimes 1$ and whose $\varphi^j\circ \iota_1$-component ($0\leq j\leq v_p(q)-1$) is $\pi \otimes 1$. Then $\{X,Y\}$ is a basis of $\mathscr{O}_{B_\fp} \otimes_{\BZ_p} \mathrm{W}(\bar{k}) $ over $\mathscr{O}_{F_\fp^{(2)}}\otimes _{\BZ_p} \mathrm{W}(\bar{k})$. Note that $\GL(2,F_\fp)\cong(\End^0_{\mathscr{O}_{B_\fp}}\Phi)^\times$ \cite[Lemma 3.60]{RZ}. We normalize the isomorphism such that the action on the $\varphi$-module $$(\mathscr{O}_{B_\fp}\otimes_{\BZ_p}\mathrm{W}(\bar{k}), \widetilde{\mathrm{F}})[1/p]=(B_\fp\otimes_{\BQ_p}\mathrm{W}(\bar{k})[1/p], \widetilde{\mathrm{F}})$$ is given by $\wvec{a}{b}{c}{d}X= (a\otimes 1) X+ (c\otimes 1) Y$ and $\wvec{a}{b}{c}{d}Y= (b\otimes 1) X+ (d\otimes 1) Y$. Let $\widetilde{\mathrm{D}}_0$ denote the $\varphi_q$-module $$(B_\fp\otimes_{\BQ_p}\widehat{F_\fp^\ur}, \widetilde{\mathrm{F}}^{v_p(q)})$$ coming from the $\varphi$-module $(\mathscr{O}_{B_\fp}\otimes_{\BZ_p}\mathrm{W}(\bar{k}), \widetilde{\mathrm{F}})[1/p]$. We describe Drinfeld's moduli problem. Let $\mathrm{Nilp}$ be the category of $\widehat{\mathscr{O}^{\ur}}$-algebras on which $\pi$ is nilpotent. For any $A\in \mathrm{Nilp}$, let $\psi$ be the natural homomorphism $\bar{k}\rightarrow A/\pi A$; let $\mathrm{SFM}(A)$ be the set of pairs $(G, \rho)$ where $G$ is a special formal $\mathscr{O}_{B_\fp}$-module over $A$ and $\rho: \Phi_{A/\pi A}=\psi_* \Phi \rightarrow G$ is a quasi-isogeny of height zero. We state a part of Drinfeld's theorem \cite{Drin} as follows. Let $\CH$ be the Drinfeld upper half plane over $F_\fp$, i.e. the rigid analytic $F_\fp$-variety whose $\BC_p$-points are $\BC_p\setminus F_\fp$. \begin{thm} The functor $\mathrm{SFM}$ is represented by a formal scheme $\widehat{\CH} \hat{\otimes} \widehat{\mathscr{O}^{\ur}}$ over $\widehat{\mathscr{O}^{\ur}}$ whose generic fiber is $\CH_{\widehat{F_\fp^\ur}}=\CH \hat{\otimes} \widehat{F_\fp^{\ur}}$. 
\end{thm} Let $\CG$ be the universal special formal $\mathscr{O}_{B_\fp}$-module over $\widehat{\CH} \hat{\otimes} \widehat{\mathscr{O}^{\ur}}$. There is an action of $\GL(2,F_\fp)$ on $\CG$ (see \cite[Chapter II (9.2)]{BC}): The group $\GL(2,F_\fp)$ acts on the functor $\mathrm{SFM}$ by $g\cdot (\psi; G, \rho)=(\psi\circ \mathrm{Frob}^{-n}; G, \rho\circ \psi_*(g^{-1}\circ \mathrm{Frob}^{n}))$ if $v_\fp(\det g)=n$. Here, $v_\fp$ is the valuation of $\BC_p$ normalized such that $v_\fp(\pi)=1$. \subsection{The filtered $\varphi_q$-isocrystal attached to the universal special formal module} \label{ss:univ-fil-phi-mod} It is rather difficult to describe $\CG$ precisely. \footnote{See \cite{Teit89} for some information about $\CG$ and \cite{Xie} for a higher rank analogue.} However, we can determine the associated (contravariant) filtered $\varphi_q$-isocrystal $\CM$. In the following, we write $\mathscr{O}_{\CH, \widehat{F_\fp^\ur}}$ for $\mathscr{O}_{\CH \hat{\otimes} \widehat{F_\fp^\ur}}$ and $\Omega_{\CH, \widehat{F_\fp^\ur}}$ for the differential sheaf $\Omega_{\CH \hat{\otimes} \widehat{F_\fp^{\ur}}}$. As is observed in \cite{Faltings} and \cite{RZ}, except for the filtration, the $\varphi_q$-isocrystal $\CM$ is constant. So it is naturally isomorphic to the $\varphi_q$-isocrystal $$\widetilde{\mathrm{D}}_0\otimes_{\widehat{F_\fp^\ur}} \mathscr{O}_{\CH, \widehat{F_\fp^\ur}}$$ with the $q$-Frobenius being $\mathrm{F}^{v_p(q)}\otimes \varphi_{q,\CH_{\widehat{F_\fp^\ur}}}$ and the connection being $$1\otimes \mathrm{d}: \widetilde{\mathrm{D}}_0\otimes_{\widehat{F_\fp^{\ur}}}\mathscr{O}_{\CH, \widehat{F_\fp^\ur}}\rightarrow \widetilde{\mathrm{D}}_0\otimes_{\widehat{F_\fp^{\ur}}}\Omega_{\CH, \widehat{F_\fp^\ur}}.$$ Next we determine the filtration on $\widetilde{\mathrm{D}}_0 \otimes_{\widehat{F_\fp^\ur}} \mathscr{O}_{\CH, \widehat{ F_\fp^\ur }}$. 
For any $F_\fp$-algebras $K$ and $L$, $L\otimes_{\BQ_p}K$ is isomorphic to $L\otimes_{F_\fp}K \oplus (L\otimes_{\BQ_p}K)_{\mathrm{non}}$, where $(L\otimes_{\BQ_p}K)_{\mathrm{non}}$ is the kernel of the natural projection $L\otimes_{\BQ_p}K\rightarrow L\otimes_{F_\fp}K$, $\ell\otimes a\mapsto \ell\otimes a$. If $L$ is a field extension of $F_\fp$ containing the images of all $\BQ_p$-embeddings of $F_\fp$, then $L\otimes_{\BQ_p}K=\bigoplus_{\tau: F_\fp\hookrightarrow L}L\otimes_{\tau, F_\fp} K$ and $(L\otimes_{\BQ_p}K)_{\mathrm{non}}$ corresponds to the non-canonical embeddings. We apply this to $L=F_\fp$ and $K=\widehat{F^\ur_\fp}$; consider $\widetilde{\mathrm{D}}_0=B_\fp\otimes_{\BQ_p}\widehat{F^\ur_\fp}$ as an $F_\fp\otimes_{\BQ_p}\widehat{F^\ur_\fp}$-module. Then $\widetilde{\mathrm{D}}_0$ splits into two parts: one is the canonical part which corresponds to the natural embedding $\mathrm{id}: F_\fp\hookrightarrow F_\fp$, and the other is the non-canonical part. Correspondingly, $ \widetilde{\mathrm{D}}_0 \otimes_{\widehat{F_\fp^\ur}} \mathscr{O}_{\CH, \widehat{ F_\fp^\ur }} $ splits into two parts, the canonical part $B_\fp\otimes_{F_\fp}\mathscr{O}_{\CH, \widehat{ F_\fp^\ur }}$ and the non-canonical part. Since $F_\fp$ acts on the Lie algebra of any special formal $\mathscr{O}_{B_\fp}$-module through the natural embedding, the filtration on the non-canonical part is trivial. The filtration on the canonical part is closely related to the period morphism \cite{Faltings, RZ}. Let us recall the definition of the period morphism \cite[Section 2.2]{Xie}. We will use the notation of \cite{Xie}. Let $M(\Phi)$ be the Cartier module of $\Phi$, a $\mathbb{Z}/2\mathbb{Z}$-graded module. The $\mathbb{Z}/2\mathbb{Z}$-grading depends on a choice of $F_\fp$-embedding of $F_\fp^{(2)}$ into $\widehat{F_\fp^{\ur}}$. We choose the one, $\tilde{\iota}_0$, that restricts to $\iota_0$, and denote the other $F_\fp$-embedding by $\tilde{\iota}_{1}$. 
We fix a graded $\mathrm{V}$-basis $\{g^0, g^1\}$ of $M(\Phi)$ such that $\mathrm{V}g^0={\it \Pi} g^0$ and $\mathrm{V} g^1={\it \Pi} g^1$. Then $\{g^0, g^1, \mathrm{V} g^0, \mathrm{V} g^1 \}$ is a basis of $M(\Phi)[1/p]$ over $\widehat{F^\ur_\fp}$; $F_\fp^{(2)}\subset B_\fp$ acts on $\widehat{F^\ur_\fp} g^0 \oplus \widehat{F^\ur_\fp} \mathrm{V} g^1$ by $\tilde{\iota}_{0}$, and acts on $\widehat{F^\ur_\fp} \mathrm{V} g^0 \oplus \widehat{F^\ur_\fp} g^1$ by $\tilde{\iota}_{1}$. See \cite{Drin} for the definition of the Cartier module and the meaning of a graded $\mathrm{V}$-basis. See \cite[(3.55)]{RZ} for the relation between $M(\Phi)$ and the covariant Dieudonne module attached to $\Phi$. In loc. cit. the Cartier module is called a $\tau$-$W_F(L)$-crystal. Let $R$ be any $\pi$-adically complete $\widehat{\mathscr{O}^{\ur}}$-algebra. Drinfeld constructed for each $(\psi; G, \rho)\in \mathrm{SFM}(R)$ a quadruple $(\eta, T, u, \rho)$. Let $M=M(G)$ be the Cartier module of $G$, $N(M)$ the auxiliary module that is the quotient of $M\oplus M$ by the submodule generated by elements of the form $(\mathrm{V}x, -{\it \Pi} x)$ and $\beta_M$ the quotient map $M\oplus M\rightarrow N(M)$. For $(x_0, x_1)\in M\oplus M$, we write $((x_0, x_1))$ for $\beta_M(x_0,x_1)$. Then we have a map $\varphi_M: N(M)\rightarrow N(M)$. See \cite[Definition 4]{Xie} for its definition. Put $$\eta_M:=N(M)^{\varphi_M}, \hskip 10pt T_M:=M/\mathrm{V}M;$$ both $\eta_M$ and $T_M$ are $\mathbb{Z}/2\mathbb{Z}$-graded. Let $u_M:\eta_M\rightarrow T_M$ be the $\mathscr{O}_{F_\fp}[{\it \Pi}]$-linear map of degree $0$ that is the composition of the inclusion $\eta_M \hookrightarrow N(M)$ and the map $$ N(M)\rightarrow M/\mathrm{V}M, \hskip 15pt ((x_0, x_1))\mapsto x_0 \ \mathrm{mod} \ \mathrm{V}M. 
$$ Then $\eta_{M(\Phi)}$ is a free $\mathscr{O}_{F_\fp}$-module of rank $4$ with a basis $$\{((g^0, 0)), \ ((g^1,0)) ,((\mathrm{V}g^0, 0)), ((\mathrm{V}g^1, 0))\},$$ where $((g^0, 0))$, $((\mathrm{V}g^1,0))$ are in degree $0$, and $((g^1, 0))$, $((\mathrm{V}g^0,0))$ are in degree $1$. The quasi-isogeny $\rho: \psi_*\Phi\rightarrow G_{R/\pi R}$ induces an isomorphism $$ \rho: \eta^0_{M(\Phi)} \otimes_{\mathscr{O}_{F_\fp}}F_\fp \xrightarrow{\sim} \eta^0_{M(G)} \otimes_{\mathscr{O}_{F_\fp}}F_\fp .$$ Then the period of $(G, \rho)$ is defined by \begin{equation}\label{eq:period} z(G,\rho) = \frac{u'_M\circ \rho((\mathrm{V}g^1,0))}{u'_M\circ \rho((g^0,0))}, \end{equation} where $u'_M$ is the map $\eta^0_{M(G)}\otimes_{\mathscr{O}_{F_\fp}} F_\fp\rightarrow T_M^0\otimes_R R[1/p]$ induced by $u_M$. Note that considered as a $\varphi_q$-module, $M(\Phi)[1/p]$ is the dual of $B_\fp\otimes_{F_{\fp}} \widehat{F^\ur_\fp}$, the canonical part of $\widetilde{\mathrm{D}}_0$. Let $\{v_0, v_1, v_2, v_3\}$ be the basis of $B_\fp\otimes_{F_{\fp}} \widehat{F^\ur_\fp}$ over $\widehat{F^\ur_\fp}$ dual to $\{\pi g^1, g^0, \mathrm{V} g^0 , \mathrm{V} g^1 \}$. Then \begin{eqnarray*} {\mathrm{Fil}}^0 B_\fp\otimes_{F_\fp}\mathscr{O}_{\CH, \widehat{ F_\fp^\ur }} &=& B_\fp\otimes_{F_\fp}\mathscr{O}_{\CH, \widehat{ F_\fp^\ur }} \\ {\mathrm{Fil}}^1 B_\fp\otimes_{F_\fp} \mathscr{O}_{\CH, \widehat{ F_\fp^\ur }} &=& \text{the } \mathscr{O}_{\CH, \widehat{ F_\fp^\ur }}\text{-submodule generated by } \\ && \hskip 20pt \widehat{F^\ur_\fp}\cdot(v_1 + zv_3) \oplus \widehat{F^\ur_\fp}\cdot(z v_0 + v_2) \\ {\mathrm{Fil}}^2 B_\fp\otimes_{F_\fp} \mathscr{O}_{\CH, \widehat{ F_\fp^\ur }} &=& 0. \end{eqnarray*} Here $z$ is the canonical coordinate on $\CH_{\widehat{F_\fp^\ur}}$. 
We decompose $B_\fp\otimes_{F_\fp}\widehat{F^\ur_\fp}$ into two direct summands: $$ B_\fp\otimes_{F_\fp}\widehat{F^\ur_\fp} = B_\fp\otimes_{F_\fp^{(2)}, \ \tilde{\iota}_{0}}\widehat{F^\ur_\fp} \oplus B_\fp\otimes_{F_\fp^{(2)}, \ \tilde{\iota}_{1}} \widehat{F^\ur_\fp}, $$ where $B_\fp$ is considered as an $F_\fp^{(2)}$-module by left multiplication. Let $e_0$ and $e_1$ denote the projection to the first summand and that to the second, respectively. We may choose $g^0, g^1$ such that $v_0=e_0 X , \ v_1= e_1 Y, \ v_2= e_0 Y , \ v_3= e_1 X$. Thus \begin{eqnarray*} {\mathrm{Fil}}^0 B_\fp\otimes_{F_\fp}\mathscr{O}_{\CH, \widehat{ F_\fp^\ur }} &=& B_\fp\otimes_{F_\fp}\mathscr{O}_{\CH, \widehat{ F_\fp^\ur }} \\ {\mathrm{Fil}}^1 B_\fp\otimes_{F_\fp} \mathscr{O}_{\CH, \widehat{ F_\fp^\ur }} &=& \text{the } F_\fp^{(2)}\otimes_{F_\fp}\mathscr{O}_{\CH, \widehat{ F_\fp^\ur }}\text{-submodule generated by } zX + Y \\ {\mathrm{Fil}}^2 B_\fp\otimes_{F_\fp} \mathscr{O}_{\CH, \widehat{ F_\fp^\ur }} &=& 0. \end{eqnarray*} Finally we note that the induced action of $\mathrm{GL}(2, F_\fp)$ on $\CH$ is given by $\wvec{a}{b}{c}{d} z = \frac{az+b}{cz+d}$. Indeed, with the normalization of the action fixed above, $g=\wvec{a}{b}{c}{d}$ sends $zX+Y$ to $(az+b)X+(cz+d)Y=(cz+d)\big(\frac{az+b}{cz+d}X+Y\big)$, so it carries the line generating ${\mathrm{Fil}}^1$ at $z$ to the one at $\frac{az+b}{cz+d}$. \section{Shimura curves} \label{sec:sh-curves} Fix a real place $\tau_1$ of $F$. Let $B$ be a quaternion algebra over $F$ that splits at $\tau_1$ and is ramified at the other real places $\{\tau_2,\cdots, \tau_g\}$ and at $\fp$. \subsection{Shimura curves $M$, $M'$ and $M''$} \label{ss:shimura-curves} We will use three Shimura curves studied by Carayol \cite{Car} and recall their constructions below (see also \cite{Saito}). Let $G$ be the reductive algebraic group over $\BQ$ such that $G(R)=(B\otimes R)^\times$ for any $\BQ$-algebra $R$. Let $Z$ be the center of $G$; it is isomorphic to $T={\mathrm{Res}}_{F/\BQ}\BG_m$. Let $\nu:G\rightarrow T$ be the morphism obtained from the reduced norm $\mathrm{Nrd}_{B/F}$ of $B$. 
The kernel of $\nu$ is $G^{\mathrm{der}}$, the derived group of $G$, and thus we have a short exact sequence of algebraic groups $$\xymatrix{ 1 \ar[r] & G^\mathrm{der} \ar[r] & G \ar[r]^{\nu} & T\ar[r] & 1. }$$ Let $X$ be the $G(\BR)$-conjugacy class of the homomorphism \begin{eqnarray*} h: \hskip 50pt \BC^\times & \rightarrow & G(\BR) =\GL_2(\BR)\times \BH^\times \times \cdots \times \BH^\times \\ z= x+\sqrt{-1}y & \mapsto & \hskip 45pt \left(\wvec{x}{y}{-y}{x}^{-1}, \ 1, \ \ \cdots, \ \ \ 1\right), \end{eqnarray*} where $\BH$ is the Hamilton quaternion algebra. The conjugacy class $X$ is naturally identified with the union of the upper and lower half planes. Let $M=M(G,X)=(M_U(G,X))_U$ be the canonical model of the Shimura variety attached to the Shimura pair $(G,X)$; the canonical model is defined over $F$, the reflex field of $(G,X)$. There is a natural right action of $G(\BA_f)$ on $M(G,X)$. Here and in what follows, by abuse of terminology we call a projective system of varieties simply a variety. Take an imaginary quadratic field $E_0=\BQ(\sqrt{-a})$ ($a$ a square-free positive integer) such that $p$ splits in $E_0$. Put $E=FE_0$ and $D=B\otimes_FE=B\otimes_{\BQ}E_0$. We fix a square root $\rho$ of $-a$ in $\BC$. The extension of $\tau_i$ to $E$ given by $x+ y \sqrt{-a} \mapsto \tau_i(x) + \tau_i(y) \rho$ (resp. $x+ y \sqrt{-a} \mapsto \tau_i(x) - \tau_i(y) \rho$) is denoted by $\tau_i$ (resp. $\bar{\tau}_i$). Let $T_E$ be the torus ${\mathrm{Res}}_{E/\BQ}\BG_m$, $T_E^1$ the subtorus of $T_E$ such that $T_E^1(\BQ)=\{z\in E: z\bar{z}=1\}$. We consider the amalgamated product $G''=G\times_Z T_E$, and the morphism $G''=G\times_Z T_E\xrightarrow{\nu''}T''=T\times T_E^1$ defined by $(g,z)\mapsto (\nu(g)z\bar{z},z/\bar{z})$. Consider the subtorus $T'=\BG_m\times T_E^1$ of $T''$, and let $G'$ be the inverse image of $T'$ by the map $\nu''$. The restriction of $\nu''$ to $G'$ is denoted by $\nu'$. 
Both the derived group of $G'$ and that of $G''$ are identified with $G^\mathrm{der}$, and we have two short exact sequences of algebraic groups $$ \xymatrix{ 1 \ar[r] & G^\mathrm{der} \ar[r] & G' \ar[r]^{\nu'} & T'\ar[r] & 1} $$ and $$\xymatrix{ 1 \ar[r] & G^\mathrm{der} \ar[r] & G'' \ar[r]^{\nu''} & T''\ar[r] & 1. }$$ The complex embeddings $\tau_1,\cdots, \tau_g$ of $E$ identify $G''(\BR)$ with $\GL_2(\BR)\cdot \BC^\times \times \BH^\times \cdot \BC^\times \times \cdots \times \BH^\times \cdot \BC^\times$. We consider the $G'(\BR)$-conjugacy class $X'$ (resp. $G''(\BR)$-conjugacy class $X''$) of the homomorphism \begin{eqnarray*} h': \hskip 40pt \BC^\times & \rightarrow & G'(\BR) \subset \\ && G''(\BR)=\GL_2(\BR)\cdot \BC^\times \times \BH^\times \cdot \BC^\times \times \cdots \times \BH^\times \cdot \BC^\times \\ z= x+\sqrt{-1}y & \mapsto & \hskip 50pt \left(\wvec{x}{y}{-y}{x}^{-1}\otimes 1, \ \ 1\otimes z^{-1}, \ \ \cdots, \ \ 1\otimes z^{-1}\right). \end{eqnarray*} Let $M'=M(G',X')$ and $M''=M(G'',X'')$ be the canonical models of the Shimura varieties defined over their reflex field $E$. There are natural right actions of $G'(\BA_f)$ and $G''(\BA_f)$ on $M'$ and $M''$, respectively. Put $T_{E_0}={\mathrm{Res}}_{E_0/\BQ}\BG_m$. Using the complex embeddings $\tau_1,\cdots, \tau_g$ of $E$, we identify $T_E(\BR)$ with $\BC^\times\times \cdots \times \BC^\times$; similarly via the embedding $x+y\sqrt{-a}\mapsto x+y\rho$ we identify $T_{E_0}(\BR)$ with $\BC^\times$. Consider the homomorphisms \begin{eqnarray*} h_E &:& \BC^\times \rightarrow T_E(\BR)=\BC^\times \times \cdots \times \BC^\times, \ \ z\mapsto (z^{-1}, 1,\cdots, 1), \\ h_{E_0} &:& \BC^\times \rightarrow T_{E_0}(\BR)=\BC^\times, \ \ \hskip 55pt z\mapsto z^{-1}.\end{eqnarray*} Let $N_E=M(T_E, h_E)$ and $N_{E_0}=M(T_{E_0}, h_{E_0})$ be the canonical models attached to the pairs $(T_E,h_E)$ and $(T_{E_0}, h_{E_0})$ respectively. Then $N_E$ is defined over $E$, and $N_{E_0}$ is defined over $E_0$. 
Consider the homomorphism $\alpha: G\times T_E\rightarrow G''$ of algebraic groups inducing $$B^\times \times E^\times \rightarrow G''(\BQ)\subset (B\otimes_\BQ E)^\times, \ \ (b, e)\mapsto b \otimes \mathrm{N}_{E/E_0}(e)e^{-1}$$ on $\BQ$-valued points, and the homomorphism $\beta:G\times T_E\rightarrow T_{E_0}$ inducing $$\mathrm{N}_{E/E_0}\circ \mathrm{pr}_2: B^\times \times E^\times\rightarrow E_0^\times$$ on $\BQ$-valued points. Here, $\mathrm{N}_{E/E_0}$ denotes the norm map $E^\times\rightarrow E_0^\times$. Since $h'=\alpha\circ (h\times h_E)$ (on real points, $\mathrm{N}_{E/E_0}(h_E(z))h_E(z)^{-1}=(1, z^{-1},\cdots, z^{-1})$, whence $\alpha(h(z), h_E(z))=h'(z)$) and $h_{E_0}=\mathrm{N}_{E/E_0}\circ h_E$, $\alpha$ and $\beta$ induce morphisms of Shimura varieties $M\times N_E\rightarrow M''$ and $M\times N_E\rightarrow N_{E_0}$, again denoted by $\alpha$ and $\beta$ respectively. We have the following diagram $$\xymatrix{ M & M\times N_E \ar[d]^{\beta} \ar[l]_{\mathrm{pr}_1}\ar[r]^{\alpha} & M'' & M'\ar[l] \\ & N_{E_0} . && }$$ \subsection{Connected components of $M$, $M\times N_E$, $M'$ and $M''$} We write $\widetilde{G}$ for $G\times T_E$ and write $\widetilde{M}$ for $M\times N_E$. For $\natural= \widetilde{} , \emptyset, \ ', ''$, as $B$ is ramified at $\fp$, there exists a unique maximal compact open subgroup $U^\natural_{p,0}$ of $G^\natural(\BQ_p)$. Then $U'_{p,0}=U''_{p,0}\cap G'(\BQ_p)$ and $U''_{p,0}=\alpha(\widetilde{U}_{p,0})$. If $U^\natural$ is a subgroup of $G^\natural(\BA_f)$ of the form $U^\natural_{p,0}U^{\natural,p}$ where $U^{\natural,p}$ is a compact open subgroup of $G^\natural(\BA_f^p)$, we will write $M^\natural_{0,U^{\natural,p}}$ for $M^\natural_{U^\natural}$. Let $M^\natural_0$ denote the projective system $(M^\natural_{0, U^{\natural,p}})_{U^{\natural,p}}$; this projective system has a natural right action of $G^\natural(\BA_f^p)$. 
\begin{lem} \label{lem:geom-conn} \begin{enumerate} \item \label{it:geom-conn} For any sufficiently small $U^{\natural,p}$, each geometrically connected component of $M^\natural_{0, U^{\natural,p}}$ is defined over a field that is unramified at all places above $p$. \item \label{it:iso-on-geom-conn} Let $\widetilde{U}^{p}$ be a sufficiently small compact open subgroup of $\widetilde{G}(\BA_f^p)$. Then the morphism $$ \widetilde{M}_{0, \widetilde{U}^{p}} \rightarrow M''_{0, \alpha(\widetilde{U}^{p})} $$ induced by $\alpha$ is an isomorphism onto its image when restricted to every geometrically connected component. \end{enumerate} \end{lem} \begin{proof} When $U^{\natural,p}$ is sufficiently small, $M^\natural_{0, U^{\natural,p}}$ is smooth. Let $\pi_0(M^\natural_{0, U^{\natural,p}})$ denote the set of geometrically irreducible components of $M^\natural_{0, U^{\natural,p}}$ over $\overline{\BQ}$ (these are also the geometrically connected components, since $M^\natural_{0, U^{\natural,p}}$ is smooth). Then $\Gal(\overline{\BQ}/E)$ acts on $\pi_0(M^\natural_{0, U^{\natural,p}})$. This action is explicitly described by Deligne \cite[Theorem 2.6.3]{Del}, from which we deduce (\ref{it:geom-conn}). As $\alpha$ induces an isomorphism from the derived group of $\widetilde{G}$ to that of $G''$, by \cite[Remark 2.1.16]{Del} or \cite[Proposition II.2.7]{Mil} we obtain (\ref{it:iso-on-geom-conn}). \end{proof} \subsection{Modular interpretation of $M'$} Let $\ell\mapsto \bar{\ell}$ be the involution on $D=B\otimes_{\BQ}E_0$ that is the product of the canonical involution on $B$ and the complex conjugation on $E_0$. Choose an invertible symmetric element $\delta\in D$ ($\delta=\bar{\delta}$). Then we have another involution $\ell\mapsto \ell^*:=\delta^{-1}\bar{\ell}\delta$ on $D$. Let $V$ denote $D$ considered as a left $D$-module. 
Let $\psi$ be the non-degenerate alternating form on $V$ defined by $ \psi(x,y) =\mathrm{Tr}_{E/\BQ} (\sqrt{-a} \ \mathrm{Trd}_{D/E}(x \delta y^*)) $, where $\mathrm{Tr}_{E/\BQ}$ is the trace map and $\mathrm{Trd}_{D/E}$ is the reduced trace map. For $\ell\in D$ put $$t(\ell)=\tr(\ell; V_\BC/{\mathrm{Fil}}^0 V_\BC)$$ where ${\mathrm{Fil}}^\bullet$ is the Hodge filtration defined by $h'$. We have $$ t(\ell)=(\tau_1+\bar{\tau}_1 +2\tau_2+\cdots+2\tau_g)(\tr_{D/E}(\ell)) $$ for $\ell\in D$. The subfield of $\BC$ generated by $t(\ell)$, $\ell\in D$, is exactly $E$. Choose an order $\mathscr{O}_D$ of $D$, and let $T$ be the corresponding lattice in $V$. With a suitable choice of $\delta$, we may assume that $\mathscr{O}_D$ is stable by the involution $\ell\mapsto \ell^*$ and that $\psi$ takes integer values on $T$. Put $\hat{\mathscr{O}}_D:=\mathscr{O}_D\otimes \hat{\BZ}$ and $\hat{T}:=T\otimes \hat{\BZ}$. In Section \ref{sec:p-unif} when we consider the $p$-adic uniformization of the Shimura curves, we need to make the following assumption. \begin{ass} \label{ass} We assume that $\delta$ is chosen such that $\hat{T}$ is stable by $U'_{p,0}$. \end{ass} If $U'$ is a sufficiently small compact open subgroup of $G'(\BA_f)$ stabilizing $\hat{T}$, then $M'_{U'}$ represents the following functor $\CM_{U'}$ \cite[Section 5]{Saito}: For any $E$-algebra $R$, $\CM_{U'}(R)$ is the set of isomorphism classes of quadruples $(A, \iota, \theta, \kappa)$ where $\bullet$ $A$ is an abelian scheme over $R$ with a homomorphism $\iota:\mathscr{O}_D\rightarrow \End(A)$ such that $\tr(\iota(\ell), \Lie A)=t(\ell)$ for all $\ell\in \mathscr{O}_D$. $\bullet$ $\theta$ is a polarization $A\rightarrow \check{A}$ whose associated Rosati involution sends $\iota(\ell)$ to $\iota(\ell^*)$. 
$\bullet$ $\kappa$ is a $U'$-orbit of $\mathscr{O}_D\otimes \hat{\BZ}$-linear isomorphisms $\hat{T}(A):=\prod\limits_\ell T_\ell(A) \rightarrow \hat{T}$ such that there exists a $\hat{\BZ}$-linear isomorphism $\kappa': \hat{\BZ}(1)\rightarrow \hat{\BZ}$ making the diagram $$ \xymatrix{ \hat{T}(A) \times \hat{T}(A) \ar[r]^{(1,\theta_*)} \ar[d]^{\kappa\times \kappa } & \hat{T}(A)\times \hat{T}(\check{A}) \ar[r] & \hat{\BZ}(1)\ar[d]^{\kappa'} \\ \hat{T}\times \hat{T} \ar[rr]^{\psi\otimes\: \hat{\BZ}} && \hat{\BZ} } $$ commutative. Let $\CA_{U'}$ be the universal $\mathscr{O}_D$-abelian scheme over $M'_{U'}$. \section{$p$-adic Uniformizations of Shimura curves} \label{sec:p-unif} \subsection{Preliminaries} We provide two simple facts, which will be useful later. (i) Let $X$ be a scheme with a discrete right action of a group $C$, and let $Z$ be a group that contains $C$ as a normal subgroup of finite index. Fix a set of representatives $\{g_i\}_{i\in C \backslash Z}$ of $C\backslash Z$ in $Z$. We define a scheme $X*_CZ$ with a right action of $Z$ below. As a scheme $X*_CZ$ is $\bigsqcup_{C\backslash Z} X^{(g_i)}$, where $X^{(g_i)}$ is a copy of $X$. For any $g\in Z$ and $x^{(g_i)}\in X^{(g_i)}$, if $g_ig=h g_k$ with $h\in C$, then $x^{(g_i)}\cdot g = (x\cdot h)^{(g_k)}$. It is easy to show that up to isomorphism $X*_CZ$ and the right action of $Z$ are independent of the choice of $\{ g_i \}_{i\in C\backslash Z}$. (ii) Let $X_1$ and $X_2$ be two schemes whose connected components are all geometrically connected. Suppose that each of $X_1$ and $X_2$ has an action of an abelian group $Z$, and that $Z$ acts freely on the set of connected components of $X_1$ (resp. $X_2$). Let $C$ be a closed subgroup of $Z$. Then the $Z$-actions on $X_1$ and $X_2$ induce $Z/C$-actions on $X_1/C$ and $X_2/C$. 
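Fact (i) is purely combinatorial and can be illustrated by a finite model. The following Python sketch (the function names and the toy example are ours, not part of the text) realizes $X*_CZ$ for finite $X$, $C$, $Z$ and the rule $x^{(g_i)}\cdot g=(x\cdot h)^{(g_k)}$ for $g_ig=hg_k$ with $h\in C$:

```python
def induced_space(X, act_C, C, Z, mul, reps):
    """Finite model of X *_C Z: one copy of X per coset in C\\Z.

    X: list of points; act_C(x, h): right C-action on X;
    C: elements of a finite-index normal subgroup of Z;
    Z: list of all group elements; mul: the group law of Z;
    reps: chosen representatives g_i of the cosets C\\Z.
    """
    points = [(x, i) for i in range(len(reps)) for x in X]

    def act(point, g):
        x, i = point
        target = mul(reps[i], g)          # g_i * g
        for k, gk in enumerate(reps):     # write g_i*g = h*g_k with h in C
            for h in C:
                if mul(h, gk) == target:
                    return (act_C(x, h), k)
        raise ValueError("reps do not represent C\\Z")

    return points, act

# Toy example: Z = Z/4Z, C = {0, 2}, X = {'a', 'b'} with 2 in C acting
# as the swap; the representatives 0, 1 realize the two cosets of C.
Z, C, reps = [0, 1, 2, 3], [0, 2], [0, 1]
mul = lambda g1, g2: (g1 + g2) % 4
act_C = lambda x, h: x if h == 0 else {'a': 'b', 'b': 'a'}[x]
points, act = induced_space(['a', 'b'], act_C, C, Z, mul, reps)
```

Since the decomposition $g_ig=hg_k$ is unique once the representatives are fixed, the checks below confirm that the recipe defines a genuine right $Z$-action on the disjoint union.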
\begin{lem} \label{lem:prelim} If there exists a $Z/C$-equivariant isomorphism $\gamma: X_1/C \rightarrow X_2/C$, then there exists a $Z$-equivariant isomorphism $\tilde{\gamma}: X_1\rightarrow X_2$ such that the following diagram \[ \xymatrix{ X_1 \ar[r]^{\tilde{\gamma}} \ar[d]^{\pi_1} & X_2 \ar[d]^{\pi_2} \\ X_1/C \ar[r] ^{\gamma} & X_2/C } \] is commutative, where $\pi_1$ and $\pi_2$ are the natural projections. \end{lem} \begin{proof} We identify $X_1/C$ with $X_2/C$ by $\gamma$, and write $Y$ for it. The condition on $Z$-actions implies that the action of $Z/C$ on the set of connected components of $Y$ is free and that the morphism $\pi_1$ (resp. $\pi_2$) maps each connected component of $X_1$ (resp. $X_2$) isomorphically to its image. We choose a set of representatives $\{Y_i\}_{i\in I}$ of the $Z/C$-orbits of components of $Y$. Then the $\bar{g}Y_i$ ($\bar{g} \in Z/C$, $i\in I$) are pairwise distinct and are all the connected components of $Y$. For each $i\in I$ we choose a connected component $\tilde{Y}_i^{(1)}$ (resp. $\tilde{Y}_i^{(2)}$) of $X_1$ (resp. $X_2$) that is a lifting of $Y_i$. Then the $g \tilde{Y}_i^{(1)}$ ($g \in Z$, $i\in I$) (resp. the $g \tilde{Y}_i^{(2)}$, $g \in Z$, $i\in I$) are pairwise distinct and are all the connected components of $X_1$ (resp. $X_2$). As $ \pi_1|_{\tilde{Y}_i^{(1)}}: \tilde{Y}_i^{(1)}\rightarrow Y_i $ and $ \pi_2|_{\tilde{Y}_i^{(2)}}: \tilde{Y}_i^{(2)}\rightarrow Y_i $ are isomorphisms, there exists an isomorphism $ \tilde{\gamma}_i: \tilde{Y}_i^{(1)}\rightarrow \tilde{Y}_i^{(2)} $ such that $ \pi_1|_{\tilde{Y}_i^{(1)}} = \pi_2|_{\tilde{Y}_i^{(2)}} \circ \tilde{\gamma}_i $. We define the morphism $\tilde{\gamma}: X_1\rightarrow X_2$ as follows: $ \tilde{\gamma} $ maps $ g \tilde{Y}_i^{(1)} $ to $ g \tilde{Y}_i^{(2)} $, and $ \tilde{\gamma}|_{g \tilde{Y}_i^{(1)} }= g\circ \tilde{\gamma}_i \circ g^{-1} $. Then $\tilde{\gamma}$ is a $Z$-equivariant isomorphism and $ \pi_1 = \pi_2\circ\tilde{\gamma}$. 
\end{proof} \subsection{Some Notations} \label{ss:notations} Fix an isomorphism $\BC\cong \BC_p$. Combining the isomorphism $\BC\cong \BC_p$ and the inclusion $E_0\hookrightarrow \BC$, $x+y\sqrt{-a}\mapsto x+y\rho$, we obtain an inclusion $E_0\hookrightarrow \BQ_p$ and $E\hookrightarrow F_\fp$. Thus $D\otimes \BQ_p$ is isomorphic to $B_\fp\oplus B_\fp$. Note that $G(\BQ_p)$ is isomorphic to $B_\fp^\times$, $G'(\BQ_p)$ is isomorphic to the subgroup $$\{ (a,b): a,b \in B_\fp^\times, \bar{a}b\in \BQ_p^\times \}$$ of $B_\fp^\times\times B_\fp^\times$, and $G''(\BQ_p)$ is isomorphic to $$\{ (a,b): a,b \in B_\fp^\times, \bar{a}b\in F_\fp^\times \},$$ where $a\mapsto \bar{a}$ is the canonical involution on $B$. Note that $T_{E}(\BQ_p)$ is isomorphic to $F_\fp^\times\times F_\fp^\times$, and $T_{E_0}(\BQ_p)$ is isomorphic to $\BQ_p^\times \times \BQ_p^\times$. We normalize these isomorphisms such that $G'(\BQ_p)\hookrightarrow G''(\BQ_p)$ becomes the natural inclusion $$ \{ (a,b): a,b \in B_\fp^\times, \bar{a}b\in \BQ_p^\times \} \hookrightarrow \{ (a,b): a,b \in B_\fp^\times, \bar{a}b\in F_\fp^\times \} ,$$ $\alpha: G(\BQ_p)\times T_E(\BQ_p)\rightarrow G''(\BQ_p)$ becomes \begin{eqnarray*} B_\fp^\times \times (F_\fp^\times \times F_\fp^\times) &\rightarrow & \{ (a,b): a,b \in B_\fp^\times, \bar{a}b\in F_\fp^\times \} \\ (a, (t_1, t_2)) &\mapsto& (a\frac{\mathrm{N}_{F_\fp/\BQ_p}(t_1)}{t_1}, a \frac{\mathrm{N}_{F_\fp/\BQ_p}(t_2)}{t_2}) , \end{eqnarray*} and $\beta: G(\BQ_p)\times T_E(\BQ_p)\rightarrow T_{E_0}(\BQ_p)$ becomes \begin{eqnarray*} B_\fp^\times \times (F_\fp^\times \times F_\fp^\times) & \rightarrow & \BQ_p^\times \times \BQ_p^\times \\ (a, (t_1, t_2)) &\mapsto & (\mathrm{N}_{F_\fp/\BQ_p}(t_1), \mathrm{N}_{F_\fp/\BQ_p}(t_2)) . 
\end{eqnarray*} Let $\bar{B}$ be the quaternion algebra over $F$ such that $$ \mathrm{inv}_v(\bar{B}) = \left\{ \begin{array}{ll} \mathrm{inv}_v(B) & \text{ if }v \neq \tau_1, \fp , \\ \frac{1}{2} & \text{ if } v=\tau_1, \\ 0 & \text{ if }v=\fp . \end{array}\right. $$ With $\bar{B}$ instead of $B$ we can define analogues of $G$, $G'$ and $G''$, denoted by $\bar{G}$, $\bar{G}'$ and $\bar{G}''$ respectively. For $\natural= \emptyset, ',''$ we have $ \bar{G}^\natural (\BA_f^p) = G^\natural (\BA_f^p) $; $\bar{G}(\BQ_p)$ is isomorphic to $\GL(2,F_\fp)$; $\bar{G}'(\BQ_p)$ is isomorphic to the subgroup $$\{ (\wvec{a_1}{b_1}{c_1}{d_1},\wvec{a_2}{b_2}{c_2}{d_2}): a_i,b_i, c_i, d_i \in F_\fp , \ \wvec{d_1}{-b_1}{-c_1}{a_1} \wvec{a_2}{b_2}{c_2}{d_2}\in \BQ_p^\times \}$$ of $\GL(2,F_\fp)\times \GL(2,F_\fp)$, and $\bar{G}''(\BQ_p)$ is isomorphic to the subgroup $$\{ (\wvec{a_1}{b_1}{c_1}{d_1},\wvec{a_2}{b_2}{c_2}{d_2}): a_i,b_i, c_i, d_i \in F_\fp , \ \wvec{d_1}{-b_1}{-c_1}{a_1} \wvec{a_2}{b_2}{c_2}{d_2} \in F_\fp^\times \}$$ of $\GL(2,F_\fp)\times \GL(2,F_\fp)$. If $\natural=\emptyset$, let $\bar{G}(\BQ_p)$ act on $\CH_{\widehat{F^\ur_\fp}}$ as in Section \ref{sec:univ-spec-mod}. If $\natural=\widetilde{}$, let $\widetilde{\bar{G}}=\bar{G}\times T_E$ act on $\CH_{\widehat{F^\ur_\fp}}$ by the projection to the first factor. If $\natural='$ or $''$, let $\bar{G}^\natural(\BQ_p)$ act on $\CH_{\widehat{F^\ur_\fp}}$ by the first factor. Let $\bar{G}^\natural(\BQ)$ act on $\CH_{\widehat{F^\ur_\fp}}$ via its embedding into $\bar{G}^\natural(\BQ_p)$. The center of $\bar{G}^\natural$, $Z(\bar{G}^\natural)$, is naturally isomorphic to the center of $G^\natural$, $Z(G^\natural)$; we denote both of them by $Z^\natural$. \subsection{The $p$-adic uniformizations} \label{ss:p-adic-unif} Let $\natural$ be either $\widetilde{}, \ '$ or $''$. For any compact open subgroup $U^{\natural, p}$ of $G^\natural(\BA_f^p)$, let $X^\natural_{U^{\natural,p}}$ denote $M^\natural_{0, U^{\natural,p}}\times_{\Spec(F_\fp)} \Spec(\widehat{F_\fp^\ur})$.
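Before stating the uniformization theorem, we record an elementary consistency check on the normalizations of Section \ref{ss:notations}: the map $\alpha$ does take values in $G''(\BQ_p)$. Indeed, for $a\in B_\fp^\times$ and $t_1, t_2\in F_\fp^\times$, the elements $\mathrm{N}_{F_\fp/\BQ_p}(t_i)/t_i$ lie in the center $F_\fp$ of $B_\fp$ and are fixed by the canonical involution, so
$$ \overline{\Big( a\frac{\mathrm{N}_{F_\fp/\BQ_p}(t_1)}{t_1} \Big)}\cdot a\,\frac{\mathrm{N}_{F_\fp/\BQ_p}(t_2)}{t_2} = \frac{\mathrm{N}_{F_\fp/\BQ_p}(t_1)\, \mathrm{N}_{F_\fp/\BQ_p}(t_2)}{t_1 t_2}\,\bar{a}a \in F_\fp^\times , $$
since $\bar{a}a$ is the reduced norm of $a$.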
\begin{prop}\label{prop:p-adic-unif} Suppose that Assumption \ref{ass} holds. \begin{enumerate} \item\label{it:p-adic-unif-a} Assume that $\natural=\widetilde{} \ , '$ or $''$. For any sufficiently small compact open subgroup $U^{\natural,p}$ of $G^\natural(\BA_f^p)$, writing $U^\natural=U^\natural_{p,0} U^{\natural, p}$, we have a $Z^\natural(\BQ)\backslash Z^\natural(\BA_f)/ (Z^\natural(\BA_f) \cap U^\natural) $-equivariant isomorphism \begin{equation} \label{eq:linshi} X^\natural_{U^{\natural, p}} \cong \bar{G}^\natural(\BQ) \backslash (\CH_{\widehat{F^\ur_\fp}}\times G^\natural(\BA_f)/U^\natural). \end{equation} Here, $\bar{G}^\natural(\BQ)$ acts on $\CH_{\widehat{F^\ur_\fp}}$ as mentioned above and acts on $G^\natural(\BA_f^p)/U^{\natural, p}$ by the embedding $\bar{G}^\natural(\BQ)\hookrightarrow \bar{G}^\natural(\BA_f^p)\xrightarrow{\simeq} G^\natural(\BA_f^p)$; in the case of $\natural='$ or $''$, if $g\in \bar{G}^\natural(\BQ)$ satisfies $g_p=(a, b)$ with $a, b\in \GL(2,F_\fp)$, then $g$ acts on $G^\natural(\BQ_p)/U^\natural_{p,0}$ via the left multiplication by $({\it \Pi}^{v_{\fp}(\det a)}, {\it \Pi}^{v_\fp(\det b)})$; while, in the case of $\natural=\widetilde{}$, $\widetilde{g} = (g , t) \in \widetilde{\bar{G}}(\BQ)$ $(g\in \bar{G}(\BQ), t\in T_E(\BQ))$ acts on $\widetilde{G}(\BQ_p)/\widetilde{U}_{p,0}$ via the left multiplication by $({\it \Pi}^{v_\fp(\det g_p)}, t_p)$; the group $Z^\natural(\BQ)\backslash Z^\natural(\BA_f)/ (Z^\natural(\BA_f) \cap U^\natural)$ acts on the right hand side of $($\ref{eq:linshi}$)$ by right multiplications on $\bar{G}^\natural(\BA_f)$.
\item \label{it:p-adic-unif-b} The isomorphisms in $($\ref{it:p-adic-unif-a}$)$ can be chosen such that, for either $\sharp= \widetilde{}$ and $\natural=''$, or $\sharp='$ and $\natural=''$, we have a commutative diagram \[\xymatrix{ X^\sharp_{U^{\sharp, p}} \ar[r]\ar[d] & \bar{G}^\sharp(\BQ) \backslash (\CH_{\widehat{F^\ur_\fp}}\times G^\sharp(\BA_f)/U^\sharp ) \ar[d] \\ X^\natural_{U^{\natural, p}} \ar[r] & \bar{G}^\natural(\BQ) \backslash (\CH_{\widehat{F^\ur_\fp}}\times G^\natural(\BA_f)/U^\natural) }\] compatible with the $Z^\sharp(\BQ)\backslash Z^\sharp(\BA_f)/ (Z^\sharp(\BA_f) \cap U^\sharp)$-action on the upper row and the $Z^\natural(\BQ)\backslash Z^\natural(\BA_f)/(Z^\natural(\BA_f) \cap U^\natural)$-action on the lower row, where the left vertical arrow is induced from the morphism $M^\sharp\rightarrow M^\natural$, and the right vertical arrow is induced by the identity morphism $\CH_{\widehat{F_\fp^\ur}} \rightarrow \CH_{\widehat {F_\fp^\ur } }$ and the homomorphism $\alpha: \widetilde{G}=G \times T_E\rightarrow G''$ or the inclusion $G'\hookrightarrow G''$. Here, in the case of $\sharp = \widetilde{}$ and $\natural=''$, $U^\natural=\alpha(U^\sharp)$; in the case of $\sharp='$ and $\natural=''$, $U^\sharp= U^\natural \cap G'(\BA_f)$. \end{enumerate} \end{prop} The conclusions of Proposition \ref{prop:p-adic-unif}, especially (\ref{it:p-adic-unif-a}), are well known \cite{RZ,V}. However, the author knows of no reference for (\ref{it:p-adic-unif-b}), so we provide some details of the proof. \begin{proof} Assertion (\ref{it:p-adic-unif-a}) in the case of $\natural='$ comes from \cite[Theorem 6.50]{RZ}.
For the case of $\sharp='$ and $\natural=''$ we put $$C=Z'(\BQ)\backslash Z'(\BA_f)/ (Z'(\BA_f) \cap U')$$ and $$Z=Z''(\BQ)\backslash Z''(\BA_f)/ (Z''(\BA_f) \cap U'').$$ Then $X''_{U''^{p}}$ is $Z$-equivariantly isomorphic to $X'_{U'^p}*_CZ$, and $\bar{G}''(\BQ) \backslash (\CH_{\widehat{F^\ur_\fp}}\times G''(\BA_f)/U'' )$ is $Z$-equivariantly isomorphic to $\Big(\bar{G}'(\BQ) \backslash (\CH_{\widehat{F^\ur_\fp}}\times G'(\BA_f)/U' )\Big)*_CZ$. So (\ref{it:p-adic-unif-a}) in the case of $\natural=''$ and (\ref{it:p-adic-unif-b}) in the case of $\sharp='$, $\natural=''$ follow. Now we consider the remaining cases. Let $H$ be the kernel of the homomorphism $\alpha: \widetilde{G}=G\times T_E\rightarrow G''$. Put $$ C = H(\BQ) \backslash H(\BA_f)/(H(\BA_f)\cap \widetilde{U}), \ \ Z=\widetilde{Z}(\BQ) \backslash \widetilde{Z}(\BA_f)/(\widetilde{Z}(\BA_f)\cap \widetilde{U}). $$ Put $X_1=\widetilde{X}_{\widetilde{U}^{p}}$ and $X_2=\widetilde{\bar{G}}(\BQ) \backslash (\CH_{\widehat{F^\ur_\fp}}\times \widetilde{G}(\BA_f)/\widetilde{U}_{p,0}\widetilde{U}^p )$. By Lemma \ref{lem:geom-conn} (\ref{it:geom-conn}), all connected components of $X_1$ are geometrically connected; it is obvious that all connected components of $X_2$ are geometrically connected. Thus $Z$ acts freely on the set of components of $X_1$ (resp. $X_2$). Furthermore $X_1/C$ is isomorphic to $X''_{\alpha(\widetilde{U}^p)}$, and $X_2/C$ is isomorphic to $\bar{G}''(\BQ) \backslash (\CH_{\widehat{F^\ur_\fp}}\times G''(\BA_f)/U''_{p,0}\alpha(\widetilde{U}^p))$. We have already proved that $X_1/C$ is $Z/C$-equivariantly isomorphic to $X_2/C$. Applying Lemma \ref{lem:prelim} we obtain (\ref{it:p-adic-unif-a}) in the case of $\natural=\widetilde{}\ $ and (\ref{it:p-adic-unif-b}) in the case of $\sharp=\widetilde{}\ , \natural=''$. \end{proof} \begin{rem} By \cite{V} an analogue of Proposition \ref{prop:p-adic-unif} (\ref{it:p-adic-unif-a}) holds in the case of $\natural=\emptyset$.
We use $X_{U^p}$ to denote $\bar{G}(\BQ)\backslash (\CH_{\widehat{F_\fp^\ur}} \times G(\BA_f)/U_{p,0}U^p)$, where the action of $\bar{G}(\BQ)$ on $\CH_{\widehat{F_\fp^\ur}} \times G(\BA_f)/U_{p,0}U^p$ is defined similarly. \end{rem} \section{Local systems and the associated filtered $\varphi_q$-isocrystals on Shimura Curves} \label{sec:comp-iso} \subsection{Local systems on Shimura curves} \label{ss:local-sys} We choose a number field $L$ splitting $F$ and $B$. We identify $\{\tau_i: F\rightarrow L\}$ with $I=\{\tau_i: F\rightarrow \BC\}$ by the inclusion $L\rightarrow \BC$. Fix an isomorphism $L\otimes_{\BQ}B= M(2,L)^I$. Then we have a natural inclusion $G(\BQ)\hookrightarrow \GL(2,L)^I$. Let $\mathfrak{P}$ be a place of $L$ above $\fp$. For a multiweight $\mathrm{k}=(k_1,\cdots, k_g,w)$ with $k_1\equiv \cdots \equiv k_g\equiv w \hskip 6pt \mathrm{mod} \hskip 3pt 2$ and $k_1\geq 2, \cdots, k_g\geq 2$, we define the morphism $\rho^{(\mathrm{k})}:G\rightarrow \GL(n,L)$ $(n=\prod_{i=1}^g(k_i-1))$ to be the product $\otimes_{i\in I}[(\Sym^{k_i-2}\otimes \det^{(w-k_i)/2})\circ \check{pr}_i]$. Here $\check{pr}_i$ denotes the contragredient representation of the $i$th projection $pr_i: \GL(2,L)^I\rightarrow \GL(2,L)$. The algebraic group denoted by $G^c$ in \cite[Chapter III]{Mil} is the quotient of $G$ by $\ker(\mathrm{N}_{F/\BQ}:F^\times\rightarrow \BQ^\times)$. As the restriction of $\rho^{(\mathrm{k})}$ to the center $F^\times$ is the scalar multiplication by $\mathrm{N}_{F/\BQ}^{-(w-2)}(\cdot)$, $\rho^{(\mathrm{k})}$ factors through $G^c$, so we can attach to the representation $\rho^{(\mathrm{k})}$ a $G(\BA_f)$-equivariant smooth $L_\mathfrak{P}$-sheaf $\CF{(\mathrm{k})}$ on $M$. Let $p_2: G''_{E_0}\rightarrow G_{E_0}$ be the map induced by the second projection on $(D\otimes_\BQ E_0)^\times =D^\times \times D^\times$ corresponding to the conjugation $E_0\rightarrow E_0$.
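Let us spell out the central-character computation used above. A central element $z\in F^\times$ maps to the scalar matrix $\tau_i(z)$ in the $i$-th factor of $\GL(2,L)^I$; the contragredient $\check{pr}_i$ turns this into the scalar $\tau_i(z)^{-1}$, which acts on $\Sym^{k_i-2}\otimes \det^{(w-k_i)/2}$ by $\tau_i(z)^{-(k_i-2)}\cdot \tau_i(z)^{-(w-k_i)}=\tau_i(z)^{-(w-2)}$. Hence
$$ \rho^{(\mathrm{k})}(z) = \prod_{i\in I}\tau_i(z)^{-(w-2)} = \mathrm{N}_{F/\BQ}(z)^{-(w-2)} , $$
as claimed.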
As the algebraic representation $\rho''^{(\mathrm{k})}=\rho^{(\mathrm{k})}\circ p_2$ factors through $G''^c$, we can attach to it a $G''(\BA_f)$-equivariant smooth $L_\mathfrak{P}$-sheaf $\CF'' (\mathrm{k})$ on $M''$. Let $\CF' (\mathrm{k}) $ be the restriction of $\CF'' {(\mathrm{k})}$ to $M'$. We define a character $\bar{\chi}: T_0\rightarrow \BG_m$ such that, on $\BC$-valued points, $\bar{\chi}$ is the inverse of the second projection $T_{0 \BC}=\BC^\times\times \BC^\times\rightarrow \BC^\times$. Let $\CF(\bar{\chi})$ be the $L_\mathfrak{P}$-sheaf attached to the representation $\bar{\chi}$. By \cite{Saito} one has the following $G(\BA_f)\times T(\BA_f)$-equivariant isomorphism of $L_\mathfrak{P}$-sheaves \begin{equation} \label{eq:iso-local-sys} \mathrm{pr}_1^* \CF {(\mathrm{k})} \simeq \alpha^* \CF'' {(\mathrm{k})}\otimes \beta^* \CF(\bar{\chi}^{-1})^{\otimes (g-1)(w-2)} \end{equation} on $M\times N$, where $\mathrm{pr}_1$ is the projection $M\times N\rightarrow M$. Note that $L\otimes_{\BQ}D\simeq (\mathrm{M}_2(L)\times \mathrm{M}_2(L))^I$. For each $i\in I$, the first component $\mathrm{M}_2(L)$ corresponds to the embedding $E_0\subset L\subset \BC$ and the second $\mathrm{M}_2(L)$ to its conjugate. Let $\CF'$ be the local system $R^1g_*L_\mathfrak{P}$, where $g:\CA\rightarrow M'$ is the universal $\mathscr{O}_D$-abelian scheme; it is a sheaf of $L\otimes_{\BQ}D$-modules. For each $i\in I$, let $e_i\in L\otimes_{\BQ}D$ be the idempotent whose $(2,i)$-th component is a rank one idempotent, e.g.\ $\wvec{1}{0}{0}{0}$, and the other components are zero. Let $\CF'_i$ denote the $e_i$-part $e_i\cdot R^1g_* L_\mathfrak{P}$. Note that $\CF'_i$ does not depend on the choice of the rank one idempotent. By \cite{Saito} we have an isomorphism of local systems \begin{equation} \label{eq:isocrystal-general-case} \CF'{(\mathrm{k})}=\bigotimes_{i\in I}\Big(\Sym^{k_i-2}\CF'_i\bigotimes (\det \CF'_i)^{(w-k_i)/2}\Big).\end{equation} We can define more local systems on $M'$.
For $(\mathrm{k},\mathrm{v})=(k_1,\cdots, k_g;v_1,\cdots, v_g)$, let $\CF' {(\mathrm{k},\mathrm{v})}$ be the local system $\bigotimes_{i\in I}\Big(\Sym^{k_i-2}\CF'_i\bigotimes (\det \CF'_i)^{v_i}\Big)$. \subsection{Filtered $\varphi_q$-isocrystals associated to the local systems} \label{ss:compute} We use $\tilde{\mathrm{k}}$ uniformly to denote $(\mathrm{k},\mathrm{v})=(k_1,\cdots, k_g;v_1,\cdots, v_g)$ (resp. $\mathrm{k}=(k_1,\cdots, k_g,w)$) in the case of $\natural='$ (resp. $\natural=\emptyset,',''$). We shall need the filtered $\varphi_q$-isocrystal attached to $\CF {(\tilde{\mathrm{k}})}$. However we do not know how to compute it. Instead, we compute that attached to $\mathrm{pr}_1^*\CF {(\tilde{\mathrm{k}})}$. As a middle step we determine the filtered $\varphi_q$-isocrystals associated to $\CF' {(\tilde{\mathrm{k}})}$ and $\CF'' {(\tilde{\mathrm{k}})}$. For any integers $k$ and $v$ with $k\geq 2$, and any inclusion $\sigma: F_\fp\rightarrow L_\mathfrak{P}$, let $V_{\sigma}(k,v)$ be the space of homogeneous polynomials in two variables $X_\sigma$ and $Y_\sigma$ of degree $k-2$ with coefficients in $L_\mathfrak{P}$; let $\GL(2,F_\fp)$ act on $V_{\sigma}(k,v)$ by $$ \wvec{a}{b}{c}{d}P(X_\sigma, Y_\sigma) = \sigma(ad-bc)^v P(\sigma(a)X_\sigma+\sigma(c)Y_\sigma, \sigma(b)X_\sigma+\sigma(d)Y_\sigma).$$ For $(\mathrm{k},\mathrm{v})=(k_1,\cdots, k_g; v_1,\cdots, v_g)$ we put $$V(\mathrm{k}, \mathrm{v})=\bigotimes_{\sigma\in I}V_{\sigma}(k_\sigma, v_\sigma)$$ where the tensor product is taken over $L_\mathfrak{P}$. Let $\bar{G}^\natural$ ($\natural= \emptyset, ', '', \widetilde{}\ $) be the groups defined in Section \ref{ss:notations}. For $\natural= \widetilde{}$, via the projection $\bar{G}^{\natural}(\BQ_p)\rightarrow \GL(2,F_\fp)$, $V(\tilde{\mathrm{k}})$ becomes a $\bar{G}^\natural(\BQ_p)$-module. 
For $\natural=',''$, via the projection of $\bar{G}^\natural(\BQ_p)\subset \mathrm{GL}(2, F_\fp) \times \mathrm{GL}(2, F_\fp)$ to the second factor, $V(\tilde{\mathrm{k}})$ becomes a $\bar{G}^\natural(\BQ_p)$-module. In each case via the inclusion $\bar{G}^\natural(\BQ)\hookrightarrow \bar{G}^\natural(\BQ_p)$, $V(\tilde{\mathrm{k}})$ becomes a $\bar{G}^\natural(\BQ)$-module. Using the $p$-adic uniformization of $X^\natural=X^\natural_{U^{\natural,p}}$ we attach to this $\bar{G}^\natural(\BQ)$-module a local system $\CV^\natural(\tilde{\mathrm{k}})$ on $X^\natural$. Let $\varphi_{q,\mathrm{k},\mathrm{v}}$ be the operator on $ V(\mathrm{k}, \mathrm{v}) $ $$\bigotimes_\sigma P_\sigma(X_\sigma, Y_\sigma) \mapsto \prod_\sigma \sigma(-\pi)^{v_\sigma} \cdot \bigotimes_\sigma P_\sigma(Y_\sigma, \sigma(\pi)X_\sigma ).$$ For $\mathrm{k}=(k_1,\cdots, k_g, w)$ we put $$ V(\mathrm{k}) = V(k_1,\cdots, k_g; (w-k_1)/2,\cdots, (w-k_g)/2) $$ and $$ \varphi_{q, \mathrm{k}}= \varphi_{q,(k_1,\cdots, k_g; (w-k_1)/2,\cdots, (w-k_g)/2)}.$$ Let $\mathscr{F}^\natural(\tilde{\mathrm{k}})$ be the filtered $\varphi_q$-isocrystal $ \CV^\natural(\tilde{\mathrm{k}})\otimes_{\BQ_p} \mathscr{O}_{X^\natural}$ on $X^\natural$ with the $q$-Frobenius $\varphi_{q, \tilde{\mathrm{k}} }\otimes \varphi_{q,\mathscr{O}_{X^\natural}}$ and the connection $1\otimes \mathrm{d}: \CV^\natural(\tilde{\mathrm{k}})\otimes_{\BQ_p} \mathscr{O}_{X^\natural}\rightarrow \CV^\natural(\tilde{\mathrm{k}})\otimes_{\BQ_p} \Omega^1_{X^\natural}$; the filtration on \begin{eqnarray}\label{eq:decom-isocrystal} \CV^\natural(\tilde{\mathrm{k}})\otimes_{\BQ_p} \mathscr{O}_{X^\natural} = \bigoplus_{\tau:F_\fp\hookrightarrow L_\mathfrak{P}} \CV^\natural(\tilde{\mathrm{k}})\otimes_{\tau, F_\fp} \mathscr{O}_{X^\natural} \end{eqnarray} is given by \begin{eqnarray*} && {\mathrm{Fil}}^{j+v_\tau} (\CV^\natural(\tilde{\mathrm{k}})\otimes_{\tau, F_\fp} \mathscr{O}_{X^\natural}) \\ &=& \left\{\begin{array}{ll} 
\CV^\natural(\tilde{\mathrm{k}})\otimes_{\tau, F_\fp} \mathscr{O}_{X^\natural} & \text{ if } j\leq 0 , \\ \text{the } \mathscr{O}_{X^\natural}\text{-submodule locally generated by polynomials} & \\ \hskip 110pt \text{in } V(\tilde{\mathrm{k}}) \text{ divisible by } (z X_\tau + Y_\tau)^{j} & \text{ if } 1\leq j\leq k_\tau-2 \\ 0 & \text{ if } j \geq k_\tau - 1 \end{array}\right. \end{eqnarray*} with the convention that $v_\tau=\frac{w-k_\tau}{2}$ in the case of $\tilde{\mathrm{k}}=(k_1,\cdots, k_g,w)$, where $z$ is the canonical coordinate on $\CH_{\widehat{F_\fp^\ur}}$. \begin{lem}\label{lem:isocrystal-special} When $k_1=\cdots=k_{i-1}=k_{i+1}=\cdots=k_g=2$, $k_i=3$, and $v_1=\cdots=v_g=0$, the filtered $\varphi_q$-isocrystal attached to $\CF' {(\mathrm{k},\mathrm{v})}$ is isomorphic to $\mathscr{F}'(\mathrm{k},\mathrm{v})$. \end{lem} \begin{proof} Let $\tilde{e}_i\in L\otimes_{\BQ}D$ be the idempotent whose $(2,i)$-th component is $\wvec{1}{0}{0}{1}$ and the other components are zero. Let $\CA$ be the universal $\mathscr{O}_D$-abelian scheme over $M'$, $\widehat{\CA}$ the formal module on $X'$ attached to $\CA$. Note that $\tilde{e}_i (\mathfrak{o}_{L_\mathfrak{P}}\otimes_{\BZ_p}\widehat{\CA})$ is just the pullback of $\mathfrak{o}_{L_\mathfrak{P}}\otimes_{\tau_i, \mathfrak{o}_{F_\fp} }\CG$ by the projection $X'_{U'^p}\rightarrow (\bar{G}'(\BQ)\cap U'^pU'_{p,0})\backslash \CH_{\widehat{F^\mathrm{ur}_\fp}}$ \cite[6.43]{RZ}, where $\CG$ is the universal special formal $\mathscr{O}_{B_\fp}$-module (forgetting the information of $\rho$ in Drinfeld's moduli problem). As $L_\mathfrak{P}$ splits $B_\fp$, every embedding of $F^{(2)}_\fp$ takes values in $L_\mathfrak{P}$. The embedding $\tau_i: F_\fp\hookrightarrow L_\mathfrak{P}$ extends in two ways to $F^{(2)}_\fp$, denoted respectively by $\tau_{i,0}$ and $\tau_{i,1}$.
Then $$ \fo_{ L_\mathfrak{P} } \otimes_{ \tau_i, \fo_{F_\fp} } \fo_{B_\fp}= \fo_{L_\mathfrak{P}} \otimes_{ \tau_{i,0}, \fo_{F_\fp^{(2)}} } \fo_{B_{\fp}} \bigoplus \fo_{L_\mathfrak{P}} \otimes_{ \tau_{i,1}, \fo_{ F_\fp^{(2)} } } \fo_{B_\fp}. $$ We decompose $\mathfrak{o}_{L_\mathfrak{P}}\otimes_{\tau_i, \mathfrak{o}_{F_\fp}}\CG$ into the sum of two direct summands according to the action of $\fo_{F_\fp^{(2)}}\subset \fo_{B_\fp}$: $ \fo_{F_\fp^{(2)}}$ acts by $\tau_{i,0}$ on the first direct summand, and acts by $\tau_{i,1}$ on the second. Without loss of generality we may assume that $e_i$ in the definition of $\CF'_i$ (see Section \ref{ss:local-sys}) is chosen such that $e_i$ is the projection onto the first direct summand. So $e_i (\mathfrak{o}_{L_\mathfrak{P}}\otimes_{\BZ_p}\widehat{\CA})$ is just the pullback of $\mathfrak{o}_{L_\mathfrak{P}}\otimes_{\tau_{i,0}, \mathfrak{o}_{F_\fp^{(2)}} }\CG$ by the projection $X'_{U'^p}\rightarrow (\bar{G}'(\BQ)\cap U'^pU'_{p,0})\backslash \CH_{\widehat{F^\mathrm{ur}_\fp}}$. Now the statement of our lemma follows from the discussion in Section \ref{ss:univ-fil-phi-mod}. \end{proof} \begin{prop}\label{prop:fil-F-iso} The filtered $\varphi_q$-isocrystal attached to $\CF' {(\mathrm{k},\mathrm{v})}$ is isomorphic to $\mathscr{F}'(\mathrm{k},\mathrm{v})$. \end{prop} \begin{proof} Let $\mathscr{F}'_i$ denote the filtered $\varphi_q$-isocrystal attached to $\mathcal{F}'_i$. By the definition of $\CF' {(\mathrm{k},\mathrm{v})}$, the filtered $\varphi_q$-isocrystal attached to $\CF'(\mathrm{k},\mathrm{v})$ is isomorphic to \begin{equation} \bigotimes_{i\in I}\Big(\Sym^{k_i-2}\mathscr{F}'_i\bigotimes (\det \mathscr{F}'_i)^{v_i}\Big).\end{equation} By Lemma \ref{lem:isocrystal-special} a simple computation implies our conclusion. \end{proof} \begin{cor} \label{cor:fil-mod-1} The filtered $\varphi_q$-isocrystal attached to $\CF''{(\mathrm{k})}$ is isomorphic to $\mathscr{F}''(\mathrm{k})$.
\end{cor} \begin{proof} This follows from Proposition \ref{prop:fil-F-iso} and \cite[Lemma 6.1]{Saito}. \end{proof} \begin{lem}\label{lem:fil-mod-1} The filtered $\varphi_q$-isocrystal associated to the local system $\CF(\bar{\chi})$ over $(N_{E_0,0})_{\widehat{F_\fp^\ur}}$ is $\CF(\bar{\chi})\otimes \mathscr{O}_{(N_{E_0,0})_{\widehat{F_\fp^\ur}}}$ with the $q$-Frobenius being $1\otimes \varphi_{q, (N_{E_0,0})_{\widehat{F_\fp^\ur}}}$ and the filtration being trivial. \end{lem} \begin{proof} We only need to show that any geometric point of $(N_{E_0, 0})_{\widehat{F_\fp^\ur}}$ is defined over $\widehat{F_\fp^\ur}$. Let $h_{E_0}$ be as in Section \ref{ss:shimura-curves}, $\mu$ the cocharacter of $T_0$ defined over $E_0$ attached to $h_{E_0}$. Let $r$ be the composition $$\BA_{E_0}^\times\xrightarrow{\mu} T_0(\BA_{E_0}) \xrightarrow{\mathrm{N}^{E_0}_{\BQ}} T_0(\BA).$$ Let $$ \mathrm{art}_{E_0}: \BA^\times_{E_0}\twoheadrightarrow \Gal(E_0^\mathrm{ab}/E_0) $$ be the reciprocal of the reciprocity map from class field theory. For any compact open subgroup $U$ of $T_0(\BA_{f})$, $\Gal(\overline{\BQ}/E_0)$ acts on $(N_{E_0})_U(\overline{\BQ})= T_0(\BQ)\backslash T_0(\BA_{f})/U$ by $\sigma (T_0(\BQ) a U) = T_0(\BQ) r_f(s_\sigma) a U$, where $s_\sigma$ is any id\`ele such that $\mathrm{art}_{E_0}(s_\sigma)=\sigma|_{E_0^\mathrm{ab}}$, and $r_{f}$ is the composition $$\BA^\times_{E_0}\rightarrow T_0(\BA) \rightarrow T_0(\BA_{f})$$ of $r$ and the projection map $T_0(\BA)\rightarrow T_0(\BA_{f})$. Let $\mathcal{I}$ be the subgroup of $\Gal(\overline{\BQ}/E_0)$ consisting of $\sigma$ such that $s_\sigma\in r_f^{-1}(U)$. Put $\mathcal{K}=\overline{\BQ}^{\mathcal{I}}$. Then any geometric point of $(N_{E_0})_U$ is defined over $\mathcal{K}$. Observe that, when $U$ is of the form $U_{p,0}U^p$ with $U^p$ a compact open subgroup of $T_0(\BA_f^p)$ and $U_{p,0}$ the maximal compact open subgroup of $T_0(\BQ_p)$, $\mathcal{K}$ is unramified over $p$.
Therefore, any geometric point of $(N_{E_0, 0})_{\widehat{F_\fp^\ur}}$ is already defined over $\widehat{F_\fp^\ur}$. \end{proof} \begin{cor} \label{cor:fil-isocry} The filtered $\varphi_q$-isocrystal attached to $\mathrm{pr}_1^*\CF(\mathrm{k})$ is $\mathrm{pr}_1^*\mathscr{F}(\mathrm{k})$. \end{cor} \begin{proof} By (\ref{eq:iso-local-sys}) the filtered $\varphi_q$-isocrystal attached to $\mathrm{pr}_1^*\CF {(\mathrm{k})}$ is the tensor product of the filtered $\varphi_q$-isocrystal attached to $\alpha^* \CF'' {(\mathrm{k})}$ and that attached to $\beta^* \CF(\bar{\chi}^{-1})^{\otimes (g-1)(w-2)}$. Our conclusion follows from Proposition \ref{prop:p-adic-unif} (\ref{it:p-adic-unif-b}) (in the case of $\sharp=\widetilde{}$ and $\natural= ''$), Corollary \ref{cor:fil-mod-1} and Lemma \ref{lem:fil-mod-1}. \end{proof} It is quite possible that the filtered $\varphi_q$-isocrystal attached to $\CF(\mathrm{k})$ is $\mathscr{F}(\mathrm{k})$. But the author does not know how to descend the conclusion of Corollary \ref{cor:fil-isocry} to $X_{U^p}$. \section{The de Rham cohomology} \label{sec:cover-hodge} \subsection{Covering filtration and Hodge filtration for de Rham cohomology} \label{ss:cover-hodge} We fix an arithmetic Schottky group $\Gamma$ that is cocompact in $\PGL(2, F_\fp)$. Then $\Gamma$ acts freely on $\CH$, and the quotient $X_\Gamma=\Gamma\backslash \CH$ is the rigid analytic space associated with a proper smooth curve over $F_\fp$. Here we write $\CH$ for $\CH_{\widehat{F_\fp^\ur}}$. We recall the theory of de Rham cohomology of local systems over $X_\Gamma$ \cite{SchLoc, SchSt, deSh-1, deSh-2}. We denote by $\widehat{\CH}$ the canonical formal model of $\CH$. The curve $X_\Gamma$ has a canonical semistable model $\CX_\Gamma=\Gamma\backslash \widehat{\CH}$; the special fiber $\CX_{\Gamma,s}$ of $\CX_\Gamma$ is isomorphic to $\Gamma\backslash \widehat{\CH}_s$.
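We note for orientation that $\Gamma$, being a Schottky group, is a finitely generated free group, and that by Mumford's theory of $p$-adic Schottky uniformization the genus of $X_\Gamma$ equals the rank of the free group $\Gamma$, or equivalently the first Betti number of the finite quotient graph of the Bruhat--Tits tree recalled below.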
The graph $\mathrm{Gr}(\CX_{\Gamma,s})$ is closely related to the Bruhat--Tits tree $\CT$ for $\PGL(2,F_\fp)$. The group $\Gamma$ acts freely on the tree $\CT$. Let $\CT_\Gamma$ denote the quotient graph. The set of connected components of the special fiber $\CX_{\Gamma,s}$ is in one-to-one correspondence with the set $\mathrm{V}(\CT_\Gamma)$ of vertices of $\CT_\Gamma$, each component being isomorphic to the projective line over $k$, the residue field of $F_\fp$. Write $\{\RP^1_v\}_{v\in \mathrm{V}(\CT_\Gamma)}$ for the set of components of $\CX_{\Gamma,s}$. The singular points of $\CX_{\Gamma,s}$ are ordinary $k$-rational double points; they correspond to (unoriented) edges of $\CT_\Gamma$. Two components $\RP^1_{u}$ and $\RP^1_v$ intersect if and only if $u$ and $v$ are adjacent; in this case, they intersect at a singular point. For simplicity we will use the edge $e$ joining $u$ and $v$ to denote this singular point. There is a reduction map from $X_\Gamma^\mathrm{an}$ to $\CX_{\Gamma,s}$. For a closed subset $U$ of $\CX_{\Gamma,s}$ let $]U[$ denote the tube of $U$ in $X_\Gamma^\mathrm{an}$. Then $\{]\RP^1_v[\}_{v\in \mathrm{V}(\CT_\Gamma)}$ is an admissible covering of $X_\Gamma^\mathrm{an}$. Clearly $]\RP^1_{o(e)}[\cap ]\RP^1_{t(e)}[=]e[$. Let $L$ be a field that splits $F_\fp$. Fix an embedding $\tau: F_\fp\hookrightarrow L$. Let $V$ be an $L[\Gamma]$-module that comes from an algebraic representation of $\PGL(2,F_\fp)$ of the form $V(\mathrm{k})$ with $\mathrm{k}=(k_1,\cdots, k_g,2)$. We will regard $V$ as an $F_\fp$-vector space by $\tau$. Let $\mathscr{V}=\mathscr{V}(\mathrm{k})$ be the local system on $X_\Gamma$ associated with $V$. Let $H^*_{\dR,\tau}(X_\Gamma,\mathscr{V})$ be the hypercohomology of the complex $\mathscr{V}\otimes_{\tau, F_\fp} \Omega_{X_\Gamma}^\bullet$.
We consider the Mayer--Vietoris exact sequence attached to $H^*_{\dR,\tau}(X_\Gamma, \mathscr{V})$ with respect to the admissible covering $\{]\RP^1_v[\}_{v\in \mathrm{V}(\CT_\Gamma)}$. As a result we obtain an injective map $$ \iota: (\bigoplus_{e\in \mathrm{E}(\CT_\Gamma)}H^0_{\dR,\tau}(]e[,\mathscr{V}))^-/ \bigoplus_{v\in \mathrm{V}(\CT_\Gamma)}H^0_{\dR,\tau}(]\RP^1_v[,\mathscr{V}) \hookrightarrow H^1_{\dR,\tau}(X_\Gamma^\mathrm{an}, \mathscr{V}) .$$ As $]\RP^1_v[$ and $]e[$ are quasi-Stein, a simple computation shows that $H^0_{\dR,\tau}(]\RP^1_v[,\mathscr{V})$ and $H^0_{\dR,\tau}(]e[,\mathscr{V})$ are isomorphic to $V$. Let $C^0(V)$ be the space of $V$-valued functions on $\mathrm{V}(\CT)$, $C^1(V)$ the space of $V$-valued functions on $\mathrm{E}(\CT)$ such that $f(e)=-f(\bar{e})$. Let $\Gamma$ act on $C^i(V)$ as $f\mapsto \gamma \circ f\circ \gamma^{-1}$. Then we have a $\Gamma$-equivariant short exact sequence \begin{equation} \label{eq:cover-sq} \xymatrix{ 0\ar[r] & V \ar[r] & C^0(V) \ar[r]^{\partial} & C^1(V) \ar[r] & 0 }\end{equation} where $\partial(f)(e)=f(o(e))-f(t(e))$. Observe that \begin{eqnarray*} \bigoplus_{v\in \mathrm{V}(\CT_\Gamma)}H^0_{\dR,\tau}(]\RP^1_v[,\mathscr{V}) & \cong & C^0(V)^\Gamma , \\ (\bigoplus_{e\in \mathrm{E}(\CT_\Gamma)}H^0_{\dR,\tau}(]e[,\mathscr{V}))^- & \cong & C^1(V)^\Gamma \end{eqnarray*} and the map $$\bigoplus_{v\in \mathrm{V}(\CT_\Gamma)}H^0_{\dR,\tau}(]\RP^1_v[,\mathscr{V})\rightarrow (\bigoplus_{e\in \mathrm{E}(\CT_\Gamma)}H^0_{\dR,\tau}(]e[,\mathscr{V}))^-$$ coincides with $\partial$. Thus $$(\bigoplus_{e\in \mathrm{E}(\CT_\Gamma)}H^0_{\dR,\tau}(]e[,\mathscr{V}))^-/ \bigoplus_{v\in \mathrm{V}(\CT_\Gamma)}H^0_{\dR,\tau}(]\RP^1_v[,\mathscr{V}) $$ is isomorphic to $C^1(V)^\Gamma/\partial C^0(V)^\Gamma$.
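For completeness we indicate why $($\ref{eq:cover-sq}$)$ is exact. The kernel of $\partial$ consists of the functions taking equal values at the two endpoints of every edge; as $\CT$ is connected, these are exactly the constant functions, i.e.\ the copy of $V$. For the surjectivity of $\partial$, fix a base vertex $v_0$ and, given $h\in C^1(V)$, set $f(v)=-\sum_e h(e)$, the sum being over the edges of the unique path from $v_0$ to $v$ in the tree $\CT$; the relation $h(\bar{e})=-h(e)$ makes $f$ well defined, and $\partial f=h$.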
From (\ref{eq:cover-sq}) we get the injective map $$\delta: C^1(V)^\Gamma/\partial C^0(V)^\Gamma \hookrightarrow H^1(\Gamma,V).$$ Let $C^1_\mathrm{har}(V)$ be the space of harmonic forms $$ C^1_\mathrm{har}(V):=\{f: \mathrm{E}(\CT)\rightarrow V | f(e)=-f(\bar{e}), \ \forall\ v, \sum_{t(e)=v} f(e)=0 \}.$$ Fixing some $v\in \mathrm{V}(\CT)$, let $\epsilon$ be the map $C^1_\mathrm{har}(V)^\Gamma\rightarrow H^1(\Gamma, V)$ \cite[(2.26)]{CMP} defined by \begin{equation} \label{eq:sch} c\mapsto (\gamma \mapsto \sum_{e:v\rightarrow \gamma v}c(e)), \end{equation} where the sum runs over the edges of the path from $v$ to $\gamma v$; $\epsilon$ does not depend on the choice of $v$. By \cite[Appendix A]{CMP} $\epsilon$ is minus the composition $$C^1_\mathrm{har}(V)^\Gamma\rightarrow C^1(V)^\Gamma/\partial C^0(V)^\Gamma\xrightarrow{\delta} H^1(\Gamma,V) ,$$ and is an isomorphism. Combining this with the injectivity of $\delta$ we obtain that both the natural map $C^1_\mathrm{har}(V)^\Gamma\rightarrow C^1(V)^\Gamma/\partial C^0(V)^\Gamma$ and $\delta$ are isomorphisms. Below, we will identify $C^1_\mathrm{har}(V)^\Gamma$ with $C^1(V)^\Gamma/\partial C^0(V)^\Gamma$. By \cite{deSh-2} we have \begin{equation}\label{eq:second-diff-form}\begin{aligned} H^1_{\dR,\tau}(X_\Gamma, \mathscr{V}) \cong \{ V &\text{-valued differentials of the second kind on }X_\Gamma\} / \\ & \{d f | f \text{ a } V\text{-valued meromorphic function on }X_\Gamma\} .\end{aligned}\end{equation} In loc.\ cit.\ de Shalit only considered a special case, but his argument is valid in our general case. If $\omega$ is a $\Gamma$-invariant $V$-valued differential of the second kind on $\CH$, let $F_\omega$ be a primitive of it \cite{deSh-1}, which is defined by Coleman's integral \cite{Col}.
\footnote{More precisely, we choose a branch of Coleman's integral.} Let $P$ be the map $$P: H^1_{\dR,\tau}(X_\Gamma, \mathscr{V})\rightarrow H^1(\Gamma, V) , \hskip 10pt \omega \mapsto (\gamma \mapsto \gamma(F_\omega)-F_\omega).$$ Note that $P\circ \iota$ coincides with $\delta$. Thus $P$ splits the inclusion $\iota\circ \delta^{-1}:H^1(\Gamma,V)\rightarrow H^1_{\dR,\tau}(X_\Gamma,\mathscr{V})$. Let $I$ be the map $$ I: H^1_{\dR,\tau}(X_\Gamma, \mathscr{V})\rightarrow C^1_\mathrm{har}(V)^\Gamma, \hskip 10pt \omega\mapsto (e\mapsto {\mathrm{Res}}_e(\omega)).$$ Now, we suppose that $\Gamma$ is of the form in \cite[Appendix A]{CMP}. We do not describe it precisely, but only point out that $\Gamma_{i, 0}$ in Section \ref{ss:apply} is of this form. \begin{prop}\label{prop:deShalit} We have an exact sequence called the covering filtration exact sequence $$ \xymatrix{ 0 \ar[r] & H^1(\Gamma, V)\ar[r]^{\iota\circ \delta^{-1}} & H^1_{\dR,\tau}(X_\Gamma, \mathscr{V}) \ar[r]^{I} & C^1_\mathrm{har}(V)^\Gamma\ar[r] & 0 . } $$ \end{prop} \begin{proof} What we need to prove is that the map $$H^1_{\dR,\tau}(X_\Gamma, \mathscr{V})\rightarrow H^1(\Gamma, V)\oplus C^1_\mathrm{har}(V)^\Gamma, \hskip 10pt \omega\mapsto (P(\omega), I(\omega))$$ is an isomorphism. When $V$ is the trivial module, this is already proved in \cite{deSh-2}. So we assume that $V$ is not the trivial module. First we prove the injectivity of the above map. For this we only need to repeat the argument in \cite[Theorem 1.6]{deSh-2}. Let $\omega$ be a $\Gamma$-invariant differential form of the second kind on $\CH$ such that $P([\omega])=I([\omega])=0$, where $[\omega]$ denotes the class of $\omega$ in $H^1_{\dR,\tau}(X_\Gamma, \mathscr{V})$. Let $F_\omega$ be a primitive of $\omega$. As $I([\omega])=0$, the residues of $\omega$ vanish, and thus $F_\omega$ is meromorphic. As $P([\omega])=0$, we may adjust $F_\omega$ by a constant so that it is $\Gamma$-invariant. By (\ref{eq:second-diff-form}) we have $[\omega]=0$.
To show the surjectivity we only need to compare the dimensions. By \cite[Appendix A]{CMP} we have $$\dim_{F_\fp}C^1_\mathrm{har}(V)^\Gamma=\dim_{F_\fp} H^1(\Gamma, V)$$ and $$\dim_{F_\fp}H^1(\Gamma,V)=\dim_{F_\fp}H^1(\Gamma, V^*)$$ where $V^*=\Hom_{F_\fp}(V, F_\fp)$ is the dual $F_\fp[\Gamma]$-module. By \cite[Theorem 1]{SchLoc} we have $$\dim_{F_\fp}H^1_{\dR,\tau}(X_\Gamma, \mathscr{V})=\dim_{F_\fp}H^1(\Gamma, V)+\dim_{F_\fp}H^1(\Gamma, V^*).$$ Hence $$ \dim_{F_\fp}H^1_{\dR,\tau}(X_\Gamma, \mathscr{V})=\dim_{F_\fp}C^1_\mathrm{har}(V)^\Gamma+\dim_{F_\fp} H^1(\Gamma, V), $$ as desired. \end{proof} We have also a Hodge filtration exact sequence $$ \xymatrix{ 0 \ar[r] & H^0(X_\Gamma, \mathscr{V}\otimes_{F_\fp,\tau}\Omega^1_{X_\Gamma}) \ar[r] & H^1_{\dR,\tau}(X_\Gamma, \mathscr{V}) \ar[r] & H^1(X_\Gamma, \mathscr{V}\otimes_{F_\fp,\tau}\CO_{X_\Gamma})\ar[r] & 0. } $$ This exact sequence and the covering filtration exact sequence fit into the commutative diagram \[ \xymatrix{ && 0\ar[d] && \\ && H^1(\Gamma, V)\ar[d]^{\iota\circ \delta^{-1}}\ar[rd]^{\simeq} && \\ 0 \ar[r] & H^0(X_\Gamma, \mathscr{V}\otimes_{F_\fp,\tau}\Omega^1_{X_\Gamma})\ar[rd]^{\simeq} \ar[r] & H^1_{\dR,\tau}(X_\Gamma, \mathscr{V}) \ar[r]\ar[d]^{I} & H^1(X_\Gamma, \mathscr{V}\otimes_{F_\fp,\tau}\CO_{X_\Gamma}) \ar[r] & 0 .\\ && C^1_\mathrm{har}(V)^\Gamma \ar[d] && \\ && 0 && } \] The diagonal arrows are isomorphisms. Indeed, by \cite[Section 3]{deSh-1} the south-west arrow is an isomorphism; one easily deduces from this that the north-east arrow is also an isomorphism. In particular, we have a Hodge-like decomposition \begin{equation} \label{eq:hodge-cover} H^1_{\dR,\tau}(X_\Gamma, \mathscr{V}) = H^0(X_\Gamma, \mathscr{V}\otimes_{F_\fp,\tau}\Omega^1_{X_\Gamma}) \bigoplus \iota\circ \delta^{-1} ( H^1(\Gamma, V) ). \end{equation} \subsection{De Rham cohomology of $\mathscr{F}(\mathrm{k})$} \label{ss:apply} Let $\bar{B}$ be as in Section \ref{ss:notations}. 
Write $\widehat{\bar{B}}^\times :=(\bar{B}\bigotimes \BA_{ f})^\times$ and $\widehat{\bar{B}}^{p,\times} :=(\bar{B}\bigotimes \BA_{ f}^p)^\times$. Let $U$ be a compact open subgroup of $G(\BA_f)$ that is of the form $U_{p,0}U^p$. We identify $U^p=\prod\limits_{\fl\neq \fp}U_\fl$ with a subgroup of $\widehat{\bar{B}}^{p,\times}$. Write $\widehat{\bar{B}}^{p,\times} = \sqcup_{i=1}^{h} \bar{B}^\times x_i U^p$. For $i=1,\cdots, h$, put $$ \widetilde{\Gamma}_i = \{ \gamma\in \bar{B}^\times: \gamma_\mathfrak{l}\in (x_i)_\mathfrak{l} U_\mathfrak{l} (x_i)_\mathfrak{l}^{-1} \text{ for }\mathfrak{l}\neq \mathfrak{p} \} .$$ Then $X_{U^p}$ is isomorphic to $$ \bar{B}^\times \backslash (\CH \times G(\BQ_p)/U_{p,0} \times \widehat{\bar{B}}^{p,\times}/U^p) \cong \sqcup_{i=1}^h \widetilde{\Gamma}_i \backslash (\CH \times \BZ) . $$ Here we identify $\BZ$ with $G(\BQ_p)/U_{p,0}$. Note that $\widetilde{\Gamma}_i$ acts transitively on $G(\BQ_p)/U_{p,0}$. Note also that a point of $\BZ=G(\BQ_p)/U_{p,0}$ is fixed by $\gamma\in \bar{B}^\times$ if and only if $\gamma_\fp$ is in $\mathrm{GL}(2, \mathscr{O}_{F_\fp})$. Put \begin{eqnarray*} \widetilde{\Gamma}_{i,0} &=& \{ \gamma\in \widetilde{\Gamma}_i: |\det(\gamma_\mathfrak{p})|_\fp=1 \} \\ &=& \{ \gamma\in \bar{B}^\times: \gamma_\mathfrak{l}\in (x_i)_\mathfrak{l} U_\mathfrak{l} (x_i)_\mathfrak{l}^{-1} \text{ for }\mathfrak{l}\neq \mathfrak{p} \text{ and } |\det(\gamma_\mathfrak{p})|_\fp=1 \} . \end{eqnarray*} Let $\Gamma_{i,0}$ be the image of $\widetilde{\Gamma}_{i,0}$ in $\mathrm{PGL}(2, F_\fp)$. Then we have an isomorphism \begin{equation} \label{eq:decomp} X_{U^p} \cong \sqcup_{i=1}^h \Gamma_{i,0}\backslash \CH .\end{equation} Applying the constructions in Section \ref{ss:cover-hodge} to each part $\Gamma_{i,0}\backslash \CH$ of $X_{U^p}$, we obtain operators $\iota$, $P$ and $I$.
\section{Automorphic Forms on totally definite quaternion algebras and Teitelbaum type $L$-invariants} \label{sec:Teit-L-inv} In this section we recall Chida, Mok and Park's definition of Teitelbaum type $L$-invariant \cite{CMP}. \subsection{Automorphic Forms on totally definite quaternion algebras} We recall the theory of automorphic forms on totally definite quaternion algebras. Let $\bar{B}$ be as in Section \ref{sec:p-unif}, which is a totally definite quaternion algebra over $F$. Let $\Sigma=\prod_{\mathfrak{l}}\Sigma_{\mathfrak{l}}$ be a compact open subgroup of $\widehat{\bar{B}}^\times$. Let $\chi_{F,\mathrm{cyc}}:\BA_F^\times/F^\times\rightarrow \BZ_p^\times$ be the Hecke character obtained by composing the cyclotomic character $\chi_{\BQ,\mathrm{cyc}}: \BA_\BQ^\times/\BQ^\times\rightarrow \BZ_p^\times$ and the norm map from $\BA_F^\times$ to $\BA_\BQ^\times$. \begin{defn} An {\it automorphic form} on $\bar{B}^\times$, of {\it weight} $\mathrm{k}=(k_1,\cdots, k_g, w)$ and {\it level} $\Sigma$, is a function $\mathbf{f}: \widehat{\bar{B}}^\times\rightarrow V(\mathrm{k})$ that satisfies $$ \mathbf{f}( z \gamma b u ) = \chi_{F,\mathrm{cyc}}^{2-w}(z) (u_p^{-1}\cdot \mathbf{f}(b)) $$ for all $\gamma\in \bar{B}^\times$, $u\in \Sigma$, $b\in \widehat{\bar{B}}^\times$ and $z\in \widehat{F}^\times$. Denote by $S^{\bar{B}}_{\mathrm{k}}(\Sigma)$ the space of such forms. Remark that our $S^{\bar{B}}_{\mathrm{k}}(\Sigma)$ coincides with $S^{\bar{B}}_{\mathrm{k}',\mathrm{v}}(\Sigma)$ for $\mathrm{k}'=(k_1-2,\cdots, k_g-2)$ and $\mathrm{v}=(\frac{w-k_1}{2}, \frac{w-k_2}{2}, \cdots, \frac{w-k_g}{2})$ in \cite{CMP}. \end{defn} Observe that a form $\mathbf{f}$ of level $\Sigma$ is determined by its values on the finite set $\bar{B}^\times\backslash \widehat{\bar{B}}^\times/ \Sigma$. 
As in Section \ref{ss:apply} write $\widehat{\bar{B}}^{\times} = \sqcup_{i=1}^{h} \bar{B}^\times x_i \GL(2, F_\fp) \Sigma$; for $i=1,\cdots, h$, put $$ \widetilde{\Gamma}_i = \{ \gamma\in \bar{B}^\times: \gamma_\mathfrak{l}\in (x_i)_\mathfrak{l} \Sigma_\mathfrak{l} (x_i)_\mathfrak{l}^{-1} \text{ for }\mathfrak{l}\neq \mathfrak{p} \} .$$ Then we have a bijection $$ \sqcup_{i=1}^h \widetilde{\Gamma}_i\backslash \GL(2,F_\fp)/ \Sigma_\fp \xrightarrow{\sim} \bar{B}^\times \backslash \widehat{\bar{B}}^\times/\Sigma . $$ The class of $g$ in $\widetilde{\Gamma}_i\backslash \GL(2,F_\fp)/ \Sigma_\fp$ corresponds to the class of $x_{i,\fp} g_\fp $ in $\bar{B}^\times \backslash \widehat{\bar{B}}^\times/\Sigma$, where $g_\fp$ is the element of $\widehat{\bar{B}}^\times$ that is equal to $g$ at the place $\fp$, and equal to the identity at every other place. Using this we can attach to an automorphic form $\mathbf{f}$ of weight $\mathrm{k}$ and level $\Sigma$ an $h$-tuple of functions $(f_1, \cdots, f_h)$ on $\GL(2, F_\fp)$ with values in $V(\mathrm{k})$ defined by $f_i(g)=\mathbf{f}(x_{i,\fp} g_\fp)$. The function $f_i$ satisfies $$ f_i(\gamma_\fp g uz ) = \chi_{F,\mathrm{cyc}}^{2-w}(z) u^{-1}\cdot f_i(g) $$ for all $\gamma_\fp \in \widetilde{\Gamma}_i$, $g\in \GL(2,F_\fp)$, $u\in \Sigma_\fp$ and $z\in F_\fp^\times$. For each prime $\fl$ of $F$ such that $\bar{B}$ splits at $\fl$, $\fl\neq \fp$, and $\Sigma_\fl$ is maximal, one defines a Hecke operator $T_\fl$ on $S^{\bar{B}}_{\mathrm{k}}(\Sigma)$ as follows. Fix an isomorphism $\iota_\fl: \bar{B}_\fl\rightarrow M_2(F_\fl)$ such that $\Sigma_\fl$ becomes identified with $\mathrm{GL}_2(\mathfrak{o}_{F_\fl})$. Let $\pi_\fl$ be a uniformizer of $\mathfrak{o}_{F_\fl}$. 
Given a double coset decomposition $$ \mathrm{GL}_2(\mathfrak{o}_{F_\fl}) \wvec{1}{0}{0}{\pi_\fl} \mathrm{GL}_2(\mathfrak{o}_{F_\fl}) =\coprod b_i \mathrm{GL}_2(\mathfrak{o}_{F_\fl}) $$ we define the Hecke operator $T_\fl$ on $S^{\bar{B}}_{\mathrm{k}}(\Sigma)$ by $$ (T_\fl \mathbf{f})(b) =\sum_i \mathbf{f}(b b_i) . $$ We define $U_\fp$ similarly. Let $\BT_{\Sigma}$ be the Hecke algebra generated by $U_\fp$ and these $T_\fl$. Denote by $\mathfrak{o}_F^{(\fp)}$ the ring of $\fp$-integers of $F$ and $(\mathfrak{o}_F^{(\fp)})^\times$ the group of $\fp$-units of $F$. We have $\widetilde{\Gamma}_i\cap F^\times = (\mathfrak{o}_F^{(\fp)})^\times$. For $i=1,\cdots, h$, put $\Gamma_i=\widetilde{\Gamma}_i/(\mathfrak{o}_F^{(\fp)})^\times$. Consider the following twisted action of $\widetilde{\Gamma}_i$ on $V(\mathrm{k})$: $$ \gamma \star v = |\mathrm{Nrd}_{\bar{B}/F}\gamma|_\fp^{\frac{w-2}{2}} \gamma_\fp \cdot v .$$ Then $(\mathfrak{o}_F^{(\fp)})^\times$ acts trivially on $V(\mathrm{k})$, so we may consider $V(\mathrm{k})$ as a $\Gamma_i$-module via the above twisted action. \subsection{Teitelbaum type $L$-invariants} \label{ss:Teitelbaum} Chida, Mok and Park \cite{CMP} defined the Teitelbaum type $L$-invariant for automorphic forms $\mathbf{f}\in S^{\bar{B}}_{\mathrm{k}}(\Sigma)$ satisfying the condition (CMP) given in the introduction: \begin{equation*} \mathbf{f} \text{ is new at } \fp \text{ and } U_\fp \mathbf{f} = \CN \fp ^{w/2} \mathbf{f}. \end{equation*} \noindent We recall their construction below. We attach to each $f_i$ a $\Gamma_i$-invariant $V(\mathrm{k})$-valued cocycle $c_{f_i}$, where $\Gamma_i$ acts on $V(\mathrm{k})$ via $\star$. For $e=(s,t)\in \mathrm{E}(\CT)$, represent $s$ and $t$ by lattices $L_s$ and $L_t$ such that $L_s$ contains $L_t$ with index $\CN \mathfrak{p}$. Let $g_e\in\GL(2,F_\fp)$ be such that $g_e(\mathfrak{o}_{F_\fp}^2)=L_s$. Then we define $c_{f_i}(e):= g_e\star f_i(g_e) = |\det(g_e)|_\fp^{\frac{w-2}{2}}\, g_e\cdot f_i(g_e)$. 
If $\mathbf{f}$ satisfies (CMP), then $c_{f_i}$ is in $C^1_\mathrm{har}(V(\mathrm{k}))^{\Gamma_i}$ \cite[Proposition 2.7]{CMP}. Thus we obtain a vector of harmonic cocycles $c_{\mathbf{f}}=(c_{f_1},\cdots, c_{f_h})$. For each $c\in C^1_\mathrm{har}(V(\mathrm{k}))^{\Gamma_i}$ we define $\kappa^{\mathrm{sch}}_c$ to be the following function on $\Gamma_i$ with values in $V(\mathrm{k})$: fixing some $v\in \mathrm{V}(\CT)$, for each $\gamma\in\Gamma_i$, we put $$ \kappa^{\mathrm{sch}}_c(\gamma) := \sum_{e:v\rightarrow \gamma v} c(e) $$ where $e$ runs over the edges in the geodesic joining $v$ and $\gamma v$. As $c$ is $\Gamma_i$-invariant, $\kappa_c^{\mathrm{sch}}$ is a $1$-cocycle. Furthermore, the class of $\kappa^{\mathrm{sch}}_c$ in $H^1(\Gamma_i, V(\mathrm{k}))$ is independent of the choice of $v$. Hence we obtain a map $$ \kappa^{\mathrm{sch}}: \bigoplus_{i=1}^h C^1_\mathrm{har}( V(\mathrm{k}))^{\Gamma_i}\rightarrow \bigoplus_{i=1}^h H^1(\Gamma_i, V(\mathrm{k})) . $$ By \cite[Proposition 2.9]{CMP} $\kappa^{\mathrm{sch}}$ is an isomorphism. For each $\sigma: F_\fp\rightarrow L_\mathfrak{P}$, let $L_{\mathfrak{P},\sigma}(k,v)$ be the dual of $V_\sigma(k,v)$ with the right action of $\GL(2, F_\fp)$: if $g\in \GL(2,F_\fp)$, $P' \in L_{\mathfrak{P}, \sigma}(k,v)$ and $P\in V_\sigma(k,v)$, then $ \langle P' , g\cdot P \rangle = \langle P'|_g , P\rangle $. We realize $L_{\mathfrak{P},\sigma}(k,v)$ by the same space as $V_\sigma(k,v)$, with the pairing $$\langle X_\sigma^jY_\sigma^{k-2-j}, X_\sigma^{j'}Y_\sigma^{k-2-j'} \rangle = \left\{ \begin{array}{ll} 1 & \text{ if }j=j' \\ 0 & \text{ if }j\neq j' \end{array}\right.$$ and the right $\GL(2, F_\fp)$-action $$P|_\wvec{a}{b}{c}{d} = P( \sigma(a) X_\sigma + \sigma(b) Y_\sigma, \sigma(c) X_\sigma + \sigma(d) Y_\sigma ). 
$$ Put \begin{equation} \label{eq:L-sigma} L_\mathfrak{P}(\mathrm{k})^\tau :=\bigotimes_{\sigma\neq\tau}L_{\mathfrak{P},\sigma}(k_\sigma,(w-k_\sigma)/2) \end{equation} with the right action of $\GL(2,F_\fp)$, where the tensor product is taken over $L_\mathfrak{P}$. For each harmonic cocycle $c\in C^1_\mathrm{har}( V(\mathrm{k}))^{\Gamma_i}$ the method of Amice-Velu and Vishik allows one to define the $V(\mathrm{k})^\tau$-valued rigid analytic distribution $\mu_{c}^\tau$ on $\RP^1(F_\fp)$ such that the value of $ \int_{U_e} t^j \mu_{c}^\tau(t) \in V(\mathrm{k})^\tau$ ($0\leq j\leq k_\tau-2$) satisfies $$ \langle Q, \int_{U_e} t^j \mu_{c}^\tau(t)\rangle= \frac{ \langle X_\tau^j Y_\tau^{k_\tau-2-j}\otimes Q, c(e)\rangle}{\binc{k_\tau-2}{j}} $$ for each $Q\in L_\mathfrak{P}(\mathrm{k})^\tau$. \footnote{There are minor inaccuracies in the definitions of $\mu_c^\tau$ and $\lambda^\tau_c$ in \cite{CMP}. See \cite{Teit} Definition 6 and the paragraph before Proposition 9.} Using $\mu_{c}^\tau$ we obtain a $V(\mathrm{k})^\tau$-valued rigid analytic function $g^\tau_c$, more precisely a global section of $V(\mathrm{k})^\tau\otimes_{\tau, F_\fp} \CO_{\CH,\widehat{F_\fp^\ur}}$, by $$ g^\tau_{c}(z) =\int_{\RP^1(F_\fp)}\frac{1}{z-t} \mu_{c}^\tau(t) $$ for $z\in \CH_{\widehat{F_\fp^\ur}}$. The function $g_c^\tau$ satisfies the following transformation property: for $\gamma\in \widetilde{\Gamma}_i$, let $\wvec{a}{b}{c}{d}$ be the image of $\gamma$ in $\bar{B}_\fp\cong \GL(2,F_\fp)$; then $$ g_c^\tau(\gamma\cdot z) = |\mathrm{Nrd}_{\bar{B}/F}\gamma|_\fp^{\frac{w-2}{2}} (\mathrm{Nrd}_{\bar{B}/F}\gamma )^{\frac{w-2-k_\tau}{2}} (cz+d)^{k_\tau} \gamma \cdot g_c^\tau(z) . $$ Consider $V(\mathrm{k})$ as an $F_\fp$-module via $\tau: F_\fp\hookrightarrow L_\mathfrak{P}$. We define the $V(\mathrm{k})$-valued cocycle $\lambda^\tau_c$ as follows. Fix a point $z_0\in \CH$. 
For each $\gamma\in\Gamma_i$ the value $\lambda_c^\tau(\gamma)$ is given by the formula: for $Q\in L_\mathfrak{P}(\mathrm{k})^\tau$, $$ \langle X_\tau^jY_\tau^{k_\tau-2-j}\otimes Q, \lambda_c^\tau(\gamma)\rangle = \binc{k_\tau-2}{j}\langle Q, \int_{z_0}^{\gamma z_0} z^j g_c^\tau(z)\mathrm{d}z \rangle $$ $(0\leq j\leq k_\tau-2)$, where the integral is the branch of Coleman's integral chosen in Section \ref{sec:cover-hodge}. Then $\lambda_c^\tau$ is a $1$-cocycle on $\Gamma_i$, and the class of $\lambda_c^\tau$ in $H^1(\Gamma_i, V(\mathrm{k}))$, denoted by $[\lambda_c^\tau]$, is independent of the choice of $z_0$. This defines a map $$ \kappa^{\mathrm{col},\tau}: \bigoplus_{i=1}^h C^1_\mathrm{har}(V(\mathrm{k}))^{\Gamma_i}\rightarrow \bigoplus_{i=1}^h H^1(\Gamma_i, V(\mathrm{k})), \hskip 10pt (c_i)_i \mapsto ([\lambda_{c_i}^\tau])_i. $$ As $\kappa^{\mathrm{sch}}$ is an isomorphism, for each $\tau$ there exists a unique $\ell_\tau\in L_\mathfrak{P}$ such that $$\kappa^{\mathrm{col},\tau}(c_{\mathbf{f}})=\ell_\tau \kappa^{\mathrm{sch}}(c_{\mathbf{f}}).$$ The {\it Teitelbaum type $L$-invariant} of $\mathbf{f}$, denoted by $\CL_T(\mathbf{f})$, is defined to be the vector $(\ell_\tau)_\tau$ \cite[Section 3.2]{CMP}. We also write $\CL_{T,\tau}(\mathbf{f})$ for $\ell_\tau$. \section{Comparing $L$-invariants} \label{sec:compare} Let $B$, $\bar{B}$, $G$ and $\bar{G}$ be as before. Let $\fn^-$ be the conductor of $\bar{B}$. By our assumption on $\bar{B}$, $\fp\nmid \fn^-$ and the conductor of $B$ is $\fp\fn^-$. Let $\fn^+$ be an ideal of $\mathfrak{o}_F$ that is prime to $\fp\fn^-$, and put $\fn:=\fp\fn^+\fn^-$. 
For any prime ideal $\fl$ of $\mathfrak{o}_F$, put $$ \bar{R}_\fl := \left\{ \begin{array}{ll}\text{a maximal compact open subgroup of } \bar{B}_\fl^\times & \text{ if } \fl \text{ is prime to } \fn, \\ \text{the maximal compact open subgroup of } \bar{B}_\fl^\times & \text{ if } \fl \text{ divides } \fn^-, \\ \text{the unit group of an Eichler order of } \bar{B}_\fl \text{ of level } \fl^{\mathrm{val}_\fl(\fp\fn^+)} & \text{ if } \fl \text{ divides }\fp\fn^+. \end{array} \right. $$ Let $\bar{\Sigma}=\Sigma(\fp\fn^+,\fn^-)$ be the level $\prod_\fl \bar{R}_\fl $. We write $S^{\bar{B}}_{\mathrm{k}}(\fp\fn^+,\fn^-)$ for $S^{\bar{B}}_{\mathrm{k}}(\Sigma(\fp\fn^+,\fn^-))$. Similarly we define $\Sigma=\Sigma(\fn^+,\fp\fn^-)$, a compact open subgroup of $G(\BA_f)$. Let $S_\mathrm{k}^B(\fn^+, \fp\fn^-)$ be the space of modular forms on the Shimura curve $M$ of weight $\mathrm{k}$ and level $\Sigma$. Let $\mathrm{k}=(k_1,\cdots, k_g,w)$ be a multiweight such that $k_1\equiv \cdots \equiv k_g\equiv w \hskip 6pt \mathrm{mod} \hskip 3pt 2$ and $k_1,\cdots ,k_g$ are all even and larger than $2$. Let $f_\infty$ be a (Hilbert) eigen cusp newform of weight $\mathrm{k}$ and level $\fn$ that is new at $\mathfrak{p}\mathfrak{n}^-$. Let $\mathbf{f}\in S_{\mathrm{k}}^{\bar{B}}(\fp\fn^+, \fn^-)$ (resp. $f_B\in S_{\mathrm{k}}^{B}(\fn^+, \fp\fn^-)$) be an eigen newform corresponding to $f_\infty$ by the Jacquet-Langlands correspondence; $\mathbf{f}$ (resp. $f_B$) is unique up to scalars. We further assume that $\mathbf{f}$ satisfies (CMP), so that we can attach to $\mathbf{f}$ the Teitelbaum type $L$-invariant $\CL_T(\mathbf{f})$. We define $\CL_T(f_\infty)$ to be $\CL_T(\mathbf{f})$. The goal of this section is to compare $\CL_{FM}(f_\infty)$ and $\CL_T(f_\infty)$. Let $L$ be a (sufficiently large) finite extension of $F$ that splits $B$ and contains all Hecke eigenvalues acting on $f_\infty$. Let $\lambda$ be an arbitrary place of $L$. 
\begin{lem} \label{lem:saito} \cite[Lemma 3.1]{Saito} There is an isomorphism $$ H^1_{\mathrm{et}}(M_{\overline{F}}, \CF(\mathrm{k})_\lambda) \simeq \bigoplus_{f'} \pi^\infty_{f', L(f')}\otimes_{L(f')} (\bigoplus_{\lambda'|\lambda} \rho_{f', \lambda'}) $$ of representations of $G(\BA_f)\times \Gal(\overline{F}/F)$ over $L_\lambda$. Here $f'$ runs through the conjugacy classes over $L$, up to scalars, of eigen newforms of multiweight $\mathrm{k}$ that are new at primes dividing $\fp\fn^-$. The extension of $L$ generated by the Hecke eigenvalues acting on $f'$ is denoted by $L(f')$, and $\lambda'$ runs through places of $L(f')$ above $\lambda$. \end{lem} By the strong multiplicity one theorem (cf. \cite{PS}) there exists a primitive idempotent $e_{\mathbf{f}}\in \BT_{\bar{\Sigma}}$ such that $e_{\mathbf{f}}\BT_{\bar{\Sigma}}=Le_{\mathbf{f}}$ and $e_{\mathbf{f}}\cdot S_{\mathrm{k}}^{\bar{B}}(\Sigma(\fp\fn^+,\fn^-)) = L\cdot \mathbf{f}$. Lemma \ref{lem:saito} tells us that $e_{\mathbf{f}}\cdot H^1_{\mathrm{et}}(M_{\overline{F}}, \CF(\mathrm{k})_\lambda)^{\Sigma}$ is exactly $\rho_{f_B,\lambda}$, the $\lambda$-adic representation of $\Gal(\overline{F}/F)$ attached to $f_B$. By Carayol's construction of $\rho_{f_\infty,\lambda}$ \cite{Car2} $\rho_{f_\infty,\lambda}$ coincides with $\rho_{f_B,\lambda}$. Now we take $\lambda$ to be a place above $\fp$, denoted by $\mathfrak{P}$. Recall that in Section \ref{ss:apply} and Section \ref{sec:Teit-L-inv} we associate to $\bar{\Sigma}$ the groups $\widetilde{\Gamma}_{i,0}, \widetilde{\Gamma}_i,\Gamma_i, \Gamma_{i,0}$ ($i=1,\cdots,h$). By (\ref{eq:decomp}) $X_{\Sigma}$ is isomorphic to $\coprod_{i} X_{\Gamma_{i,0}}$, where $X_{\Gamma_{i,0}}=\Gamma_{i,0}\backslash \CH_{\widehat{F_\fp^\ur}}$. \begin{thm} \label{thm:semistable} Let $f_B$ be as above. 
Then $\rho_{f_B, \mathfrak{P}, \fp}$ is a semistable $($non-crystalline$)$ representation of $\Gal(\overline{F}_\fp/F_\fp)$, and the filtered $(\varphi_q,N)$-module $D_{{\mathrm{st}}, F_\fp}(\rho_{f_B,\mathfrak{P},\fp})$ is a monodromy $L_\mathfrak{P}$-module. \end{thm} \begin{proof} To show that $\rho_{f_B, \mathfrak{P}, \fp}$ is semistable, we only need to prove that $H^1_{\mathrm{et}}((X_{\Sigma})_{{\bar{F}}_\fp}, \CF(\mathrm{k}))$ is semistable, since $\rho_{f_B, \mathfrak{P}, \fp}$ is a subrepresentation of $H^1_{\mathrm{et}}((X_{\Sigma})_{{\bar{F}}_\fp}, \CF(\mathrm{k}))$. But this follows from Proposition \ref{prop:semistable} and the fact that $X_{\Sigma}$ is semistable. Next we prove that $D_{{\mathrm{st}}, F_\fp}(\rho_{f_B,\mathfrak{P},\fp})$ is a monodromy $L_\mathfrak{P}$-module. We only need to consider $D_{{\mathrm{st}}, {\widehat{F_\fp^\ur}}}(\rho_{f_B,\mathfrak{P},\fp})$ instead. Twisting $f_B$ by a central character we may assume that $w=2$. Being a Shimura variety, $N_{E}$ is really a family of varieties indexed by level subgroups; in what follows we use $N_E$ to denote any member of this family whose level subgroup has $\fp$-factor equal to $\mathscr{O}_{E_\fp}^\times$. By the proof of Lemma \ref{lem:fil-mod-1}, any geometric point of $(N_E)_{ \widehat{F_\fp^\ur}}$ is defined over $\widehat{F_\fp^\ur}$. In other words, $(N_E)_{ \widehat{F_\fp^\ur}}$ is a disjoint union of several copies of $\Spec( \widehat{F_\fp^\ur})$. Let $\mathrm{pr}_1$ be the projection $X_{\Sigma}\times (N_E)_{\widehat {F_\fp^\ur}}\rightarrow X_{\Sigma}$. By Corollary \ref{cor:fil-isocry} the filtered $\varphi_q$-isocrystal attached to $\mathrm{pr}_1^*\mathcal{F}(\mathrm{k})$ is $\mathrm{pr}_1^*\mathscr{F}(\mathrm{k})$. 
Note that $$ H^1_{\mathrm{et}}((X_{\Sigma}\times (N_E)_{ \widehat{F_\fp^\ur}})_{\widehat{\overline{F}}_\fp}, \mathrm{pr}_1^*\mathcal{F}(\mathrm{k}))= H^0_{\mathrm{et}}((N_{E})_{\widehat{\overline{F}}_\fp},\BQ_p)\otimes_{\BQ_p} H^1_{\mathrm{et}}((X_{\Sigma})_{\widehat{\overline{F}}_\fp}, \mathcal{F}(\mathrm{k})) . $$ The $\Gal(\overline{F}_\fp/F_\fp^\ur)$-representation $H^0_{\mathrm{et}}((N_{E})_{\widehat{\overline{F}}_\fp},\BQ_p)$ is crystalline and the associated filtered $\varphi_q$-module is $H^0_\dR((N_E)_{ \widehat{F^\ur_\fp}},\BQ_p)$ with trivial filtration. Let $H^0$ denote this filtered $\varphi_q$-module for simplicity. As a consequence, we have an isomorphism of filtered $(\varphi_q,N)$-modules \begin{equation} H^0\otimes _{\BQ_p} D_{{\mathrm{st}}, \widehat{F_\fp^\ur}}(H^1_{\mathrm{et}}( (X_{\Sigma})_{\widehat{\overline{F}}_\fp}, \mathcal{F}(\mathrm{k}))) = H^0 \otimes _{\BQ_p} H^1_\dR( X_{\Sigma} , \mathscr{F}(\mathrm{k}) ) . \end{equation} Using the decomposition (\ref{eq:decom-isocrystal}), for each embedding $\tau:F_\fp\hookrightarrow L_\mathfrak{P}$ we put $$ H^1_{\dR,\tau} ( X_{\Sigma} , \mathscr{F}(\mathrm{k}) ) := \mathbb{H}^1(X_{\Sigma} , \CV(\mathrm{k})\otimes_{\tau, F_\fp } \Omega^\bullet_{X_{\Sigma}} ). $$ In Section \ref{ss:Teitelbaum} we attached to $\mathbf{f}=(f_1,\cdots, f_h)$ an $h$-tuple $g^\tau=(g_1^\tau,\cdots, g_h^\tau)$. Let $M_{\tau}(\mathbf{f})$ denote the $L_\mathfrak{P}$-subspace of $\bigoplus_i H^1_{\dR,\tau}(X_{\Gamma_{i,0}}, \mathscr{F}(\mathrm{k}))$ generated by the element $$\omega_{\mathbf{f}}^\tau = \Big( g^\tau_i(z)(z X_\tau + Y_\tau)^{k_\tau-2} \mathrm{d} z \Big)_{1\leq i\leq h}.$$ (Note that, when $w=2$, the twisted action $\star$ in Section \ref{sec:Teit-L-inv} coincides with the original action.) Therefore, we have $$ e_\mathbf{f} \cdot {\mathrm{Fil}}^{\frac{w+k_\tau}{2}-2}H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k})) \supseteq M_\tau(\mathbf{f}). 
$$ Consider the pairing $\langle\cdot,\cdot\rangle$ on $V(\mathrm{k})$ defined by $$\langle Q_1,Q_2\rangle \ = \text{the coefficient of }\prod_\sigma (X_\sigma Y_\sigma)^{k_\sigma-2} \text{ in } Q_1Q_2.$$ It is perfect and induces a perfect pairing on $H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k}))$. With respect to this pairing ${\mathrm{Fil}}^{\frac{w-k_\tau}{2}+1} H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k}))$ is orthogonal to ${\mathrm{Fil}}^{\frac{w+k_\tau}{2}-2} H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k}))$. As $e_\mathbf{f} \cdot H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k}) )$ is of rank $2$ over $L_\mathfrak{P}$, we obtain $$ e_\mathbf{f} \cdot {\mathrm{Fil}}^{\frac{w-k_\tau}{2}+1}H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k}))= e_\mathbf{f} \cdot {\mathrm{Fil}}^{\frac{w+k_\tau}{2}-2}H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k}) ) = M_\tau(\mathbf{f}). $$ Thus $$H^0\otimes_{\BQ_p}\mathrm{pr}_1^* M_\tau(\mathbf{f}) = H^0\otimes_{\BQ_p}{\mathrm{Fil}}^{\frac{w+\min_\tau\{ k_\tau\}}{2}-2} D_{{\mathrm{st}}, \widehat{F_\fp^\ur}}(\rho_{f_B,\mathfrak{P},\fp})_\tau.$$ Let $\iota^\tau$ and $I^\tau$ be the operators attached to the sheaf $\mathscr{F}(\mathrm{k})$ over $X_{\Sigma}$ (see Section \ref{ss:apply}). Here, the superscript $\tau$ is used to emphasize the embedding $\tau:F_\fp\hookrightarrow L_\mathfrak{P}$. Proposition \ref{prop:monodromy} tells us that the monodromy $N$ on $H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k}))$ coincides with $\iota^\tau\circ I^\tau$. By Proposition \ref{prop:deShalit} the kernel of $N$ is $$\iota^\tau\circ \delta^{-1} (\bigoplus_i H^1(\Gamma_{i,0}, V(\mathrm{k}))).$$ So, by (\ref{eq:hodge-cover}) the restriction of $N$ to $M_{\tau}(\mathbf{f})$ is injective. Hence, $$\ker(N)\cap {\mathrm{Fil}}^{\frac{w+\min_\tau\{ k_\tau\}}{2}-2}D_{{\mathrm{st}}, \widehat{F_\fp^\ur}}(\rho_{f_B,\mathfrak{P},\fp})_\tau=0,$$ as desired. \end{proof} Let $P^\tau$ be the operator attached to $\mathscr{F}(\mathrm{k})$ (see Section \ref{ss:apply}). 
\begin{lem}\label{lem:PI-comp-kappa} Let $\omega_{\mathbf{f}}^\tau$ be as in the proof of Theorem \ref{thm:semistable}. Then $$ P^\tau(\omega_{\mathbf{f}}^\tau)=\kappa^{\mathrm{col},\tau}(c_{\mathbf{f}}) , \hskip 20pt I^\tau(\omega_{\mathbf{f}}^\tau)=c_{\mathbf{f}}. $$ \end{lem} \begin{proof} The first formula comes from the definitions. The proof of the second formula is similar to that of \cite[Theorem 3]{Teit}. Let $\mu^\tau_i$ ($i=1,\cdots,h$) be the rigid analytic distributions on $\RP^1(F_\fp)$ coming from $c_\mathbf{f}$ (see Section \ref{sec:Teit-L-inv}). Recall that $$ g_{i}^\tau(z) = \int_{\RP^1(F_\fp)} \frac{1}{z-t}\mu_i^\tau (t) . $$ For each edge $e$ of $\CT$ let $B(e)$ be the affinoid open disc in $\RP^1(\BC_p)$ that corresponds to $e$. Note that $B(e)$ meets the limit set $\RP^1(F_\fp)$ in a compact open subset $U(e)$. Put $$ g^\tau_{i,e}(z)=\int_{U(e)} \frac{1}{z-t} \mu_i^\tau(t).$$ Let $a(e)$ be a point in $U(e)$. Expanding $\frac{1}{z-t}$ at $a(e)$ we obtain that $$ g^\tau_{i,e}(z) =\sum_{n=0}^{+\infty} \frac{1}{(z-a(e))^{n+1}} \int_{U(e)} (t-a(e))^n \mu^\tau_i(t) , $$ and thus $g^\tau_{i,e}(z)$ converges on the complement of $B(e)$. Note that $g^\tau_i-g^\tau_{i,e}$ is analytic on $B(e)$. So, we have \begin{eqnarray*} && I^\tau(g_i^\tau (z X_\tau + Y_\tau)^{k_\tau-2}\mathrm{d} z) (e) = {\mathrm{Res}}_e \Big(g_i^\tau (z X_\tau + Y_\tau)^{k_\tau-2}\mathrm{d} z \Big) = {\mathrm{Res}}_e \Big( g^\tau_{i,e} (zX_\tau+Y_\tau)^{k_\tau-2}\mathrm{d} z \Big) \\ && = {\mathrm{Res}}_e \Big( \int_{U(e)} \frac{ ( z X_\tau + Y_\tau)^{k_\tau-2} } {z-t} \mu_i^\tau(t) \Big) = \int_{U(e)} ( t X_\tau + Y_\tau)^{k_\tau-2} \mu_i^\tau(t) = c_{f_i}(e), \end{eqnarray*} where the fourth equality follows from the fact that ${\mathrm{Res}}_e$ commutes with $\int_{U(e)} \ \cdot \ \mu_i^\tau(t)$. \end{proof} \begin{thm} \label{thm:main-text} Let $f_\infty$ be as above. Then $\CL_{FM}(f_\infty)=\CL_T(f_\infty)$. 
\end{thm} \begin{proof} Twisting $f_\infty$ by a central character we may assume that $w=2$. Put $D_\tau=H^0\otimes_{\BQ_p} e_{\mathbf{f}} H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k}) )$. Note that $N=\iota^\tau\circ I^\tau$. As the kernel of $N$ coincides with the image of $\iota^\tau\circ \delta^{-1}$ and $P^\tau$ splits $\iota^\tau\circ \delta^{-1}$, we have $D_\tau = \ker(N) \oplus \ker(P^\tau)$. Write $\omega_{\mathbf{f}}^\tau= x+y$ according to this decomposition. Then \begin{equation}\label{eq:proof-a} \iota^\tau\circ \delta^{-1}\circ P^\tau (\omega^\tau_\mathbf{f}) = x .\end{equation} By the proof of Theorem \ref{thm:semistable}, $y$ is non-zero and so $N(y)\neq 0$. By Lemma \ref{lem:PI-comp-kappa} and the definition of Teitelbaum type $L$-invariant, $\CL_{T,\tau}(f_\infty)$ is characterized by the property \begin{equation} \label{eq:proof-b} (P^\tau-\CL_{T,\tau}(f_\infty) \epsilon\circ I^\tau)\omega^\tau_{\mathbf{f}} = 0, \end{equation} where $\epsilon$ is the map defined by (\ref{eq:sch}) which coincides with $\kappa^{\mathrm{sch}}$. As $\delta^{-1}\circ\epsilon =-\mathrm{id}$ and $\iota^\tau\circ I^\tau=N$, we have \begin{equation}\label{eq:proof-c} \iota^\tau \circ \delta^{-1} \circ \epsilon \circ I^\tau (\omega^\tau_\mathbf{f}) = - N(\omega_\mathbf{f}^\tau) . \end{equation} By (\ref{eq:proof-a}), (\ref{eq:proof-b}) and (\ref{eq:proof-c}) we get \begin{equation}\label{eq:L-Teit} \CL_{T,\tau}(f_\infty)N(\omega_\mathbf{f}^\tau) + x = 0. \end{equation} By the definition of Fontaine-Mazur $L$-invariant, $\CL_{FM,\tau}(f_\infty)$ is characterized by the property \begin{equation} \label{eq:L-FM} y-\CL_{FM,\tau}(f_\infty)N(y) \in H^0\otimes_{\BQ_p}{\mathrm{Fil}}^{\frac{w+\min_\tau\{ k_\tau\}}{2}-2} H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k})). 
\end{equation} Combining (\ref{eq:L-Teit}) and (\ref{eq:L-FM}) we obtain {\allowdisplaybreaks \begin{eqnarray*} && (\CL_{FM,\tau}(f_\infty)-\CL_{T,\tau}(f_\infty))N(y) \\ & = & \CL_{FM,\tau}(f_\infty)N(y) - \CL_{T,\tau}(f_\infty)N(\omega_\mathbf{f}^\tau) \\ & \in & \omega_{\mathbf{f}}^\tau + H^0\otimes_{\BQ_p}{\mathrm{Fil}}^{\frac{w+\min_\tau\{ k_\tau\}}{2}-2} H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k}) ) \\ &=& H^0\otimes_{\BQ_p}{\mathrm{Fil}}^{\frac{w+\min_\tau\{ k_\tau\}}{2}-2} H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k}) ). \end{eqnarray*} } \noindent But $N(y)$ is in $\ker(N)$ and is non-zero, and by Theorem \ref{thm:semistable} $$\ker(N)\cap H^0\otimes_{\BQ_p}{\mathrm{Fil}}^{\frac{w+\min_\tau\{ k_\tau\}}{2}-2} H^1_{\dR,\tau}(X_{\Sigma}, \mathscr{F}(\mathrm{k}) ) = 0 .$$ Therefore $$\CL_{FM,\tau}(f_\infty)-\CL_{T,\tau}(f_\infty)=0,$$ as wanted. \end{proof} \end{document}
Journal of the Brazilian Computer Society volume 19, pages 361–382 (2013), Original Paper

On the analysis of the collaboration network of the Brazilian symposium on computer networks and distributed systems: 30 editions of history

Guilherme Maia, Pedro O. S. Vaz de Melo, Daniel L. Guidoni, Fernanda S. H. Souza, Thiago H. Silva, Jussara M. Almeida & Antonio A. F. Loureiro

The Brazilian symposium on computer networks and distributed systems (SBRC) reached its 30th edition as the paramount scientific event in the area of computer networks and distributed systems in Brazil. Faced with this opportune moment in the event's history, we here study the collaboration network established among authors who have jointly published in the symposium. Towards that end, we collected bibliographic data from all 30 editions and built the co-authorship network of the event. We then analyzed the network's structural features and its evolution throughout the event's history. Our results reveal the main kinds of co-author relationships, show the most prominent communities within SBRC, the regions of Brazil that attract the most authors, and the researchers with central roles in the network, as well as the importance of inter-state collaborations. Finally, we align our results with historical facts that may have had a key impact on the symposium's success.

In 2012, the Brazilian symposium on computer networks and distributed systems (SBRC) reached its 30th edition as the paramount scientific event in the area of computer networks and distributed systems in Brazil. Its importance may be evidenced by the number of papers submitted and by the number of participants in the last editions of the event. For instance, in the last few editions, the symposium received between 250 and 300 papers from about 1,000 authors, including researchers, professionals and students. 
Due to its wide acceptance, SBRC assembles most of the work in the areas of computer networks and distributed systems from Brazil's academic and professional communities, besides international researchers. Scientific events play a central role in knowledge dissemination, since they are one of the few opportunities for researchers with common interests to gather together, present new ideas and establish new collaborations. SBRC is no different, as we shall show throughout this paper. Hence, given this opportune moment in the event's history, a broad investigation of this research community is timely. We use social network analysis (SNA) to further investigate this well-established research community. Because of the popularity of online social networks and the large availability of real social data, SNA has gained a lot of momentum in the last few years [22, 26, 36, 43]. Besides online social networks [20, 23, 25], it is possible to apply SNA to discover knowledge in the most diverse systems, such as mobile operators [12, 19, 40], Internet websites [1, 13], railroads [13], citation networks [17], movies and actors [21], sports leagues [28] and many others. In summary, a social network is composed of a set of individuals or a group connected by different kinds of relationships. Individuals, also known as actors, may represent a single person, a group or even an organization. Their relationships, or ties, may indicate, for instance, a friendship, a professional relationship or a scientific collaboration. Actors and ties are defined according to the question of interest. A scientific collaboration network is a special type of social network in which the actors represent authors and ties indicate that the authors have published at least one paper together. Collaboration networks have been widely analyzed [31–33, 35], as these studies disclose several interesting features of the academic communities that comprise them. 
For instance, the analysis of topological features enables the identification of communities [2], the intensity of collaborations among authors [11] and how the network evolves over the years [25]. Therefore, in this paper we study SBRC's collaboration network. Towards that goal, bibliographic data from all 30 editions of the event were collected and a series of features, obtained from the topological structure of the collaboration network, was analyzed. In particular, we here investigate the evolution of the largest connected component, number of communities, importance of nodes, their degree distribution and correlations, and network homophily. Through this study, it is possible to better understand the behavior of such a vibrant community and part of the impact produced by some crucial collaborations established through the years. For example, we are interested in investigating the peculiarities of collaborations among researchers from a region with a historically very active and productive research community, and among researchers from a region with no such community. It is worth noticing that when compared to previous studies on collaboration networks, our work stands out for three main reasons. First, because we analyzed 30 years of data, which to the best of our knowledge is more than any other study available in the literature. Second, our analysis examines several features that are usually not considered, such as the geographic location of the researchers and the institutions they work for. Finally, we draw a parallel between our findings and several historical facts that may have had a key impact on the symposium's success and also may have changed the way research is done in Brazil. The remainder of this paper is organized as follows. Section 2 presents the related work. Then, Sect. 3 describes how data used in this work was collected and how the network was built. 
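Although the paper itself presents no code, the construction behind Sect. 3 — turning a list of bibliographic records into a co-authorship graph and extracting its largest connected component — can be sketched in a few lines of plain Python. The paper records below are invented placeholders, not SBRC data:

```python
# Sketch: building a co-authorship network from bibliographic records
# and finding its largest connected component. The author lists below
# are hypothetical placeholders, not actual SBRC data.
from collections import defaultdict
from itertools import combinations

papers = [
    ["A. Silva", "B. Souza", "C. Lima"],  # one author list per paper
    ["A. Silva", "D. Costa"],
    ["E. Rocha"],
]

# Weighted adjacency: edge weight = number of joint papers.
adj = defaultdict(lambda: defaultdict(int))
nodes = set()
for authors in papers:
    nodes.update(authors)
    for u, v in combinations(authors, 2):
        adj[u][v] += 1
        adj[v][u] += 1

def connected_components(nodes, adj):
    seen, comps = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n])
        seen |= comp
        comps.append(comp)
    return comps

largest = max(connected_components(nodes, adj), key=len)
print(len(nodes), len(largest))
```

Edge weights here count joint papers, which is one simple way to capture the "intensity of collaborations" mentioned above.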
Section 4 presents some statistics about the participation of authors from different regions of Brazil. Next, Sect. 5 describes the main kinds of collaborations among authors, whereas Sect. 6 presents a study of the connected components of the network. Section 7 discusses distance and clustering measures, and Sect. 8 analyzes the main communities within SBRC. Researchers with strategic positions in the network are identified in Sect. 9, and Sect. 10 analyzes homophily in the SBRC network. Section 11 presents a cross analysis among some evaluated metrics. Section 12 presents the conclusions of this work. Finally, the Appendix presents the historical aspects that may have contributed to foster research development in Brazil. The analysis of collaboration networks is well explored in the literature. For instance, Newman [31, 32] presents some of the pioneering studies in this area. The author analyzes three scientific communities—Computer Science, Physics and Biomedicine—and presents several structural and topological features of these communities, focusing on the main similarities and differences among them. Although these communities share some similarities, Newman shows that they also have substantial differences. In that direction, Menezes et al. [29] assess how the process of knowledge production in Computer Science happens in different geographic regions of the globe. The authors divide the globe into three main regions and evaluate how research is conducted in 30 different subfields of Computer Science for each of the considered regions, focusing on the structural and temporal features of the network. Among the main results, Menezes et al. show that the scientific production of Brazilian researchers has been increasing in recent years, which they attribute to an increase in funding provided by Brazilian government agencies to foster research in the country. 
Towards analyzing the Brazilian scientific production, Freire and Figueiredo [15] show the main similarities and differences between two co-authorship networks they propose: "Global", created from all publications of the DBLP database, and "Brazilian", a subset of the first network that considers only researchers affiliated to Brazilian institutions. Moreover, they propose a new ranking metric to measure the importance of both an individual in the network and groups of individuals. This metric is applied to the Brazilian network and is compared with two existing ranking measurements in Brazil: the Research Fellowship Program of CNPq (an agency of the Brazilian Ministry of Science and Technology) and the ranking of graduate programs in Computer Science provided by CAPES (an agency of the Brazilian Ministry of Education). The authors show that the proposed metric can accurately identify influential groups and well-established graduate programs in Brazil. There are also studies that analyze specific events and areas. Procópio et al. [38] create and analyze the co-authorship network of articles published during the first 25 years of the Brazilian Symposium on Databases (SBBD). The authors focus on the network's structural features and temporal evolution throughout the event's history. They present and study statistics such as the average number of papers per author, the average number of papers per edition of the symposium, and the average number of co-authors per paper, among others. Finally, the work shows that the studied network exhibits the well-known small-world phenomenon, typically found in other social networks. Silva et al. [41] create and analyze the co-authorship network of papers published in three top international conferences focused on Ubiquitous Computing (Ubicomp). They provide useful analyses of that network, such as the representativeness of authors and institutions and the formation of communities. Finally, Nascimento et al. 
[30] analyze the co-authorship graph of the ACM Special Interest Group on Management of Data (SIGMOD) Conference. Among the main results, the authors observe that the SIGMOD community also forms a small-world network. In comparison with these previous studies of co-authorship networks of specific research communities, we go further and analyze three fundamental aspects of the researchers who publish in SBRC: their geographic location, their topological characteristics in the network, and their productivity statistics in the conference. Finally, scientific collaboration networks are not limited to co-authorship networks. Bazzan and Argenta [4] create a social network of the PC (Program Committee) members of conferences sponsored by the Brazilian Computer Society (SBC). The relations among nodes of this network are established according to co-authorship data extracted from the DBLP. By using well-known network metrics, such as node degree, largest connected component and clustering coefficient, the authors show that the studied network does not fit any well-established pattern when compared to other networks studied in the literature. This is probably due to the fact that members of this network do not necessarily interact with one another in terms of co-authorship, since they belong to different sub-areas within Computer Science. One of the main findings was that the most connected nodes are non-Brazilian PC members, and that they play an important role in the network by acting as connectors between Brazilian researchers. When compared to our work, we point out that SBRC includes both well-established authors and newcomers to the symposium, while the PC network is formed exclusively by senior members, which explains the difference in some of the metrics. Nevertheless, we observed that the SBRC network follows patterns similar to other previously analyzed scientific events and communities, such as the ones in [30, 38] and [41]. 
The network of the SBRC symposia Our study is based on bibliographic data of the 30 editions of SBRC, which took place from 1983 until 2012. It is focused on the collaboration network established among authors of papers published in the main track of each edition of the symposium. Thus, we collected data on full papers published in the proceedings of the event, excluding lectures, tutorials and workshop papers. For each paper, we collected its title, year of publication, list of authors with their respective affiliations, geographic location of the authors' institutions, and the language in which the paper was written. The data comprises digital and non-digital sources, since the first editions of the event occurred before the existence of the Web. Part of the bibliographic data was obtained automatically through the website of the Brazilian Computer Society (SBC), while the rest was collected manually from the proceedings of each edition. We manually disambiguated all author names to ensure data consistency. Network creation In this paper, the SBRC network is represented as a temporal graph \(G_y=(V_y,E_y)\), where \(V_y\) is the set of vertices, \(E_y\) is the set of edges and \(y\) is the year the network refers to. The graph \(G_y\) is an undirected weighted graph, where the vertices are authors and the edges indicate that two authors have published together in or before the year \(y\). Moreover, each edge has a corresponding weight, which represents the number of papers the two authors published together in or before the year \(y\). The complete SBRC collaboration network, built from all papers published in its 30 editions, has a total of 1,808 authors (vertices) and 4,066 collaborations (edges), comprising a total of 1,406 papers. The average number of papers per year is 46.8 (with a standard deviation of 20.66) and the average number of authors per year is 115.1 (with a standard deviation of 65.51). 
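As an illustration, the cumulative weighted graph \(G_y\) defined above can be built from a paper list in a few lines of Python. This is a minimal sketch, not the authors' implementation; the input format and names are hypothetical:

```python
from collections import defaultdict
from itertools import combinations

def build_network(papers, year):
    """Cumulative co-authorship graph G_y = (V_y, E_y).

    `papers` is a list of (publication_year, author_list) tuples --
    a hypothetical input format.  An edge (i, j) exists if i and j
    co-authored at least one paper in or before `year`; its weight
    is the number of such joint papers.
    """
    vertices = set()
    weight = defaultdict(int)  # (i, j) with i < j  ->  number of joint papers
    for pub_year, authors in papers:
        if pub_year > year:
            continue
        vertices.update(authors)
        for i, j in combinations(sorted(set(authors)), 2):
            weight[(i, j)] += 1
    return vertices, dict(weight)

papers = [(1983, ["A", "B"]),
          (1984, ["A", "B", "C"]),
          (1985, ["C", "D"])]
V, E = build_network(papers, 1984)   # the network "as viewed" in 1984
```

Note how the 1985 paper is excluded from the 1984 snapshot, while the edge between "A" and "B" accumulates weight 2 from their two joint papers.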
The reason behind this large variance is the constant growth of the conference throughout the years. For instance, in the first year, 1983, the number of authors was 22 and the number of papers was 12. In the last year, 2012, the number of authors was 174 (690 % higher) and the number of papers was 59 (391 % higher). Finally, the average number of papers per author is 2.31 (with a standard deviation of 4.25), while the average number of authors per paper is 1.97 (with a standard deviation of 1.37). Figure 1 shows the complete SBRC network as viewed in 2012, representing 30 years of history. Observe that the network contains clusters of nodes with the same color, which represent authors affiliated to universities located in a given region of Brazil. Green represents authors affiliated to universities located in the North region of Brazil, blue stands for the South, red for the Southeast, yellow for the Center-West, orange for the Northeast and, finally, black for authors affiliated to foreign universities. The complete network as viewed in 2012. Nodes with the same color represent authors affiliated to universities located in a given region of Brazil. Green represents authors affiliated to universities located in the North region of Brazil, blue for the South, red for the Southeast, yellow for the Center-West, orange for the Northeast and, finally, black for authors affiliated to foreign universities. Notice that the colors used in this figure are the same as the ones used in Fig. 4, where we show the Brazilian map Network metrics Network metrics are of great importance when investigating network representation, characterization and behavior. This section presents a summary of the key network measurements used in our analysis, which are discussed throughout the paper. The order of \(G_y\) is the number of its vertices. The size of \(G_y\) is the number of its edges. 
The degree (\(k_i\)) of a vertex \(i \in V_y\) is the number of edges incident to vertex \(i\), and the degree distribution (\(P(k)\)) expresses the fraction of vertices in the whole graph with degree \(k\). The assortativity measures whether vertices of high degree tend to connect to other vertices of high degree (an assortative network); the network is called disassortative when vertices of high degree tend to connect to vertices of low degree. A path connecting two vertices \(i,j \in V_y\) is said to be minimal if there is no other path connecting \(i\) to \(j\) with fewer links. Accordingly, the average path length of \(G_y\) is the average number of links in all shortest paths connecting all pairs of vertices in \(V_y\). The graph diameter is the length of the longest shortest path between all pairs of vertices in \(V_y\). The clustering coefficient of a vertex \(i\) is the ratio of the number of edges between neighbors of vertex \(i\) to the upper bound on the number of edges between them. For instance, given \(i,j,k \in V_y\) and assuming that edges \((i,j), (i,k) \in E_y\), the clustering coefficient defines the probability that \((j,k)\) also belongs to the set \(E_y\). The clustering coefficient of a graph is the average value of the clustering coefficients of all vertices in \(G_y\). The betweenness centrality of a vertex \(i\) is an importance measure based on the number of shortest paths between other pairs of vertices that pass through vertex \(i\). The closeness centrality of a vertex \(i\) is defined as the inverse of its farness, which, in turn, is the sum of its distances to all other nodes. Homophily is the tendency of people (in our case, researchers) with similar features to interact with one another more than with people with dissimilar features. The indicator function \(\mathbb {1}\{c_i = c_j\}\) assumes the value \(1\) if the class \(c_i\) of node \(i\) is equal to the class \(c_j\) of node \(j\), and 0 otherwise. 
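The indicator function above can be averaged over the edge set to obtain a single network-level homophily number. The following Python fragment is a sketch of that idea (the paper's exact formula is the one in its Table 1; the edge list and class labels below are made up for illustration):

```python
def homophily(edges, node_class):
    """Average of the indicator 1{c_i = c_j} over all edges: the
    fraction of collaborations whose endpoints share a class label
    (e.g. region or institution).  A hypothetical definition kept
    consistent with the text; the paper's formula is in its Table 1."""
    same = sum(1 for i, j in edges if node_class[i] == node_class[j])
    return same / len(edges)

# toy data: three southeast authors and one from the south
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
region = {"A": "SE", "B": "SE", "C": "SE", "D": "S"}
h = homophily(edges, region)   # 3 of the 4 edges are same-region -> 0.75
```

Any node attribute can play the role of the class label, which is what makes the remark below about assortativity possible: using the degree itself as the class recovers a degree-based notion of homophily.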
Notice that the assortativity is the homophily when the class \(c_i\) of node \(i\) is its degree \(k_i\). Table 1 summarizes the mathematical formulas for the main network metrics outlined above. Please refer to Costa et al. [10] for a complete review of measurements. Table 1 Network metrics In this section we present some statistics that give evidence of why SBRC is one of the most important scientific events in Computer Science in Brazil, with a community that has been growing over the years. Figure 2 presents the aggregated number of distinct authors who published papers in SBRC (Fig. 2a), the aggregated number of distinct authors' affiliations (Fig. 2b), and the aggregated number of published papers (Fig. 2c) over the years. As can be observed, the number of new authors more than doubled between the years 2000 and 2012. The same increase also happened to the number of new universities and published papers. These results show that SBRC has been attracting the participation of new researchers and new institutions over the years. Moreover, they clearly reflect the increase in the number of new graduate programs in Computer Science in Brazil, especially during the 2000s, as shown in Fig. 26 of the Appendix. Evolution of the number of distinct authors, institutions and published papers in all 30 editions of SBRC The previous results can be summarized in Fig. 3, which shows the SBRC network density over the years. The network density is calculated by dividing the number of edges by the number of nodes present in the graph. Observe that the density grew fast in the first years of the symposium, remained practically constant during the 1990s, and grew again in the 2000s. Once more, this behavior is strongly correlated with the number of graduate programs in Computer Science in Brazil. 
In the 1990s, since the number of graduate programs remained practically constant and the means of communication were not as developed as in the 2000s, the papers were mostly written either by repeated collaborators or by new authors, which explains the constant network density in this decade. The graph density (number of edges divided by the number of nodes) over time Figure 4 illustrates the participation of authors from different Brazilian states and regions in the symposium by showing the number of papers published with authors from each state (Fig. 4a) and from each of the five Brazilian regions (Fig. 4b). It is possible to see that the participation is mostly concentrated in the northeast, southeast and south regions of Brazil, which together account for more than 95 % of the total published papers. Moreover, the top three states in number of papers (Rio de Janeiro, São Paulo and Minas Gerais) are in the same Brazilian region (southeast). Notice that five states (Acre, Amapá, Rondônia, Roraima and Sergipe), four of which belong to the north region of Brazil, have never had a paper published in SBRC. To better understand the participation of each region of Brazil in SBRC, Fig. 5 shows the evolution of the number of publications for each of the five regions. An interesting fact in this figure is that it clearly reflects the evolution of the number of Computer Science graduate programs per region, as shown in Fig. 26 of the Appendix. This shows that investments in educational initiatives, especially the opening of new graduate programs, lead to research advancements. These results also explain why the participation in SBRC is mostly concentrated in the northeast, southeast and south regions, while the north and center-west are underrepresented. Total number of publications with authors from each state and region Evolution of the number of publications per Brazilian region SBRC is a national symposium targeted at the Brazilian research community. 
However, the participation of authors with foreign affiliations has been increasing over the years, as can be observed in Fig. 6, which shows the aggregated number of foreign institutions with papers published in SBRC. In order to verify whether such an increase in the number of foreign institutions is solely a consequence of an increase in the number of foreign authors, Fig. 6 also shows the number of papers published in English over the years. To our surprise, this number is actually decreasing in recent years. Intuitively, this result tells us that the number of active foreign authors publishing in SBRC is not increasing, but rather the number of Brazilian authors in foreign institutions is. This finding is consistent with Bazzan and Argenta [4], who suggest that more efforts are necessary to internationalize the Brazilian research community. Aggregated evolution of collaborations with distinct foreign universities, and the number of papers written in English Finally, Tables 2 and 3 show the top 20 authors with the largest number of published papers from Brazilian and foreign institutions, respectively. Table 2 identifies several well-known researchers in the fields of computer networks and distributed systems. This is another indication of the paramount importance of the SBRC for the Brazilian community. Table 3 also identifies some Brazilian researchers who had foreign affiliations at the time of publication. This reinforces the hypothesis that the number of active foreign authors publishing in SBRC is not increasing. Table 2 Top 20 Brazilian authors Table 3 Top 20 foreign authors As stated before, an edge between two researchers indicates a scientific collaboration between them. Thus, the degree of a node \(i\) represents the number of collaborators of researcher \(i\). 
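Given the degree-equals-collaborators reading above, the first three moments of the degree distribution can be computed directly from the edge list. A pure-Python sketch on a toy network (not the authors' code):

```python
import statistics
from collections import defaultdict

def degree_moments(edges):
    """Mean, population variance and skewness of the degree sequence --
    a sketch of the three moments of the degree distribution."""
    degree = defaultdict(int)
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    ks = list(degree.values())
    mean = statistics.fmean(ks)
    var = statistics.pvariance(ks, mu=mean)
    skew = 0.0 if var == 0 else \
        (sum((k - mean) ** 3 for k in ks) / len(ks)) / var ** 1.5
    return mean, var, skew

# toy network: author "A" collaborates more than the others
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]
mean, var, skew = degree_moments(edges)   # degrees: A=3, B=2, C=2, D=1
```

A large variance and positive skewness, as reported for the SBRC network, signal that a few researchers hold many more collaborations than the typical author.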
The analysis of node degrees in a collaboration network allows the assessment of the structure of co-authorship relationships among researchers in the communities of computer networks and distributed systems in Brazil. Figure 7 shows the first three moments of the degree distribution over the years. We can observe that the average number of collaborations only increased from approximately 2 in the first year of the symposium to approximately 4 in the last year. However, both the variance and the skewness of the distribution are significantly large, indicating that a considerable number of researchers possess a high degree. Finally, we observe that the three moments of the distribution become reasonably steady in the late 1980s, and that the variance increases again at the end of the 1990s. First three moments of the degree distribution over the years Analyzing each year individually, we can observe that the node degree distribution is close to a power-law distribution [13], as shown in Fig. 8 for selected years. Mathematically, a quantity \(x\) follows a power law if it is drawn from a probability distribution \(p(x) \propto x^{-\alpha }\), where \(\alpha \) is a constant parameter known as the exponent or scale parameter, which typically lies in the range \(2 < \alpha < 3\). Graphically, \(\alpha \) and \(\alpha -1\) represent the slopes of the lines that define the probability density function \(\Pr (X = x)\) and the complementary cumulative distribution function \(\Pr (X \ge x)\), respectively. The fits were made according to the maximum-likelihood method described in [9]. Degree distribution at four specific years Figure 9 shows the evolution of the exponent \(\alpha \) of the degree distributions over time. The points identified as "biased fit" represent biased fits and should not be considered good fits [9]. 
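The core of the maximum-likelihood fit can be sketched with the discrete approximation of Clauset et al. (the method the text cites as [9]); the full method also selects the cutoff and tests the fit quality, which this fragment omits. The degree sequence below is synthetic:

```python
import math

def powerlaw_alpha(degrees, k_min=1):
    """Maximum-likelihood estimate of the power-law exponent, using the
    discrete approximation alpha ~= 1 + n / sum(ln(k / (k_min - 0.5)))
    from Clauset et al. (cited as [9] in the text).  A sketch only: the
    full method also selects k_min and tests the quality of the fit."""
    tail = [k for k in degrees if k >= k_min]
    return 1.0 + len(tail) / sum(math.log(k / (k_min - 0.5)) for k in tail)

# synthetic degree sequence, for illustration only
degrees = [1, 1, 1, 1, 2, 2, 3, 5, 9]
alpha = powerlaw_alpha(degrees, k_min=1)
```

The estimate is biased when the tail is short or the cutoff is chosen poorly, which is exactly why the "biased fit" points in Fig. 9 are flagged.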
It is worth noticing that there is a general trend of \(\alpha \) decreasing over the years, which indicates that the variance of the distribution increases as the number of nodes with a high degree in the network grows. For instance, in the first year of the SBRC network, all nodes have degrees of the first order of magnitude, i.e., lower than \(10\). In the last year, however, while several nodes have degrees close to the third order of magnitude, the large majority still have a degree lower than \(10\). This is an expected behavior in a collaborative network since, over time, researchers tend to consolidate and aggregate groups and communities that share the same interests. This shall be seen in more detail hereafter. Slope \(\alpha \) of the fit made to the degree distribution. We can observe that the slope decreases over the years, which reflects the increase in variance observed in Fig. 7. The points marked "in detail" are the distributions presented in Fig. 8 An interesting way to identify the differences in the way senior researchers and newcomers connect among themselves is through a metric called \(\langle k_{nn} \rangle _k\) [5], which indicates the average degree of the neighbors of a given node with degree \(k\). By using the \(\langle k_{nn} \rangle _k\) metric, it is possible, for instance, to observe whether high degree nodes tend to connect to each other or to low degree nodes. Figure 10 shows the function \(\langle k_{nn} \rangle _k\) for four different years. While in 1989 there is a slight tendency of nodes with similar degrees to connect to each other (a slightly increasing curve), in 1995 there is almost no correlation (a slightly negative curve). In 2003 and 2012, the tendency is to have high degree nodes connected to low degree nodes (descending curves). 
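The \(\langle k_{nn} \rangle _k\) curves discussed above can be computed directly from an edge list; the following is a pure-Python sketch on a toy star graph, not the authors' code:

```python
from collections import defaultdict

def average_neighbor_degree(edges):
    """<k_nn>_k: for each degree k, the mean degree of the neighbors of
    the nodes that have degree k (a pure-Python sketch of the metric)."""
    adj = defaultdict(set)
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    per_k = defaultdict(list)
    for v, nbrs in adj.items():
        per_k[deg[v]].append(sum(deg[u] for u in nbrs) / deg[v])
    return {k: sum(vals) / len(vals) for k, vals in per_k.items()}

# a star graph: the hub only sees degree-1 leaves -- the disassortative
# pattern described for the 2003 and 2012 networks
edges = [("hub", "a"), ("hub", "b"), ("hub", "c")]
knn = average_neighbor_degree(edges)   # {3: 1.0, 1: 3.0}
```

A decreasing mapping from \(k\) to \(\langle k_{nn} \rangle _k\), as in this star example, is the signature of high degree nodes attaching to low degree nodes.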
Average degree \(\langle k_{nn}\rangle _k\) of the neighbors of a given node with degree \(k\) In order to evaluate the behavior of \(\langle k_{nn} \rangle _k\) over the years, the assortativity [34] is calculated for the network of each year. The network assortativity measures the tendency of nodes with similar degrees to be connected. That is, in an assortative network, high degree nodes tend to connect with other high degree nodes, whereas in a disassortative network, high degree nodes tend to be connected to low degree nodes. The assortativity values range from \(-\)1, when the network is fully disassortative, to 1, when it is fully assortative. Figure 11 shows that the SBRC collaboration network becomes disassortative over the years. In 1983, the network is completely assortative due to the presence of cliques, i.e., each node is connected to nodes having the same degree. During the initial years, the network still presents an assortative character, due to the large presence of isolated cliques or small connected components. However, from the end of the 1990s on, the network is consolidated as disassortative, with high degree nodes tending to be connected to low degree nodes. This is the natural behavior in collaboration networks, as students or newcomers (low degree nodes) tend to connect with well-established, expert researchers (high degree nodes) to grow in their academic careers. The network assortativity over the years. It is possible to observe that the network becomes disassortative over the years, indicating that high degree nodes tend to be connected to low degree nodes. The behavior of \(\langle k_{nn}\rangle _k\) for the networks represented by the dots marked "in detail" is shown in Fig. 10 Connected components In this section we show how the connected components of the network evolved over the years. Figure 12 shows the evolution of the number of network components. 
Notice that the increase in the number of network components is more significant during the first editions of the symposium. For instance, in 1983, the network had 11 components, while in 1989, after seven editions, the collaboration network had 78 components, an increase of more than 609 %. Thereafter, 22 editions later, in 2011, the network had 124 components, an increase of 58 % compared to 1989. This is explained by the fact that the collaborations among researchers in the early years of the conference were geographically constrained, i.e., a collaboration between researchers of different institutions was rare. Recall from Fig. 2b of Sect. 4 that the number of new authors' affiliations more than doubled in the first seven editions of the event. Moreover, the means of communication in Brazil during this early period were not as developed. Therefore, collaborations among authors were restricted to researchers working at the same institutions, leading to the creation of many network components, or isolated groups of researchers (for a proper discussion of this fact, see Sect. 10 on homophily). Number of components Table 4 shows the top five largest components for different years. We can observe that in the first editions of the symposium, the number of researchers in each component was small, thus confirming the discussion above. In the first editions, essentially, each component was a representation of a published paper. In 1985 and 1986, we can observe the creation of research groups inside each university. This also reinforces the fact that in the first editions of the symposium the collaborations were geographically constrained. As the means of communication evolved during the mid-1990s and the number of graduate programs in Brazil started to increase, we can also observe an increase in the size of each component, since new collaborations among authors from different groups start to arise. 
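The component statistics discussed above come down to a standard graph traversal; a self-contained sketch (toy edge list, not the authors' code):

```python
def connected_components(edges):
    """Connected components via iterative depth-first search; returns
    the components sorted by size, largest first (pure-Python sketch)."""
    adj = {}
    for i, j in edges:
        adj.setdefault(i, set()).add(j)
        adj.setdefault(j, set()).add(i)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return sorted(comps, key=len, reverse=True)

# two isolated groups, as in the early geographically constrained editions
edges = [("A", "B"), ("B", "C"), ("D", "E")]
comps = connected_components(edges)   # largest component is {"A", "B", "C"}
```

Running this on each yearly snapshot yields both the number of components and the sizes of the LCC and SLCC tracked in the figures.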
From the last decade until today, we can also observe an increase in the size of the largest connected component. This happens because, nowadays, collaborations among researchers are not geographically constrained, and the students of the 1980s and 1990s are, today, research leaders in different regions of Brazil with well-established communities (for a discussion on communities, see Sect. 8, and for a discussion on important nodes, see Sect. 9). Table 4 Top five largest components Figure 13 shows the evolution of the two largest connected components of the network. We can observe that, up to 1995, the largest connected component (LCC) and the second largest connected component (SLCC) represent about 21 and 10 % of the network, respectively. After 1995, the LCC increases over the years and the SLCC remains steady until 2001, when it suddenly decreases considerably. This sudden decrease was caused by the previous SLCC merging with the LCC. An important issue when analyzing connected components is the collaboration between individual researchers. A collaboration which previously did not exist may drastically change the network structure. Two largest components To illustrate how individual collaborations can impact the network structure, consider the year of 2001, when the SLCC merged with the LCC. This happened exclusively because of the collaboration of two researchers from the SLCC with researchers belonging to the LCC. More specifically, in 2001, Michael A. Stanton, an author in the SLCC in 2000, co-authored a paper with Noemi de La Rocque Rodriguez, who belonged to the LCC in 2000. Similarly, also in 2001, José Neuman de Souza, who belonged to the SLCC in 2000, co-authored a paper with Nelson L. S. da Fonseca, who belonged to the LCC in 2000. These two collaborations illustrate a geographically constrained collaboration and a non-geographically constrained collaboration, respectively. For instance, in 2001, Michael A. 
Stanton was working at the Fluminense Federal University, located in Niterói, RJ, and Noemi de La Rocque Rodriguez was working at the Pontifical Catholic University of Rio de Janeiro, located in Rio de Janeiro, RJ. These two cities are about 20 km from one another. However, in 2001, José Neuman de Souza was working at the Federal University of Ceará, located in Fortaleza, CE, and Nelson L. S. da Fonseca was working at the State University of Campinas, located in Campinas, SP. These two cities are about 3,000 km away from one another. It is important to notice that during the 2000s, collaborations like the one between Neuman and Fonseca started to become more common due to the many technological advancements in telecommunication and transportation, and also to the expansion of Computer Science graduate programs in many regions of Brazil. Figure 14 presents the number of newcomers to the symposium per year. Newcomers are the authors who are publishing in SBRC for the first time. In Fig. 14, we classify them according to two categories: connected to the LCC and not connected to the LCC. Note that, in the early editions of the symposium, newcomers connected to the LCC are a minority compared to the others. However, from 1995 on, the number of newcomers connected to the LCC starts to increase considerably on a yearly basis, whereas the same is not observed for the number of newcomers not connected to the LCC. Indeed, from 2001 on, most of the newcomers are connected to the LCC. As the LCC becomes much larger than any other connected component starting in 1995, it is natural that the number of newcomers connected to it also increases from this year onwards. This result also corroborates the fact that, until the mid-1990s, authors of the same paper would form a new connected component or connect to the smaller components already present in the network, thus leading to many isolated communities. 
However, from the mid-1990s onward, as new collaborations started to emerge, isolated components merged into one another, resulting in larger communities. Number of newcomers per year Clustering and distance The clustering coefficient (CC) and the distance are important metrics to evaluate social networks. The clustering coefficient \(\mathrm{cc}_i\) characterizes the density of connections close to vertex \(i\): it measures the probability that two given neighbors of node \(i\) are connected. The clustering coefficient of the network is the average of \(\mathrm{cc}_i\) over all \(i \in V\). Figure 15 shows the evolution of the network clustering coefficient and the clustering coefficient of the equivalent random network. The random network was generated using the model proposed in [3], which generates a random graph with the same number of vertices, edges and degree distribution. In the first edition of the symposium, in 1983, the clustering coefficient was 0.45. In that year, each author had a CC equal to either 0 or 1. A CC equal to 0 indicates that the author's article has one or two authors, while a CC equal to 1 indicates that it has three or more authors. In the first edition of the symposium, there were only collaborations among authors of the same article. In 1984, the CC of the network is significantly reduced, decreasing to 0.34. This is due to an increase in the number of authors with a CC equal to zero, i.e., articles with one or two authors. For instance, of the 27 authors in that edition, 20 have a CC equal to zero. In the most recent years, the CC tends to stabilize, due to an increase in the collaborations among authors. In 2012, the CC is 0.67, similar to other collaboration networks studied in the literature [18, 33]. We also observe that the SBRC clustering coefficient is, on average, one order of magnitude higher than the clustering coefficient of its equivalent random network (from the late 1980s onwards). 
Clustering coefficient over the years An important concept for social networks is that of the small-world network [44]. It is characterized by a clustering coefficient significantly higher than that of the equivalent random network and an average shortest path length (SP) as low as that of the equivalent random network. The SP measures the average shortest distance (in hops) between every pair of nodes in the network. Figure 16 shows the evolution of the average shortest path of the historical SBRC network in comparison to the average shortest path of the random network. We observe that the SP of the SBRC network increases until the late 1990s, when it starts to decrease. This can be attributed to the advancements in telecommunications and technology, as well as to the creation of graduate programs, which resulted in an increase in the collaboration among researchers from different groups. In the last editions of SBRC, the SP of the SBRC network is 1.29 times greater than that of the random network. The high clustering coefficient, combined with the small shortest path, characterizes the SBRC network as a small-world network. In 2012, the average shortest path between authors was around 5.5, which is in line with the six degrees of separation theory [42]. As a practical consequence, the short paths between SBRC researchers mean that new hot topics on computer networks and distributed systems may propagate quickly among SBRC researchers. Average shortest path over the years The behavior of the network diameter is illustrated in Fig. 17. The network diameter measures the largest shortest path in the network. In the first two decades, the shortest paths among researchers increase, which leads to an increase in the network diameter. However, after 1999, due to an increase in new collaborations among authors and the network densification (see Fig. 3 of Sect. 4), the network diameter starts to decrease. In 1999, it was 19 hops, but it diminished to 15 hops in 2012. 
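The two ingredients of the small-world test above, the clustering coefficient and the average shortest path, can be sketched as follows. The adjacency dictionary is a toy example, and a real test would compare both values against the equivalent random network:

```python
from collections import deque
from itertools import combinations

def clustering(adj, v):
    """Clustering coefficient of v: realised edges among its neighbors
    divided by the maximum possible number of such edges."""
    nbrs = adj[v]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

def avg_shortest_path(adj):
    """Mean BFS distance over all reachable ordered pairs of nodes."""
    total = pairs = 0
    for src in adj:
        dist, queue = {src: 0}, deque([src])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# toy graph: a triangle with a pendant vertex
adj = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
cc = sum(clustering(adj, v) for v in adj) / len(adj)   # 7/12
sp = avg_shortest_path(adj)                            # 4/3
```

On yearly snapshots of a real network, the all-pairs BFS dominates the cost; for graphs of the SBRC's size this remains feasible, which is what makes the diameter and SP curves in Figs. 16 and 17 computable per edition.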
Network diameter over the years One of the most relevant characteristics of graphs representing real systems is the structure of communities, i.e., the organization of vertices into clusters, with many edges between the vertices of the same cluster and relatively few edges connecting vertices of different clusters. In order to identify communities in the collaboration network, we used the \(k\)-clique community identification algorithm. A community is defined as the union of all cliques of size \(k\) that can be reached through adjacent \(k\)-cliques (two \(k\)-cliques are considered adjacent if they share \(k-1\) vertices). In other words, a \(k\)-clique community is the largest connected sub-graph obtained by the union of a \(k\)-clique and all \(k\)-cliques connected to it. The implementation of this algorithm was based on Palla et al. [37]. Our main goal is to evaluate how distributed and clustered the collaborations among authors in the SBRC network are. This justifies the choice of the \(k\)-clique community algorithm, since it is a good measure to select sub-communities and also overlapping communities [14]. In order to achieve our goal, we use the lowest possible value, \(k = 3\), since it is the most favorable value to capture the largest group of authors (largest connected sub-graph) that forms a community, according to the algorithm specification. When executing the \(k\)-clique community algorithm with \(k=3\) on a network with high collaboration between nodes, one would expect to find very few communities. However, as discussed hereafter, this is not the case for the SBRC network. View of communities In this section we present two visualizations of communities: one observing the university the author has worked for and the other observing the state in which this university is located (a more detailed discussion about communities is presented in Sect. 8.2). 
Thus, each node in the network is associated with one or more states and universities, given that an author may be affiliated with more than one university during his career. Figure 18 presents a view of the four largest communities by Brazilian states, while Fig. 19 shows the four largest communities by university. These communities have 182, 87, 79 and 69 authors, respectively. In both figures, the size of the word indicates its popularity within the community. This means that in the largest identified community, shown in Fig. 18, the states of Rio de Janeiro (RJ) and Rio Grande do Sul (RS) are the most representative ones. It is worth noting that the word FOREIGN represents researchers from institutions located outside Brazil. The four largest communities: visualization by state The four largest communities: visualization by university After executing the \(k\)-clique community algorithm (with \(k=3\)), we would expect to find a small number of communities. But, as we can see, we identified many different communities. Obviously, with higher values of \(k\) we find communities that have authors more connected among themselves. Considering \(k = 4\), for example, the largest, second largest, and third largest communities have 42, 39, and 31 authors, respectively. If we consider \(k = 5\), the number of authors in the largest, second largest, and third largest communities drops to 16, 16, and 15, respectively. A value of \(k = 3\) is particularly interesting for visualizing the general interaction among the authors of the SBRC network, but, on the other hand, it may not find very strongly connected communities. This is what happened for the community consisting mainly of authors from RS and RJ (largest 3-clique community). After a closer look, we can see that the number of collaborations between these groups of authors is not as large as the number of collaborations within the groups. 
For instance, when we execute the algorithm considering \(k=4\), we notice that this community is divided into two communities, one formed mostly by authors from RS, and the other formed by authors from RJ. This shows that RJ and RS together as the largest 3-clique community do not represent a very strongly connected community. In general, we observe that most of the interactions tend to happen among authors from particular regions and institutions. This information might be particularly interesting to support decisions towards the improvement of collaborations among researchers from different universities and regions of Brazil. Community evolution over time In this section we present a more detailed analysis of the identified communities. Figure 20a shows the number of communities over the years. We can see that the number of communities increases over time, reaching more than 250 communities in 2012. The choice of \(k = 3\) also has implications for this result. For the SBRC network, higher values of \(k\) may imply a smaller number of communities. For example, three authors of the same paper, who published just this one paper in the entire history of SBRC, are considered a community when using \(k = 3\), but not when using \(k = 4\). Communities. a Number of communities per year. b CDF of the number of authors in communities of the years 1983, 1993, 2003 and 2012. c Number of authors in specific communities Figure 20b shows the cumulative distribution function (CDF) of the number of authors in the communities, considering the years of 1983, 1993, 2003, and 2012. A high number of communities, as observed in Fig. 20a, does not mean that there are many authors in all these communities. Figure 20b shows that communities with a small number of authors represent a considerable subset of all communities. Around 90 % of all communities have fewer than 10 authors, and approximately 55 % have only three authors. 
However, we can notice that over the years, due to an increase in the number of collaborations, communities with a higher number of authors start to arise. For example, in 1983 the largest community had only four authors, whereas in 2012 six communities had more than 30 authors. Figure 20c shows the number of authors over the years for the following groups of communities: all communities, 20, 10 and 5 largest communities, and the largest community. We observe that from 2004 to 2012 the number of authors per community increases considerably. As stated before, such an increase is due to the growth of a few communities with a large number of authors. In this way, we observe that in 2004, the 5 largest communities represent approximately 64 % of the top 10 communities and approximately 48 % of the top 20 largest communities. Considering the year 2012, these values are 79 and 65 %, respectively. We also observe that the top 5 communities represent a significant amount (29 %) of all considered authors. This result indicates that authors in the largest communities interact with researchers outside their communities, thus increasing their size over time. Finally, one may attribute the change in the community dynamics during the 2000s, as shown in Fig. 20, to the merging of the LCC and SLCC in 2001, as previously described in Sect. 6. However, this event alone does not totally explain such a change. It is worth noting that it is during the 2000s that significant historical events start to happen in Brazil (see Appendix). For instance, we can outline the developments in the telecommunications and transportation sectors. Moreover, Brazil witnessed rapid growth in the number of Computer Science graduate programs all over the country. Therefore, we can conclude that the combination of these events changed the way researchers used to collaborate, thus better explaining the change in the community dynamics during this decade. 
Important nodes The identification of important nodes within a social network structure is a common activity in SNA. Usually, the identification of such nodes is performed by using centrality metrics, such as closeness and betweenness [6]. These metrics aim to identify nodes that possess strategic locations within the social network structure. A strategic location may indicate that a node has a high influence over other nodes, or that it holds the attention of nodes whose positions are not as convenient in the social context. The main idea behind the closeness centrality metric is to show how close a node is to all other nodes in the network, i.e., how many edges separate a node from other nodes. On the other hand, the main idea behind the betweenness centrality is to show how often a node is in the shortest path between any two other nodes. From the perspective of a co-authorship network, the closeness centrality may indicate the authors with a favorable location in the network structure to start the dissemination of new scientific findings or research directions to the whole network. For instance, if an author with a high closeness disseminates a new scientific finding, the probability that this new finding reaches the whole network in the least amount of time is higher than if the dissemination started at an author with a lower closeness. In the case of the betweenness centrality, it may indicate the most efficient authors to act as bridges to carry information among different authors or communities. For instance, if an author has a high betweenness, the probability that a given piece of information being disseminated passes through this researcher is higher than for an author with a lower betweenness. Therefore, we expect these metrics to identify not only strategically located authors in the co-authorship network, but also distinguished researchers in the scientific community of computer networks and distributed systems. 
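For unweighted graphs, both centralities can be computed with breadth-first search; betweenness additionally uses Brandes' dependency accumulation. A minimal sketch on a toy path graph (names and data are illustrative, not SBRC data):

```python
from collections import deque

def closeness(adj, u):
    """(Number of reachable nodes) / (sum of BFS distances from u)."""
    dist = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return (len(dist) - 1) / sum(dist.values()) if len(dist) > 1 else 0.0

def betweenness(adj):
    """Brandes' algorithm: accumulate shortest-path dependencies over BFS DAGs."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}
        sigma[s] = 1
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}  # undirected: each pair counted twice

# Path a-b-c: b bridges every a-c shortest path.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(closeness(adj, 'b'), betweenness(adj)['b'])
```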
Table 5 shows the top 10 authors with the largest betweenness values, and Table 6 shows the top 10 authors with the largest closeness values. Indeed, we can note by looking at both tables that authors identified by both metrics are researchers that are widely known within the SBRC community, and even within the international scientific community. Conversely, we can also note that some prolific authors (e.g., Antonio A. F. Loureiro and Otto C. M. B. Duarte shown in Table 2) are not listed in either table. Hence, one may wonder whether these metrics are actually accurate in capturing influential authors in the co-authorship network and also distinguished researchers. On the other hand, these authors may have a high impact in their research field but not as high an impact considering the interaction among research topics. Table 5 Top 10 betweenness authors Table 6 Top 10 closeness authors For instance, the researcher Alexandre Lages is in the top 10 authors for the closeness, but this author has only four publications in the SBRC and his last work was in 2007. However, a careful analysis of the collaborations of this author explains why such a fact occurs. It also highlights that the importance of an author in the co-authorship network, as identified by the centrality metrics, is strongly influenced by the pattern of his collaborations. That is, despite Lages' small number of publications, they were in collaboration with very influential and central authors. For instance, in 2004, Lages' work had the following influential authors as collaborators: Flávia Coimbra Delicato (16 publications in SBRC), Luci Pirmez (30 publications in SBRC) and José Ferreira de Rezende (46 publications in SBRC). Lages also has collaborations with José Neuman de Souza (17 publications in SBRC), Lisandro Granville Zambenedetti (25 publications in SBRC) and Liane Margarida Rochenbach Tarouco (44 publications in SBRC). 
It can be observed that these authors are identified by one or both metrics as influential within the SBRC community (although the author Lisandro Granville Zambenedetti does not appear in either table, he is in the top 20 for both centrality metrics). From this result we can conclude that when an author collaborates with central authors with a high closeness, then this researcher also increases his own closeness to all other authors in the network. For instance, in 2004, when Lages published together with José Ferreira de Rezende, his distance to Otto C. M. B. Duarte went from unreachable to two edges. Therefore, a collaboration with a central author made Lages closer to another author that was not his direct collaborator. Notice that the same may also happen to the betweenness, i.e., when two or more authors publish a paper together, these authors may create a new "bridge" connecting different groups of researchers, thus increasing the betweenness for these authors. Looking at Tables 5 and 6 in this section and Table 2 in Sect. 4, we can notice two interesting facts. First, the top two publishers in SBRC, Antonio A. F. Loureiro and Otto C. M. B. Duarte, do not appear in the top 10 of either centrality metric. Second, an author that is not in the top 30 publishers in SBRC, José Neuman de Souza (17 publications in SBRC), is the most central author according to both centrality metrics. For instance, if we look into the history of both Loureiro and Souza we can notice similar aspects. They have been publishing constantly in SBRC since 1995, they appear in almost the same number of communities (Loureiro appears in 7, while Souza in 6), they collaborate with almost the same number of universities (Loureiro has collaborators in 14 universities, while Souza has collaborators in 15) and also states (Loureiro has collaborators in 11 states, while Souza has collaborators in 10). 
However, once again, a careful analysis of the collaboration of these authors might explain why such facts occur. Using the same \(\langle k_{nn} \rangle\) metric as in Sect. 5 we find that the average degree of Loureiro's collaborators is 6.42, while for Souza it is 14.28. Therefore, we can assume that while Loureiro usually publishes with his students, Souza usually publishes with senior researchers, probably acting as a "bridge" among prominent groups within the SBRC community. In particular, Souza is a collaborator of 5 authors in the top 10 betweenness and of 8 authors in the top 10 closeness. As an experiment, let us assume that Loureiro and Souza published a paper together at some point in the history of SBRC, resulting in an edge between the two authors. By adding this single collaboration, Loureiro goes from the 51st largest closeness in the network to the 13th largest closeness. Considering the betweenness, Loureiro goes from the 11th largest betweenness to the 6th largest betweenness. Actually, Loureiro's betweenness increases by about 60 %. Therefore, we can conclude that in a co-authorship collaboration network, the number of publications alone does not dictate the importance of an author within the community, but rather the pattern of his collaborations. Furthermore, it is important to notice that centrality metrics are important tools in identifying strategic nodes in a network structure. Nevertheless, these metrics alone do not hold the final word on which nodes are actually important or not. For instance, we showed that using these metrics alone we were able to identify a central author that, apparently, is not active in the community anymore, and also active and prolific authors that are not considered central authors. Figure 21a and b show the evolution of the betweenness and closeness over the years for the authors holding the five largest values in all SBRC history, as previously presented in Tables 5 and 6. 
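The \(\langle k_{nn} \rangle\) comparison above, the average degree of an author's collaborators, takes one line per node. The toy graph below (made-up names, not SBRC data) contrasts a hub who publishes only with low-degree co-authors, a student-like pattern, against one of those co-authors:

```python
def avg_neighbor_degree(adj, u):
    """Mean degree of u's neighbors; high values indicate links to well-connected nodes."""
    return sum(len(adj[v]) for v in adj[u]) / len(adj[u])

# 'hub' collaborates only with degree-1 co-authors.
adj = {'hub': {'s1', 's2', 's3'},
       's1': {'hub'}, 's2': {'hub'}, 's3': {'hub'}}
print(avg_neighbor_degree(adj, 'hub'))  # collaborators all have low degree
print(avg_neighbor_degree(adj, 's1'))   # its single collaborator is the hub
```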
For both metrics, the authors alternate their positions for the highest value throughout the years. For instance, Maurício F. Magelhães, Paulo R. F. Cunha, Nelson L. S. da Fonseca and José Neuman de Souza had the largest value of betweenness in different years, with the latter holding the top position since 2004. Notice that the values of closeness follow a similar behavior, which is mainly due to the arrival of new authors in the network and the emergence of new collaborations, especially after 1995. In particular, we can see that both metrics drastically increased in 2001 for the authors José Neuman de Souza and Nelson L. S. da Fonseca due to a new collaboration between them. Recall from Sect. 6 that this collaboration was responsible for merging the two largest connected components at the time. Figure 21c shows the degree evolution for the five researchers with the highest degrees in the network. It is worth noticing that four of the five researchers have few collaborations until 1995, but then experience a dramatic increase in their degrees afterwards. Evolution of the betweenness, closeness and degree over the years for the top-5 prominent authors Figure 22a, b show the first three moments of the betweenness and closeness distributions. Regarding the betweenness, the skewness value remains at 1, indicating that the betweenness distribution follows a power-law distribution. For the closeness, a sign change is observed for the skewness, indicating a shift in the distribution's asymmetry. During the late 1980s and early 1990s, there is a small number of authors with high closeness values. Around 1997 there is a balance, and, in 2012, there is a high number of authors with high closeness values. The main observations we can draw from these results are that the SBRC network has a small set of highly influential nodes. Moreover, these nodes can easily spread information to all nodes in the network, due to the "proximity" among nodes in the network. 
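The skewness tracked in Fig. 22 is the third standardized moment of the distribution; a sign flip means the heavy tail switches sides. A minimal stdlib sketch (toy data, not the SBRC distributions):

```python
def skewness(xs):
    """Sample skewness: third central moment over the 1.5 power of the second."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

print(skewness([1, 2, 3]))      # symmetric: 0
print(skewness([1, 1, 1, 10]))  # right-skewed: positive
```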
Indeed, such a characteristic is very desirable for a scientific network, especially if we consider the ease of spreading new research directions or findings. The first three moments of the betweenness and closeness distributions Homophily and its impact In SNA, the homophily principle states that similar nodes are more likely to connect than non-similar ones [24]. Similar nodes are those that share, for instance, the same gender, age, social status, religion, education, geographic location, or other types of attributes. Homophily has powerful implications in our world, limiting the information people receive, the attitudes they take, and the interactions they experience [27]. Thus, in this section we analyze homophily in the SBRC network, using the geographic location of the corresponding author as the node attribute that determines similarity, i.e., the state where the author's institution is located. It is natural to think that researchers who are geographically closer are more likely to publish together. However, here we also show the impact of this expected geographic segregation on the spread of research information in a large country such as Brazil. The calculation of the network homophily we use here is very intuitive. Consider a node \(i\) and its class \(c_i\), which, in the present case, can be its geographical region (e.g. southeast), state or university. The homophily of the network \(G(V,E)\) is calculated as $$\begin{aligned} \text{ Homophily }=\frac{\sum _{(i,j) \in E}\mathbb {1}_{\left[ c_i = c_j\right] }}{|E|}, \end{aligned}$$ where \(\mathbb {1}_{\left[ c_i = c_j \right] }\) is an indicator function that assumes the value \(1\) if the class, or state, \(c_i\) of node \(i\) is equal to the class \(c_j\) of node \(j\), and 0 otherwise. In other words, the homophily is calculated by counting the number of edges between collaborators of the same state and dividing it by the total number of edges. In Figs. 
23a–c, we show the evolution of the homophily in the SBRC network. We show homophily results computed yearly, i.e., computed considering the papers published during each edition of the event, as well as homophily results computed over the aggregated network built from all publications up to a given year. In the first year, the network homophily is 1 for all node classes (regions, states and universities), indicating that researchers only collaborated with others from the same university. However, the aggregated homophily drops very sharply in the first 4 years for all node classes, with a smooth decay in the following years. Considering the yearly homophily measures, we note a very irregular behavior, with peaks and valleys, although, in general, a decreasing trend can be noticed. Finally, observe that the homophily in general grows when the granularity of the node class moves from "university" to "region", indicating that the geographic aspect plays an important role in the formation of collaborations. The growing rate of collaborations between researchers of different backgrounds, and the decrease in the network's homophily, occur together with a more even distribution of publications among the regions, states and universities of Brazil After verifying that homophily decreases over time in the SBRC network, a natural step is to analyze whether it has any impact on research. As we have seen previously, the distribution of publications among the states is concentrated in a few states. However, recently a few states, which were completely inactive, showed small but significant progress. For instance, the state of Pará had only two publications in the first 20 years of SBRC, in the years of 1997 and 1998. In the last 10 years, researchers from the state of Pará published a total of nine papers in six distinct years. 
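The homophily measure defined above reduces to the fraction of intra-class edges. A minimal sketch with made-up authors and states (illustrative data only):

```python
def homophily(edges, cls):
    """Fraction of edges whose endpoints share a class (e.g., the same Brazilian state)."""
    same = sum(1 for u, v in edges if cls[u] == cls[v])
    return same / len(edges)

cls = {'a': 'MG', 'b': 'MG', 'c': 'RJ', 'd': 'RJ'}
edges = [('a', 'b'), ('c', 'd'), ('b', 'c'), ('a', 'd')]
print(homophily(edges, cls))  # 2 of the 4 edges are intra-state
```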
In order to formalize this, we use the Gini coefficient [8, 16] to measure the inequality in the number of publications over the regions, states and universities of Brazil. The Gini coefficient was initially proposed to describe the income inequality in a population, commonly between countries and within countries [8, 16]. It has found application in the study of inequalities in several other disciplines [39] and here we apply it to measure how the publications are distributed among the states of Brazil. It assumes values from 0, which expresses perfect equality, where all values are the same, to 1, which expresses maximal inequality among values, where all publications are concentrated in a single state. In Figs. 23d–f, we show the Gini coefficient for the SBRC network computed on a yearly basis as well as over the aggregated network, considering the distribution of the publications among the geographical regions, states and universities. Like the homophily, the Gini coefficient decreases over the years, indicating that the distribution of the number of publications is becoming more equal. In fact, it decreases practically at the same rate as the homophily decreases. The Pearson correlation coefficient between the homophily and the Gini coefficient in the SBRC network is 0.90 and, yearly, 0.45, among regions; 0.95 and, yearly, 0.54, among states; and 0.92 and, yearly, 0.70, among universities. This fact strongly suggests that the increase in the collaborations between researchers from different backgrounds significantly contributes to diminishing the inequality in the number of publications in Brazil, indicating that the network is becoming more heterogeneous. Although researchers from different parts of Brazil are publishing more and more in SBRC, when we compute the Gini coefficient considering the number of publications per author, instead of per locality, we see that the inequality is increasing. First, observe in Fig. 
24 that the Gini coefficient of the yearly SBRC network is considerably low throughout the 30 years of the symposium, varying from \(\approx 0.04\) to \(\approx 0.23\) with a mean of \(0.13\). This suggests that researchers publish equally in SBRC each year. However, observe in Fig. 24 that the Gini coefficient of the aggregated SBRC network grows approximately \(0.5\) points in 30 years. Conversely, this suggests that while new researchers are constantly publishing in SBRC every year, there is also a group of researchers who are always publishing in the conference, significantly increasing their number of publications compared to the others. This, in fact, is not a surprise, since it is common to have in social networks a few "super nodes" while the majority are "ordinary" nodes, a consequence of the "rich get richer" phenomenon. The inequality concerning publications in SBRC among researchers in Brazil is increasing over time, probably due to the "rich get richer" phenomenon This conclusion shows the importance of inter-state and inter-country collaboration programs, such as the recently created "Ciência Sem Fronteiras" Brazilian program and the creation of graduate programs over the years. Such programs and other incentive mechanisms allow regions with low research activity to develop, mirroring their more productive partners. Cross analysis In Fig. 25, we show the Pearson correlation coefficient between network metrics, namely the degree \(k\), clustering coefficient \(cc\), betweenness \(B\) and closeness \(C\) centralities, and the number of papers \(p\) an author has published. We consider three snapshots of the SBRC network that divide time into three periods of 10 years. First, we observe that for some metrics, the correlation changes over the years, while for others, it remains constant. 
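The Gini coefficient used in Figs. 23 and 24 can be computed from the sorted distribution of publication counts; a minimal sketch (assumes a non-empty list with a positive sum, toy values only):

```python
def gini(values):
    """Gini coefficient: 0 for perfect equality, approaching 1 for full concentration."""
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * sum(xs)) - (n + 1) / n

print(gini([5, 5, 5, 5]))   # perfectly equal publication counts
print(gini([0, 0, 0, 20]))  # all publications concentrated in one place
```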
Note that the correlation between the number of papers published \(p\) and the network metrics degree \(k\) and the betweenness centrality \(B\) grows over time. In 2012, for instance, the correlation between the degree and the number of papers published is \(0.89\), a very high correlation that strongly corroborates the fact that the "rich get richer" phenomenon is present in co-authorship networks, since high degree nodes tend to "attract" a higher number of publications. On the other hand, observe that the clustering coefficient is always negatively correlated (\(-0.27\) in 2012) with the number of papers published. This indicates that researchers who do not expand their collaborations, i.e., whose circle of collaborations remains constant over the years, tend to publish less in the SBRC. Finally, it is interesting to observe that the closeness centrality is becoming a more independent feature over the years, having in 2012 very small correlations with all the other metrics. The Pearson correlation coefficient between network metrics (degree \(k\), clustering coefficient \(cc\), betweenness \(B\) and closeness \(C\) centralities) and the number of papers \(p\) an author has published In this paper we presented an analysis of the collaboration network between authors who have published in the editions of the Brazilian Symposium on Computer Networks and Distributed Systems. From this analysis, we have shown why the symposium is so relevant for the Brazilian research community and identified the regions with the highest number of participations. Moreover, we showed that the main kind of co-authorship is between well-established authors and newcomers to the symposium, which represents the natural kind of co-authorship between student and advisor. The most prominent communities were presented in two visualizations, one by universities and another by the Brazilian states. 
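The correlations reported in Fig. 25 are Pearson coefficients between per-author metric vectors; a stdlib sketch (toy vectors, not the SBRC data):

```python
def pearson(xs, ys):
    """Pearson's r: covariance of xs and ys over the product of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# e.g., degree vs. number of papers for five hypothetical authors
print(pearson([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # perfectly correlated
print(pearson([1, 2, 3], [3, 2, 1]))               # perfectly anti-correlated
```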
Furthermore, we identified the researchers who have a strategic position within the collaboration network and, thus, the power to influence others. Finally, we presented some Brazilian historical aspects that may have had a great impact on the symposium's success, by allowing the collaboration of geographically distant researchers, thus strengthening the creation and establishment of new communities. As future work, it would be interesting to analyze other Brazilian symposia, such as the SBBD, SBES and SIBGRAPI. By analyzing these communities at the same level of detail as the study performed here, it would be possible to draw a bigger picture of the research community in Computer Science in Brazil. All metrics needed to perform this investigation are described in Sect. 3. See http://bibliotecadigital.sbc.org.br. The aggregated number of authors (universities and papers) for year \(y\) is the number of unique authors (universities and papers) in all years up to \(y\). Henceforth, all aggregated results follow the same logic. The number of points that can be explained by a power law is too small. http://www.cienciasemfronteiras.gov.br. www.tam.com.br. www.voegol.com.br. Albert R, Jeong H, Barabási AL (1999) Diameter of the world wide web. Nature 401:130–131 Backstrom L, Huttenlocher D, Kleinberg J, Lan X (2006) Group formation in large social networks: membership, growth, and evolution. In: KDD '06: Proceedings of the 12th ACM SIGKDD, pp 44–54 Bayati M, Kim JH, Saberi A (2010) A sequential algorithm for generating random graphs. Algorithmica 58(4):860–910 Bazzan A, Argenta V (2011) Network of collaboration among PC members of Brazilian computer science conferences. J Braz Comput Soc 17:133–139 Ben-Naim E, Frauenfelder H, Toroczkai Z (2004) Complex networks. Lecture notes in physics. Springer, Berlin Bonacich P (1987) Power and centrality: a family of measures. 
Am J Sociol 95(5):1170–1182 de Carvalho MSRM (2006) A trajetória da Internet no Brasil: do surgimento das redes de computadores à instituição dos mecanismos de governança. COPPE, Federal University of Rio de Janeiro, Master's Thesis Ceriani L, Verme P (2012) The origins of the Gini index: extracts from Variabilità e mutabilità (1912) by Corrado Gini. J Econ Inequal 10(3):421–443 Clauset A, Shalizi CR, Newman MEJ (2009) Power-law distributions in empirical data. SIAM Rev 51(4):661–703 Costa LF, Rodrigues FA, Travieso G, Boas PRV (2007) Characterization of complex networks: a survey of measurements. Adv Phys 56:167–242 Crandall D, Cosley D, Huttenlocher D, Kleinberg J, Suri S (2008) Feedback effects between similarity and social influence in online communities. In: Proceedings of 14th ACM SIGKDD, pp 160–168 Du N, Faloutsos C, Wang B, Akoglu L (2009) Large human communication networks: patterns and a utility-driven generator. In: KDD '09: Proceedings of the 15th ACM SIGKDD, pp 269–278 Faloutsos M, Faloutsos P, Faloutsos C (1999) On power-law relationships of the internet topology. In: SIGCOMM, pp 251–262 Fortunato S, Lancichinetti A (2009) Community detection algorithms: a comparative analysis: invited presentation, extended abstract. In: Proceedings of the fourth international ICST conference on performance evaluation methodologies and tools, VALUETOOLS '09, pp 27:1–27:2 Freire V, Figueiredo D (2011) Ranking in collaboration networks using a group based metric. J Braz Comput Soc 17:255–266 Gini C (1912) Variabilità e mutabilità: contributo allo studio delle distribuzioni e delle relazioni statistiche. pt. 1. Tipogr. di P. Cuppini Guo Z, Zhang Z, Zhu S, Chi Y, Gong Y (2009) Knowledge discovery from citation networks. In: 2009 Ninth IEEE international conference on data mining, pp 800–805 Hassan AE, Holt RC (2004) The small world of software reverse engineering. 
In: Proceedings of the 11th working conference on reverse engineering, pp 278–283 Hidalgo CA, Rodriguez-Sickert C (2008) The dynamics of a mobile phone network. Phys A Stat Mech Appl 387(12):3017–3024 Hill S, Nagle A (2009) Social network signatures: a framework for re-identification in networked data and experimental results. In: CASON '09: Proceedings of the 2009 international conference on computational aspects of social networks, pp 88–97 Jensen DD, Fast AS, Taylor BJ, Maier ME (2008) Automatic identification of quasi-experimental designs for discovering causal knowledge. In: KDD '08: Proceeding of the 14th ACM SIGKDD, pp 372–380 Kossinets G, Watts DJ (2006) Empirical analysis of an evolving social network. Science 311(5757):88–90 Kumar R, Novak J, Tomkins A (2006) Structure and evolution of online social networks. In: KDD '06: Proceedings of the 12th ACM SIGKDD, pp 611–617 Lazarsfeld PF, Merton RK (1954) Friendship as a social process: a substantive and methodological analysis. In: Freedom and control in modern society 18(1):18–66 Leskovec J, Kleinberg JM, Faloutsos C (2007) Graph evolution: densification and shrinking diameters. TKDD 1(1):2 Lewis K, Kaufman J, Gonzalez M, Wimmer A, Christakis N (2008) Tastes, ties, and time: a new social network dataset using Facebook.com. Soc Netw 30(4):330–342 McPherson M, Lovin LS, Cook JM (2001) Birds of a feather: homophily in social networks. Ann Rev Sociol 27(1):415–444 Vaz de Melo POS, Almeida VAF, Loureiro AAF (2008) Can complex network metrics predict the behavior of NBA teams? In: KDD '08: Proceeding of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pp 695–703 Menezes GV, Ziviani N, Laender AH, Almeida V (2009) A geographical analysis of knowledge production in computer science. In: Proceedings of the 18th international conference on world wide web, WWW '09, pp 1041–1050 Nascimento MA, Sander J, Pound J (2003) Analysis of SIGMOD's co-authorship graph. 
SIGMOD Rec 32(3):8–10 Newman MEJ (2001) Scientific collaboration networks. I. Network construction and fundamental results. Phys Rev E 64(1):016131+ Newman MEJ (2001) Scientific collaboration networks. II. Shortest paths, weighted networks, and centrality. Phys Rev E 64(1):7 Newman MEJ (2001) The structure of scientific collaboration networks. Proc Natl Acad Sci USA 98(2):404–409 Newman MEJ (2003) The structure and function of complex networks. SIAM Rev 45(2):167–256 Newman MEJ (2004) Coauthorship networks and patterns of scientific collaboration. In: Proceedings of the national academy of sciences, pp 5200–5205 Newman MEJ (2011) Complex systems: a survey. Am J Phys 79(8):800–809 Palla G, Derényi I, Farkas I, Vicsek T (2005) Uncovering the overlapping community structure of complex networks in nature and society. Nature 435(7043):814–818 Procópio PS, Laender AHF, Moro MM (2011) Análise da rede de coautoria do Simpósio Brasileiro de Bancos de Dados. In: Simpósio Brasileiro de Banco de Dados, pp 050-1–050-8 Sadras V, Bongiovanni R (2004) Use of Lorenz curves and Gini coefficients to assess yield inequality within paddocks. Field Crops Res 90:303–310 Seshadri M, Machiraju S, Sridharan A, Bolot J, Faloutsos C, Leskovec J (2008) Mobile call graphs: beyond power-law and lognormal distributions. In: KDD '08: Proceeding of the 14th ACM SIGKDD, pp 596–604 Silva TH, Celes CSFS, Mota VFS, Loureiro AAF (2012) Overview of ubicomp research based on scientific publications. In: Proceedings of IV Simpósio Brasileiro de Computação Ubíqua e Pervasiva, SBCUP 2012 Watts DJ (2004) Six degrees: the science of a connected age. W. W. Norton & Company, New York Watts DJ, Dodds PS, Newman MEJ (2002) Identity and search in social networks. Science 296(1):1302–1305 Watts DJ, Strogatz SH (1998) Collective dynamics of "small-world" networks. 
Nature 393:440–442

This work is partially supported by the authors' individual grants and scholarships from CNPq, CAPES, and FAPEMIG, as well as by the Brazilian National Institute of Science and Technology for Web Research (MCT/CNPq/INCT Web Grant Number 573871/2008-6).

Federal University of Minas Gerais, Belo Horizonte, MG, Brazil: Guilherme Maia, Pedro O. S. Vaz de Melo, Thiago H. Silva, Jussara M. Almeida & Antonio A. F. Loureiro
Federal University of São João del-Rei, São João del-Rei, MG, Brazil: Daniel L. Guidoni & Fernanda S. H. Souza
Correspondence to Guilherme Maia. G. Maia, P. O. S. Vaz de Melo, D. L. Guidoni, F. S. H. Souza and T. H. Silva have contributed equally to this research paper.

Appendix: Historical aspects
To better understand the collaboration network of the SBRC, it is important to point out some relevant events and technological advancements in Brazilian history: for instance, the development of the Internet in the late 1980s and its public availability in 1995, the privatization of the telephony sector in the late 1990s, and the increase in the number of graduate programs during the 2000s. Therefore, in the next few paragraphs we outline the main events that may have contributed to fostering research development in Brazil, with a special focus on the computer networks and distributed systems communities comprising the SBRC. Initially, we highlight historical events regarding the development of the Internet in Brazil from the 1980s to the present, divided into three periods [7]. Thereafter, we highlight key events in the telephony and transportation sectors. Finally, we present the main advancements on the educational front. 1980s: In the beginning of this decade, there were experimental computer networks inside universities, mainly connecting workstations.
In 1985, Embratel (Empresa Brasileira de Telecomunicações—Brazilian Telecommunications Company) released the RENPAC (Rede Nacional de Comunicação de Dados por Comutação de Pacotes—National Packet Switched Network) to interconnect workstations and mainframe computers located anywhere in the country and abroad. However, the research community wanted to interconnect the Brazilian academic network to some academic network in the USA using Bitnet, the predecessor to the Internet in Brazil. In 1988, there were three links between Brazil and the USA. The first link was created between the Federal University of Rio de Janeiro and the University of California, Los Angeles. The second link was created between the LNCC (Laboratório Nacional de Computação Científica—National Scientific Computation Laboratory) and the University of Maryland. Later, Fapesp (Fundação de Amparo à Pesquisa do Estado de São Paulo—Foundation for Research Support for the State of São Paulo) created a link to the Fermi National Laboratory. At the end of this decade, Fapesp deployed the Academic Network at São Paulo (ANSP), the first Brazilian academic network to connect universities in the São Paulo state to the Bitnet network using a 4,800 bps link. In 1989, the Ministério da Ciência e Tecnologia (Science and Technology Ministry) created the RNP (Rede Nacional de Pesquisa—National Research Network) in order to create an Internet infrastructure to connect the academic community in Brazil. In summary, we can note that in the early development of the Internet in Brazil, access was restricted to universities and research institutions, mainly in the Southeast region of the country. 1990s: This decade was very important to increase Internet usage among universities, researchers and the general public.
In the beginning of this decade, the Brazilian government created incentives to foster the acquisition of computers, peripherals and telecommunications equipment by allowing the importation of electronic devices at lower taxation rates. In 1991, the RNP started to expand, and in 1992 its backbone covered 11 cities with 9.6 and 64 kbps links. In 1994, the RNP backbone covered all the Brazilian regions. Due to the Internet expansion during the beginning of this decade, the Brazilian government created the Internet Steering Committee in Brazil (CGI.br—Comitê Gestor da Internet no Brasil) to coordinate and integrate all Internet initiative services in the country, promoting technical quality, innovation and dissemination of the services offered. The year 1995 was a milestone for the development of the commercial Internet. The Ministries of Communications and of Science and Technology allowed the establishment of private Internet Service Providers (ISPs), thus enabling the first commercial operations in Brazil. In 1999, UOL launched the first Brazilian instant messenger software, called ComVC. At the end of this decade, the number of Internet users was more than 2.5 million. We can note that during this decade Internet access started to become more democratic in Brazil, when all regions of the country became covered by the main backbone and also due to the development of the commercial Internet. 2000s: In late 2000, Brazil had more than 150 ISPs. Due to the shortcomings of the then-current Internet, a new Internet, called Internet 2, with higher performance was developed. During this period, the RNP network was fully updated to support advanced applications. Since then, the RNP backbone has points of presence in all Brazilian states. In 2005, the backbone was updated with optical links operating at multiple gigabits per second. Nowadays, there are some Internet providers that offer links of up to 100 Mbps.
We can note that it was during the 2000s that the Internet reached all parts of Brazil, thus becoming fully democratic. Moreover, the increase in link bandwidth enabled the use of advanced applications, e.g., collaboration tools and VoIP telephony applications. These aspects combined certainly contributed to diminishing the barriers imposed by geographical distances, thus enabling the interaction and collaboration of geographically distant groups of people. There are other important events that happened during these three decades that may have had a great impact on how people interact. For instance, in 1998 the Brazilian government privatized the phone sector. As a consequence, the price of making phone calls decreased substantially and the number of phones increased. Moreover, in the aviation sector we can highlight the following points: in 1996, the TAM (Transportes Aéreos Marilha) airline began to operate nationwide flights. The GOL airline was established and started operations in 2001 with affordable ticket fares compared to existing airline companies. In a country with continental dimensions such as Brazil, this certainly contributed to attracting researchers from all regions of Brazil to gather together every year at the symposium, thus increasing the chances of new collaborations. Another important historical aspect that may have had a significant influence on the temporal behavior of the SBRC network is the growth of the number of computer science graduate programs over the years, shown in Fig. 26a. Observe how the behavior of the curve corroborates all the temporal results we have shown in this paper. From the 1990s onward, the number of programs has grown significantly, by approximately one new program each year. Moreover, Fig. 26b shows this growth for each region of Brazil. It is interesting to notice that the evolution of the number of publications in SBRC for each region, as shown in Fig.
5, follows almost the same pattern as the evolution of graduate programs for each region. This clearly reflects the fact that investments in educational development, especially in graduate programs, lead to an increase in knowledge production.
Number of CS graduate programs in Brazil over the years
Maia, G., Vaz de Melo, P.O.S., Guidoni, D.L. et al. On the analysis of the collaboration network of the Brazilian symposium on computer networks and distributed systems. J Braz Comput Soc 19, 361–382 (2013). https://doi.org/10.1007/s13173-013-0109-7
Issue Date: September 2013
Keywords: Collaboration networks
Interval-valued fuzzy \(\phi\)-tolerance competition graphs
Tarasankar Pramanik, Sovan Samanta, Madhumangal Pal, Sukumar Mondal & Biswajit Sarkar
SpringerPlus volume 5, Article number: 1981 (2016)
This paper develops the interval-valued fuzzy \(\phi\)-tolerance competition graph, an extension of basic fuzzy graphs, where \(\phi\) is any real-valued function. An interval-valued fuzzy \(\phi\)-tolerance competition graph is constructed by taking all the fuzzy sets of a fuzzy \(\phi\)-tolerance competition graph as interval-valued fuzzy sets. Products of two IVFPTCGs and relations between them are defined. Some hereditary properties of products of interval-valued fuzzy \(\phi\)-tolerance competition graphs are presented. An application of interval-valued fuzzy competition graphs in image matching is given to illustrate the model. Graphs can be considered as bondings of objects. To model a real problem, those objects are bonded by some relations; friendship, for example, is the bonding of pupils. If vagueness arises in the bonding, then the corresponding graph can be modelled as a fuzzy graph. Much research is available in the literature, such as Bhutani and Battou (2003) and Bhutani and Rosenfeld (2003). The competition graph was defined in Cohen (1968). In ecology, the food web problem is modelled by a digraph \(\overrightarrow{D}=(V,\overrightarrow{E})\). In a food web there is competition between species (members of the food web). A vertex \(x\in V(\overrightarrow{D})\) represents a species in the food web and an arc \(\overrightarrow{(x,s)}\in \overrightarrow{E}(\overrightarrow{D})\) means that x kills the species s. If two species x and y have a common prey s, they will compete for s. Based on this analogy, Cohen (1968) defined a graph model (the competition graph of a digraph), which represents the relationship of competition among the species in the food web.
The corresponding undirected graph \(G=(V,E)\) of a certain digraph \(\overrightarrow{D}=(V, \overrightarrow{E})\) is said to be the competition graph \(C(\overrightarrow{D})\) with vertex set V and edge set E, where \((x,y)\in E\) if and only if there exists a vertex \(s\in V\) such that \(\overrightarrow{(x,s)},\overrightarrow{(y,s)}\in \overrightarrow{E(\overrightarrow{D})}\) for any \(x,y\in V,\, (x\ne y)\). There are several variations of competition graphs in Cohen's contribution (Cohen 1968). After Cohen, some derivations of competition graphs can be found in Cho et al. (2000), where the m-step competition graph of a digraph was defined. The p-competition graph of a digraph is defined in Kim et al. (1995); p-competition means that if two species have at least p common prey, then they compete with each other. In graph theory, an intersection graph is a graph which represents the intersections of sets. An interval graph is the intersection graph of a multiset of intervals on the real line. Interval graphs are useful in resource allocation problems in operations research. Besides, interval graphs are used extensively in mathematical modeling, archaeology, developmental psychology, ecological modeling, mathematical sociology and organization theory. Tolerance graphs originated in Golumbic and Monma (1982) to extend some of the applications associated with interval graphs. Their original purpose was to solve scheduling problems for arrangements of rooms, vehicles, etc. Tolerance graphs are a generalization of interval graphs in which each vertex can be represented by an interval and a tolerance such that an edge occurs if and only if the overlap of the corresponding intervals is at least as large as the tolerance associated with one of the vertices.
Hence a graph \(G = (V,E)\) is a tolerance graph if there is a set \(I = \{I_v{:}\,v \in V\}\) of closed real intervals and a set \(\{T_v{:}\,v \in V\}\) of positive real numbers such that \((x,y) \in E\) if \(|I_x\cap I_y| \ge \min \{ T_x,T_y\}\). The collection <\(I,T\)> of intervals and tolerances is called a tolerance representation of the graph G. Tolerance graphs were used in order to generalize some well-known applications of interval graphs. In Brigham et al. (1995), tolerance competition graphs were introduced; some uncertainty is included in that paper by assuming tolerances of competition. A recent work on fuzzy k-competition graphs is available in Samanta and Pal (2013), where fuzziness is applied in the representation of competitions. Recently, Pramanik et al. (2016) defined and studied the fuzzy \(\phi\)-tolerance competition graph. However, fuzzy \(\phi\)-tolerance models use only numbers between 0 and 1, whereas interval-valued numbers are more appropriate for modelling uncertainty. Many other related works are found in Pramanik et al. (2014) and Samanta and Pal (2015). Since Rosenfeld (1975), fuzzy graph theory has grown into various branches. Using the concept of fuzzy graphs, Koczy (1992) discussed fuzzy graphs to evaluate and optimize networks. Samanta and Pal (2013) showed that fuzzy graphs can be used to model competition in ecosystems. After that, they introduced some different types of fuzzy graphs (Samanta and Pal 2015; Samanta et al. 2014). Bhutani and Battou (2003) and Bhutani and Rosenfeld (2003) discussed different arcs in fuzzy graphs. For further details of fuzzy graphs, readers may refer to Mathew (2009), Mordeson and Nair (2000) and Pramanik et al. (2014). Applications of fuzzy graphs include data mining, image segmentation, clustering, image capturing, networking, communication, planning, scheduling, etc. In this paper, the interval-valued fuzzy \(\phi\)-tolerance competition graph is introduced.
Some relations on products of interval-valued \(\phi\)-tolerance competition graphs are established. The authors' contributions to the development of competition graphs and tolerance graphs are listed in Table 1. Also, the flow chart of the contributions towards this research is given in Fig. 1. Table 1 Contributions of the authors towards interval valued \(\phi\)-tolerance competition graphs Flow-chart of the research Preliminaries A function \(\alpha {:}\,X\rightarrow [0,1]\) defined on a crisp set X, called the membership function, is said to be a fuzzy set \(\alpha\) on X. The support of \(\alpha\) is \({{\mathrm{supp}}}(\alpha ) =\{x\in X| \alpha (x)\ne 0\}\) and the core of \(\alpha\) is \({\mathrm{core}}(\alpha ) = \{x\in X| \alpha (x)=1\}\). The support length is \(s(\alpha )=|{{\mathrm{supp}}}(\alpha )|\) and the core length is \(c(\alpha )=|{{\mathrm{core}}}(\alpha )|\). The height of \(\alpha\) is \(h(\alpha ) =\max \{\alpha (x)| x\in X\}\). The fuzzy set \(\alpha\) is said to be normal if \(h(\alpha )=1\). A fuzzy graph with a non-void finite set V is a pair \(G = (V, \sigma ,\mu )\), where \(\sigma {:}\,V \rightarrow [0,1]\) is a fuzzy subset of V and \(\mu {:}\,V\times V\rightarrow [0,1]\) is a symmetric fuzzy relation on the fuzzy subset \(\sigma\), such that \(\mu (x,y) \le \sigma (x) \wedge \sigma (y)\) for all \(x,y\in V\), where \(\wedge\) stands for minimum. The degree of a vertex v of a fuzzy graph \(G = (V, \sigma ,\mu )\) is \(\displaystyle d(v)=\sum \nolimits _{u\in V-\{v\}}\mu (v,u)\). The order of a fuzzy graph G is \(\displaystyle O(G)=\sum \nolimits _{u\in V}\sigma (u)\). The size of a fuzzy graph G is \(\displaystyle S(G)=\sum \mu (u,v)\). Let \({\mathcal {F}}=\{\alpha _1,\alpha _2,\ldots , \alpha _n\}\) be a finite family of fuzzy subsets on a set X.
The fuzzy intersection of two fuzzy subsets \(\alpha _1\) and \(\alpha _2\) is a fuzzy set defined by \(\alpha _1\wedge \alpha _2=\left\{ \min \{\alpha _1(x),\alpha _2(x)\}|x\in X\right\}\). The union of two fuzzy subsets \(\alpha _1\) and \(\alpha _2\) is a fuzzy set defined by \(\alpha _1\vee \alpha _2=\left\{ \max \{\alpha _1(x),\alpha _2(x)\}|x\in X\right\}\). \(\alpha _1\le \alpha _2\) for two fuzzy subsets \(\alpha _1\) and \(\alpha _2\), if \(\alpha _1(x)\le \alpha _2(x)\) for each \(x\in X\). The fuzzy intersection graph of \({\mathcal {F}}\) is the fuzzy graph \(Int({\mathcal {F}})=(V, \sigma ,\mu )\), where \(\sigma {:}\,{\mathcal {F}}\rightarrow [0,1]\) is defined by \(\sigma (\alpha _i)=h(\alpha _i)\) and \(\mu {:}\,{\mathcal {F}}\times {\mathcal {F}} \rightarrow [0,1]\) is defined by $$\begin{aligned} \mu (\alpha _i,\alpha _j)=\left\{ \begin{array}{ll} h(\alpha _i\wedge \alpha _j), &{}\quad {\text{if}}\, i\ne j\\ 0, &{}\quad {\text{if}}\, i=j. \end{array}\right. \end{aligned}$$ Here, \(\mu (\alpha _i,\alpha _i)=0\) for all \(\alpha _i\) implies that the said fuzzy graph is a loopless fuzzy intersection graph, and the fuzzy graph has no parallel edges as \(\mu\) is uniquely defined. Let us consider a family of fuzzy intervals \({\mathcal {F}}_{\mathcal {I}}=\{{\mathcal {I}}_1, {\mathcal {I}}_2, \ldots , {\mathcal {I}}_n\}\) on X. Then the fuzzy interval graph is the fuzzy intersection graph of these fuzzy intervals \({\mathcal {I}}_1, {\mathcal {I}}_2, \ldots , {\mathcal {I}}_n\). The fuzzy tolerance of a fuzzy interval is denoted by \({\mathcal {T}}\) and is defined by an arbitrary fuzzy interval whose core length is a positive real number. If the real number is taken as L and \(|i_k-i_{k-1}|=L\), where \(i_k,i_{k-1}\in R\), a set of real numbers, then the fuzzy tolerance is a fuzzy set of the interval \([i_{k-1},i_k]\).
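For a discrete fuzzy set, the notions above (support, core, height, intersection, and the intersection-graph edge membership \(h(\alpha _i\wedge \alpha _j)\)) are easy to compute. Below is a minimal sketch, assuming fuzzy sets are stored as Python dicts mapping elements to membership grades; the sets and values are illustrative, not taken from the text.

```python
# Discrete fuzzy sets represented as dicts {element: membership in [0, 1]}.
def supp(alpha):
    """Support: elements with nonzero membership."""
    return {x for x, m in alpha.items() if m != 0}

def core(alpha):
    """Core: elements with membership exactly 1."""
    return {x for x, m in alpha.items() if m == 1}

def height(alpha):
    """Height: maximum membership value."""
    return max(alpha.values())

def intersection(a1, a2):
    """Pointwise minimum; absent elements count as membership 0."""
    return {x: min(a1.get(x, 0), a2.get(x, 0)) for x in set(a1) | set(a2)}

def edge_membership(ai, aj):
    """mu(alpha_i, alpha_j) = h(alpha_i ^ alpha_j) for i != j, else 0."""
    if ai is aj:
        return 0
    return height(intersection(ai, aj))

a1 = {'x': 0.7, 'y': 1.0, 'z': 0.0}
a2 = {'x': 0.4, 'y': 0.6}
print(supp(a1))                  # {'x', 'y'} (z has membership 0)
print(core(a1))                  # {'y'}
print(edge_membership(a1, a2))   # pointwise min is x->0.4, y->0.6; height 0.6
```

The intersection graph itself is then obtained by evaluating `edge_membership` over every pair in the family \({\mathcal {F}}\).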
The fuzzy tolerance graph \({\mathcal {G}}=(V,\sigma ,\mu )\) is defined as the fuzzy intersection graph of a finite family of fuzzy intervals \({\mathcal {I}}=\{{\mathcal {I}}_1,{\mathcal {I}}_2,\ldots , {\mathcal {I}}_n\}\) on the real line along with tolerances \({\mathcal {T}}=\{{\mathcal {T}}_1,{\mathcal {T}}_2,\ldots ,{\mathcal {T}}_n\}\) associated to each vertex \(v_i\in V\), where \(\sigma {:}\, V\rightarrow [0,1]\) is defined by \(\sigma (v_i)=h({\mathcal {I}}_i)=1\) for all \(v_i\in V\) and \(\mu {:}\, V\times V\rightarrow [0,1]\) is defined by $$\begin{aligned} \mu (v_i,v_j)=\left\{ \begin{array}{ll} 1, &{}\quad {\text { if }}\, c({\mathcal {I}}_i\cap {\mathcal {I}}_j)\ge \min \{c({\mathcal {T}}_i),c({\mathcal {T}}_j)\}\\ \frac{s({\mathcal {I}}_i\cap {\mathcal {I}}_j)-\min \{s({\mathcal {T}}_i), s({\mathcal {T}}_j)\}}{s({\mathcal {I}}_i\cap {\mathcal {I}}_j)}h({\mathcal {I}}_i\cap {\mathcal {I}}_j), &{}\quad {\text { else if }}\,s({\mathcal {I}}_i\cap {\mathcal {I}}_j)\ge \\ &{}\quad \min \{s({\mathcal {T}}_i),s({\mathcal {T}}_j)\}\\ 0, &{}\quad {\text { otherwise}}. \end{array}\right. \end{aligned}$$ A fuzzy interval digraph is a directed fuzzy interval graph whose edge membership function need not be symmetric. An interval number (Akram and Dudek 2011) D is an interval \([a^-, a^+]\) with \(0\le a^-\le a^+\le 1\). For two interval numbers \(D_1=[a_1^-,a_1^+]\) and \(D_2=[a_2^-,a_2^+]\), the following properties are defined: \(D_1+D_2=[a_1^-,a_1^+]+[a_2^-,a_2^+]=[a_1^-+a_2^- -a_1^-\cdot a_2^-, a_1^+ +a_2^+ - a_1^+\cdot a_2^+],\) \(\min \{D_1,D_2\}=[\min \{a_1^-,a_2^-\}, \min \{a_1^+,a_2^+\}],\) \(\max \{D_1,D_2\}=[\max \{a_1^-,a_2^-\}, \max \{a_1^+,a_2^+\}],\) \(D_1\le D_2 \Leftrightarrow a_1^-\le a_2^-\) and \(a_1^+\le a_2^+\), \(D_1=D_2 \Leftrightarrow a_1^-= a_2^-\) and \(a_1^+= a_2^+\), \(D_1<D_2 \Leftrightarrow D_1\le D_2\) and \(D_1\ne D_2\), \(kD_1=[ka_1^-, ka_1^+]\), where \(0\le k\le 1\).
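The interval-number operations listed above translate directly into code. A small sketch, assuming intervals are Python lists \([a^-, a^+]\) and reading the scalar product as \(kD=[ka^-, ka^+]\):

```python
# Interval numbers D = [a_minus, a_plus] with 0 <= a_minus <= a_plus <= 1.
def iv_add(d1, d2):
    # D1 + D2 = [a1- + a2- - a1-*a2-,  a1+ + a2+ - a1+*a2+]
    return [d1[0] + d2[0] - d1[0] * d2[0], d1[1] + d2[1] - d1[1] * d2[1]]

def iv_min(d1, d2):
    # min{D1, D2}: componentwise minimum of the bounds.
    return [min(d1[0], d2[0]), min(d1[1], d2[1])]

def iv_max(d1, d2):
    # max{D1, D2}: componentwise maximum of the bounds.
    return [max(d1[0], d2[0]), max(d1[1], d2[1])]

def iv_leq(d1, d2):
    # D1 <= D2 iff both endpoints are ordered (a partial order).
    return d1[0] <= d2[0] and d1[1] <= d2[1]

def iv_scale(k, d):
    # kD = [k * a-, k * a+] for 0 <= k <= 1.
    return [k * d[0], k * d[1]]

d1, d2 = [0.2, 0.5], [0.3, 0.4]
print(iv_add(d1, d2))   # roughly [0.44, 0.7]
print(iv_min(d1, d2))   # [0.2, 0.4]
print(iv_leq(d1, d2))   # False: 0.5 > 0.4, so D1 and D2 are incomparable
```

Note that `iv_leq` defines only a partial order: intervals such as \([0.2, 0.5]\) and \([0.3, 0.4]\) are incomparable because the endpoints are ordered in opposite directions.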
An interval-valued fuzzy set A on a set X is a function \(\mu _A{:}\, X\rightarrow [0,1]\times [0,1]\), called the membership function, i.e. \(\displaystyle \mu _A(x)=[\mu _A^-(x), \mu _A^+(x)]\). The support of A is \({{\mathrm{supp}}}(A)=\{x\in X|\mu _A^-(x)\ne 0\}\) and the core of A is \({{\mathrm{core}}}(A)=\{x\in X | \mu _A^-(x)=1\}\). The support length is \(s(A)=|{{\mathrm{supp}}}(A)|\) and the core length is \(c(A)=|{{\mathrm{core}}}(A)|\). The height of A is \(\displaystyle h(A)=\max \{\mu _A (x)|x\in X\}=[\max \{\mu _A^-(x)\}, \max \{\mu _A^+(x)\}], \forall x\in X\). Let \(F=\{A_1, A_2, \ldots , A_n\}\) be a finite family of interval-valued fuzzy subsets on a set X. The fuzzy intersection of two interval-valued fuzzy sets (IVFSs) \(A_1\) and \(A_2\) is an interval-valued fuzzy set defined by $$\begin{aligned} A_1\cap A_2= \left\{ \left( x, \left[ \min \{\mu _{A_1}^-(x), \mu _{A_2}^-(x)\},\min \{\mu _{A_1}^+(x),\mu _{A_2}^+(x)\}\right] \right) {:}\,x\in X\right\} . \end{aligned}$$ The fuzzy union of two IVFSs \(A_1\) and \(A_2\) is a IVFS defined by $$\begin{aligned} A_1\cup A_2= \left\{ \left( x, \left[ \max \{\mu _{A_1}^-(x), \mu _{A_2}^-(x)\},\max \{\mu _{A_1}^+(x),\mu _{A_2}^+(x)\}\right] \right) {:}\,x\in X\right\} \end{aligned}$$ Fuzzy out-neighbourhood of a vertex \(v\in V\) of an interval-valued fuzzy directed graph (IVFDG) \(\overrightarrow{D}=(V,A,\overrightarrow{B})\) is the IVFS \({\mathcal {N}}^+(v)=(X_v^+, m_v^+)\), where \(X_v^+=\{u{:}\, \mu _B(\overrightarrow{v,u})>0\}\) and \(m_v^+{:}\,X_v^+\rightarrow [0,1]\times [0,1]\) defined by \(m_v^+=\mu _B(\overrightarrow{v,u})=[\mu _B^-(\overrightarrow{v,u}), \mu _B^+(\overrightarrow{v,u})]\) Here, B is an interval-valued fuzzy relation on a set X, is denoted by \(\mu _B{:}\,X\times X \rightarrow [0,1] \times [0,1]\) such that $$\begin{aligned}&\mu _B^-(x,y)\le \min \left\{ \mu _A^-(x), \mu _A^-(y)\right\} \\&\mu _B^+(x,y)\le \min \left\{ \mu _A^+(x), \mu _A^+(y)\right\} \end{aligned}$$ An 
interval-valued fuzzy graph of a graph \(G^*=(V,E)\) is a fuzzy graph \(G=(V, A, B)\), where \(A=[\mu _A^-, \mu _A^+]\) is an interval-valued fuzzy set on V and \(B=[\mu _B^-, \mu _B^+]\) is a symmetric interval-valued fuzzy relation on E. An interval-valued fuzzy digraph \(\overrightarrow{G}=(V, A, \overrightarrow{B})\) is an interval-valued fuzzy graph, where the fuzzy relation \(\overrightarrow{B}\) is antisymmetric. An interval-valued fuzzy graph \(\xi = (A,B)\) is said to be complete interval-valued fuzzy graph if \(\mu ^-(x,y)= \min \{\sigma ^-(x),\sigma ^-(y)\}\) and \(\mu ^+(x,y)=\) \(\min\) \(\{\sigma ^+(x),\) \(\sigma ^+(y)\}\), \(\forall x,y\in V\). An interval-valued fuzzy graph is defined to be bipartite, if there exists two sets \(V_1\) and \(V_2\) such that the sets \(V_1\) and \(V_2\) are partitions of the vertex set V, where \(\mu ^+(u,v)=0\) if \(u,v\in V_1\) or \(u, v \in V_2\) and \(\mu ^+(v_1, v_2) > 0\) if \(v_1\in V_1\) (or \(V_2\)) and \(v_2 \in V_2\) (or \(V_1\)). 
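As a small illustration of the completeness condition, the sketch below (with toy membership values, not taken from the text) checks whether an interval-valued fuzzy graph satisfies \(\mu ^-(x,y)=\min \{\sigma ^-(x),\sigma ^-(y)\}\) and \(\mu ^+(x,y)=\min \{\sigma ^+(x),\sigma ^+(y)\}\) for every pair of vertices:

```python
# An interval-valued fuzzy graph: vertex map A and symmetric edge map B,
# with interval memberships (mu_minus, mu_plus). Toy values for illustration.
A = {'x': (0.5, 0.8), 'y': (0.4, 0.9), 'z': (0.6, 0.7)}
B = {
    frozenset({'x', 'y'}): (0.4, 0.8),
    frozenset({'x', 'z'}): (0.5, 0.7),
    frozenset({'y', 'z'}): (0.4, 0.7),
}

def is_complete(A, B):
    """Complete IVFG: both edge bounds equal the min of the vertex bounds
    for every pair of distinct vertices."""
    verts = list(A)
    for i, x in enumerate(verts):
        for y in verts[i + 1:]:
            lo, hi = B.get(frozenset({x, y}), (0.0, 0.0))
            if lo != min(A[x][0], A[y][0]) or hi != min(A[x][1], A[y][1]):
                return False
    return True

print(is_complete(A, B))  # True for the toy values above
```

Using `frozenset` keys makes the edge map symmetric by construction, matching the requirement that B be a symmetric interval-valued fuzzy relation.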
The Cartesian product (Akram and Dudek 2011) \(G_1\times G_2\) of two interval-valued fuzzy graphs \(G_1 =(V_1, A_1,B_1)\) and \(G_2 = (V_2,A_2,B_2)\) is defined as a pair \((V_1\times V_2, A_1\times A_2,B_1\times B_2)\) such that \(\left\{ \begin{array}{l} \mu _{A_1\times A_2}^-(x_1, x_2) = \min \{\mu _{A_1}^-(x_1), \mu _{A_2}^-(x_2)\}\\ \mu ^+_{A_1\times A_2}(x_1, x_2) = \min \{\mu ^+_{A_1}(x_1), \mu ^+_{A_2}(x_2)\} \end{array}\right\}\) for all \(x_1\in V_1, x_2\in V_2\), \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^-((x,x_2),(x,y_2)) = \min \{\mu _{A_1}^-(x), \mu _{B_2}^-(x_2,y_2)\}\\ \mu _{B_1\times B_2}^+((x,x_2),(x,y_2)) = \min \{\mu _{A_1}^+(x), \mu _{B_2}^+(x_2,y_2)\} \end{array}\right\}\) for all \(x\in V_1\) and \((x_2, y_2)\in E_2\), \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^-((x_1,y),(y_1,y)) = \min \{\mu _{B_1}^-(x_1,y_1), \mu _{A_2}^-(y)\}\\ \mu _{B_1\times B_2}^+((x_1,y),(y_1,y)) = \min \{\mu _{B_1}^+(x_1,y_1), \mu _{A_2}^+(y)\} \end{array}\right\}\) for all \((x_1,y_1)\in E_1\) and \(y \in V_2.\) The composition \(G_1[G_2]=(V_1\circ V_2, A_1\circ A_2, B_1\circ B_2)\) of two interval-valued fuzzy graphs \(G_1\) and \(G_2\) of the graphs \(G_1^*\) and \(G_2^*\) is defined as follows: \(\left\{ \begin{array}{l} \mu _{A_1\circ A_2}^-(x_1, x_2) = \min \{\mu _{A_1}^-(x_1), \mu _{A_2}^-(x_2)\}\\ \mu ^+_{A_1\circ A_2}(x_1, x_2) = \min \{\mu ^+_{A_1}(x_1), \mu ^+_{A_2}(x_2)\} \end{array}\right\}\) for all \(x_1\in V_1, x_2\in V_2\), \(\left\{ \begin{array}{l} \mu _{B_1\circ B_2}^-((x,x_2),(x,y_2)) = \min \{\mu _{A_1}^-(x), \mu _{B_2}^-(x_2,y_2)\}\\ \mu _{B_1\circ B_2}^+((x,x_2),(x,y_2)) = \min \{\mu _{A_1}^+(x), \mu _{B_2}^+(x_2,y_2)\} \end{array}\right\}\) for all \(x\in V_1\) and \((x_2, y_2)\in E_2\), \(\left\{ \begin{array}{l} \mu _{B_1\circ B_2}^-((x_1,y),(y_1,y)) = \min \{\mu _{B_1}^-(x_1,y_1), \mu _{A_2}^-(y)\}\\ \mu _{B_1\circ B_2}^+((x_1,y),(y_1,y)) = \min \{\mu _{B_1}^+(x_1,y_1), \mu _{A_2}^+(y)\} \end{array}\right\}\) for all 
\((x_1,y_1)\in E_1\) and \(y \in V_2,\) \(\left\{ \begin{array}{l} \mu _{B_1\circ B_2}^-((x_1,x_2),(y_1,y_2)) = \min \{\mu _{A_2}^-(x_2), \mu _{A_2}^-(y_2),\mu _{B_1}^-(x_1,y_1)\}\\ \mu _{B_1\circ B_2}^+((x_1,x_2),(y_1,y_2)) = \min \{\mu _{A_2}^+(x_2), \mu _{A_2}^+(y_2),\mu _{B_1}(x_1,y_1)\} \end{array}\right\}\) otherwise. The union \(G_1\cup G_2=(V_1\cup V_2, A_1\cup A_2, B_1\cup B_2)\) of two interval-valued fuzzy graphs \(G_1\) and \(G_2\) of the graphs \(G_1^*\) and \(G_2^*\) is defined as follows: \(\left\{ \begin{array}{l} \mu _{A_1\cup A_2}^-(x) =\mu _{A_1}^-(x) {\text { if }}\,x\in V_1 {\text { and }}\, x\notin V_2\\ \mu _{A_1\cup A_2}^-(x) =\mu _{A_2}^-(x) {\text { if }}\,x\in V_2 {\text { and }}\,x\notin V_1\\ \mu _{A_1\cup A_2}^-(x) =\max \{\mu _{A_1}^-(x), \mu _{A_2}^-(x)\}\,{\text { if }}\,x\in V_1\cap V_2. \end{array}\right.\) \(\left\{ \begin{array}{l} \mu _{A_1\cup A_2}^+(x) =\mu _{A_1}^+(x) {\text { if }}\, x\in V_1 {\text { and }}\,x\notin V_2\\ \mu _{A_1\cup A_2}^+(x) =\mu _{A_2}^+(x) {\text { if }}\,x\in V_2 {\text { and }}\,x\notin V_1\\ \mu _{A_1\cup A_2}^+(x) =\max \{\mu _{A_1}^+(x), \mu _{A_2}^+(x)\} {\text { if }}\,x\in V_1\cap V_2. \end{array}\right.\) \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^-(x,y) = \mu _{B_1}^-(x,y) {\text { if }}\,(x,y)\in E_1 {\text{and}}\,(x,y)\notin E_2\\ \mu _{B_1\times B_2}^-(x,y) = \mu _{B_2}^-(x,y) {\text{if}}\,(x,y)\in E_2 {\text{and}}\,(x,y)\notin E_1\\ \mu _{B_1\times B_2}^-(x,y) = \max \{\mu _{B_1}^-(x,y), \mu _{B_2}^-(x,y)\} {\text{if}}\,(x,y)\in E_1\cap E_2. \end{array}\right.\) \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^+(x,y) = \mu _{B_1}^+(x,y) {\text{if}}\,(x,y)\in E_1 {\text{and}}\,(x,y)\notin E_2\\ \mu _{B_1\times B_2}^+(x,y) = \mu _{B_2}^+(x,y) {\text{if}}\,(x,y)\in E_2 {\text{and}}\,(x,y)\notin E_1\\ \mu _{B_1\times B_2}^+(x,y) = \max \{\mu _{B_1}^+(x,y), \mu _{B_2}^+(x,y)\} {\text{if}}\,(x,y)\in E_1\cap E_2. 
\end{array}\right.\) The join \(G_1+G_2=(V_1+V_2, A_1+A_2, B_1+B_2)\) of two interval-valued fuzzy graphs \(G_1\) and \(G_2\) of the graphs \(G_1^*\) and \(G_2^*\) is defined as follows: \(\left\{ \begin{array}{l} \mu _{A_1+ A_2}^-(x) = (\mu _{A_1}^-\cup \mu _{A_2}^-)(x)\\ \mu _{A_1+ A_2}^+(x) = (\mu _{A_1}^+\cup \mu _{A_2}^+)(x) \end{array}\right\}\) if \(x\in V_1\cup V_2\), \(\left\{ \begin{array}{l} \mu _{B_1+ B_2}^-(x,y) = (\mu _{B_1}^-\cup \mu _{B_2}^-)(x,y)\\ \mu _{B_1+ B_2}^+(x,y) = (\mu _{B_1}^+\cup \mu _{B_2}^+)(x,y) \end{array}\right\}\) if \((x,y)\in E_1\cap E_2\), \(\left\{ \begin{array}{l} \mu _{B_1+ B_2}^-(x,y) = \min \{\mu _{A_1}^-(x), \mu _{A_2}^-(y)\}\\ \mu _{B_1+ B_2}^+(x,y) = \min \{\mu _{A_1}^+(x), \mu _{A_2}^+(y)\} \end{array}\right\}\) for all \((x,y)\in E'\), where \(E'\) is the set of edges connecting the vertices of \(V_1\) and \(V_2\). Interval-valued fuzzy \(\phi\)-tolerance competition graph In this section, the definition of interval-valued fuzzy \(\phi\)-tolerance competition graph is given and studied several properties. Definition 1 (Interval-valued fuzzy \(\phi\)-tolerance competition graph (IVFPTCG)) Let \(\phi {:}\,N\times N\rightarrow N\) be a mapping, where N is a set of natural numbers. 
Interval-valued fuzzy \(\phi\)-tolerance competition graph of an interval-valued fuzzy directed graph (IVFDG) \(\overrightarrow{D}=(V,A,\overrightarrow{B})\) is an undirected graph \(ITC_{\phi }(\overrightarrow{D}) = (V,A, B')\) such that $$\begin{aligned} \mu _{B'} (u,v) &= {} [\mu _{B'}^-(u,v), \mu _{B'}^+(u,v)]\\& = {} \left\{ \begin{array}{l} h({{\mathcal {N}}}^+(u)\cap {\mathcal {N}}^+(v)),\\ \,\quad \qquad \text{ if } c({\mathcal {N}}^+(u)\cap {\mathcal {N}}^+(v))\ge \phi \{c({\mathcal {T}}_u), c({\mathcal {T}}_v)\}\\ \frac{s({\mathcal {N}}^+(u)\cap {\mathcal {N}}^+(v))-\phi \{s({\mathcal {T}}_u), s({\mathcal {T}}_v))\}+1}{s({\mathcal {N}}^+(u)\cap {\mathcal {N}}^+(v))}\cdot h({\mathcal {N}}^+(u)\cap {\mathcal {N}}^+(v)),\\ \,\quad \qquad \text{ if } s({\mathcal {N}}^+(u)\cap {\mathcal {N}}^+(v))\ge \phi \{s({\mathcal {T}}_u), s({\mathcal {T}}_v)\}\\ 0, \,\quad \text{ otherwise. } \end{array} \right. \end{aligned}$$ where, \({\mathcal {T}}_u, {\mathcal {T}}_v\) are the fuzzy tolerances corresponding to u and v, respectively. Taking \(\phi\) as \(\min\). An example of this graph is given below. Consider an interval-valued fuzzy digraph \(\overrightarrow{G}=(V,A,\overrightarrow{B})\) shown in Fig. 2 with each vertex have membership values [1, 1]. The edge membership values are taken as $$\begin{aligned} &\mu _B(\overrightarrow{v_1,v_2})=[0.8,0.9], \quad \mu _B(\overrightarrow{v_1,v_5})=[0.7,0.8],\\ &\mu _B(\overrightarrow{v_2,v_5})=[0.6,0.8], \quad \mu _B(\overrightarrow{v_3,v_2})=[0.5,0.7],\\ &\mu _B(\overrightarrow{v_3,v_4})=[0.3,0.5], \quad \mu _B(\overrightarrow{v_4,v_1})=[0.7,0.9],\\ &\mu _B(\overrightarrow{v_5,v_3})=[0.6,0.8],\quad \mu _B(\overrightarrow{v_5,v_4})=[0.5,0.6]. \end{aligned}$$ Let core and support lengths of fuzzy tolerances \({\mathcal {T}}_1,{\mathcal {T}}_2, {\mathcal {T}}_3,{\mathcal {T}}_4,{\mathcal {T}}_5\) corresponding to the vertices \(v_1, v_2,v_3,v_4,v_5\) be 1, 1, 3, 2, 0 and 1, 2, 4, 3, 1, respectively. 
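With these tolerances fixed, the edge memberships of the example can be recomputed mechanically. The sketch below encodes the out-neighbourhoods of the digraph of Fig. 2 as dicts and applies the support-based branch of the definition with \(\phi =\min\); the core-based branch is left out because every core length \(c({\mathcal {N}}^+(u)\cap {\mathcal {N}}^+(v))\) in this example is 0 (for the pair \((v_3,v_5)\) the core branch also applies, with the same result).

```python
# Out-neighbourhoods N+(v) of the example digraph, as discrete IVFSs
# {vertex: (mu_minus, mu_plus)}; all values are from the text.
N = {
    'v1': {'v2': (0.8, 0.9), 'v5': (0.7, 0.8)},
    'v2': {'v5': (0.6, 0.8)},
    'v3': {'v2': (0.5, 0.7), 'v4': (0.3, 0.5)},
    'v4': {'v1': (0.7, 0.9)},
    'v5': {'v3': (0.6, 0.8), 'v4': (0.5, 0.6)},
}
# Support lengths of the tolerances T_1..T_5 (their cores are 1, 1, 3, 2, 0).
s_T = {'v1': 1, 'v2': 2, 'v3': 4, 'v4': 3, 'v5': 1}

def intersect(a, b):
    """Pointwise min of both interval bounds, on the common support."""
    return {x: (min(a[x][0], b[x][0]), min(a[x][1], b[x][1]))
            for x in set(a) & set(b)}

def height(a):
    return ((max(v[0] for v in a.values()), max(v[1] for v in a.values()))
            if a else (0.0, 0.0))

def edge(u, v, phi=min):
    """Support-based branch of the IVFPTCG edge membership."""
    inter = intersect(N[u], N[v])
    s = len(inter)                    # support length s(N+(u) cap N+(v))
    tol = phi(s_T[u], s_T[v])
    if s > 0 and s >= tol:
        h = height(inter)
        f = (s - tol + 1) / s
        return (f * h[0], f * h[1])
    return (0.0, 0.0)

print(edge('v1', 'v2'))   # (0.6, 0.8), as in the text
print(edge('v1', 'v3'))   # (0.5, 0.7)
print(edge('v3', 'v5'))   # (0.3, 0.5)
```

Every other vertex pair has an empty common out-neighbourhood, so all remaining edge memberships are \([0,0]\), matching Fig. 2.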
Here, it is true that \(\phi \{c({\mathcal {T}}_u), c({\mathcal {T}}_v)\}=\min \{c({\mathcal {T}}_u), c({\mathcal {T}}_v)\}\). An interval-valued fuzzy digraph and its corresponding interval-valued fuzzy \(\phi\)-tolerance competition graph. a An interval-valued fuzzy digraph, b interval-valued fuzzy ϕ-tolerance competition graph Based on this consideration, the following computations have been made. $$\begin{aligned} {\mathcal {N}}^+(v_1)& = {} \{(v_2,[0.8,0.9]),(v_5,[0.7,0.8])\}\\ {\mathcal {N}}^+(v_2)& = {} \{(v_5,[0.6,0.8])\}\\ {\mathcal {N}}^+(v_3)& = {} \{(v_2,[0.5,0.7]),(v_4,[0.3,0.5])\}\\ {\mathcal {N}}^+(v_4)& = {} \{(v_1,[0.7,0.9])\}\\ {\mathcal {N}}^+(v_5)& = {} \{(v_3,[0.6,0.8]),(v_4,[0.5,0.6])\} \end{aligned}$$ $$\begin{aligned}&{\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_2)=\{(v_5,[0.6,0.8])\}\\&{\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_3)=\{(v_2,[0.5,0.7])\}\\&{\mathcal {N}}^+(v_3)\cap {\mathcal {N}}^+(v_5)=\{(v_4,[0.3,0.5])\} \end{aligned}$$ $$\begin{aligned}&h({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_2))=[0.6,0.8]\\&h({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_3))=[0.5,0.7]\\&h({\mathcal {N}}^+(v_3)\cap {\mathcal {N}}^+(v_5))=[0.3,0.5] \end{aligned}$$ $$\begin{aligned}&c({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_2))=0; s({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_2))=1\\&c({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_3))=0; s({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_3))=1\\&c({\mathcal {N}}^+(v_3)\cap {\mathcal {N}}^+(v_5))=0; s({\mathcal {N}}^+(v_3)\cap {\mathcal {N}}^+(v_5))=1. \end{aligned}$$ Then by the definition of interval-valued fuzzy \(\phi\)-tolerance competition graph, the vertex membership function of the interval-valued fuzzy min-tolerance competition graph is that of interval-valued fuzzy digraph shown in Fig. 2 and the edge membership values are as follows: $$\begin{aligned} \begin{array}{ll} \mu _B({v_1,v_3})=[0.5,0.7], &{}\quad \mu _B({v_1,v_2})=[0.6,0.8],\\ \mu _B({v_3,v_5})=[0.3,0.5]. 
\end{array} \end{aligned}$$ A \(\phi\)-T-edge clique cover (\(\phi\)-T-ECC) of an interval-valued fuzzy graph \({\mathcal {G}}=(V,A,B)\) with vertices \(v_1,v_2,\ldots , v_n\) is a collection \(S_1,S_2,\ldots , S_k\) of subsets of V such that \(\mu _B^-(v_r,v_s)>0\) if and only if at least \(\phi (c(T_r), c(T_s))\) of the sets \(S_i\), contain both \(v_r\) and \(v_s\). The size k of a smallest \(\phi\)-T-ECC of \({\mathcal {G}}\) taken over all tolerances T is the \(\phi\)-T-edge clique cover number and is denoted by \(\theta _{\phi }({\mathcal {G}})\). Let \(\phi {:}\,N\times N\rightarrow N\) be a mapping. If \(\theta _{\phi }({\mathcal {G}})\le |V|\), then there exists an interval-valued fuzzy \(\phi\) -tolerance competition graph. Let us assume that \(\theta _{\phi }({\mathcal {G}})\le |V|\) and \(S_1,S_2,\ldots , S_k (k\le n)\) be a \(\phi\)-T-ECC of an interval-valued fuzzy graph \({\mathcal {G}}\). Each \(S_i\) is defined by \(S_i=\{v_j{:}\,\mu _B^-(v_i, v_j)>0\}\). Each \(S_i\) is chosen in such a way that in the interval-valued fuzzy digraph \(\overrightarrow{{\mathcal {G}}}=(V,A,\overrightarrow{B})\), \(\mu _B^-(\overrightarrow{v_i,v_j})=\mu _{B'}^-(v_i,v_j)\) and \(\mu _B^+(\overrightarrow{v_i,v_j})=\mu _{B'}^+(v_i,v_j)\), if \(v_j\in S_i\). Now, in IVFG \({\mathcal {G}}\), either \(c({\mathcal {N}}^+(v_i)\cap {\mathcal {N}}^+(v_j))\ge \phi \{c({\mathcal {T}}_{v_i}), c({\mathcal {T}}_{v_j})\}\) or, \(s({\mathcal {N}}^+(v_i)\cap {\mathcal {N}}^+(v_j))\ge \phi \{s({\mathcal {T}}_{v_i}), s({\mathcal {T}}_{v_j})\}\) must satisfy. Hence, \({\mathcal {G}}\) is an interval-valued fuzzy \(\phi\)-tolerance competition graph. 
\(\square\) For an interval-valued fuzzy digraph \(\overrightarrow{{\mathcal {G}}}=(V,A,\overrightarrow{B})\), if there exists an interval-valued fuzzy \(\phi\)-tolerance competition graph, then \(\theta _{\phi }(\overrightarrow{{\mathcal {G}}})\le |V|=n.\) Let \({\mathcal {G}}=(V,A,B')\) be an interval-valued fuzzy \(\phi\)-tolerance competition graph of \(\overrightarrow{G}\), let \(V=\{v_1,v_2,\ldots , v_n\}\), and let \(S_i=\{v_j{:}\,\mu _{B'}^-(v_i,v_j)>0\}\). It is clear that there can be at most n sets \(S_i\). Let \({\mathcal {T}}_1,{\mathcal {T}}_2,\ldots , {\mathcal {T}}_n\) be the fuzzy tolerances associated with the vertices of V. Now, \(\mu (v_r,v_s)>0\) if and only if either \(c({\mathcal {N}}^+(v_r)\cap {\mathcal {N}}^+(v_s))\ge \phi \{c({\mathcal {T}}_{r}), c({\mathcal {T}}_{s})\}\) or \(s({\mathcal {N}}^+(v_r)\cap {\mathcal {N}}^+(v_s))\ge \phi \{s({\mathcal {T}}_{r}), s({\mathcal {T}}_{s})\}\). Thus, at most n sets \(S_1,S_2,\ldots , S_n\) form a family of \(\phi\)-T-ECC of size at most \(n=|V|\), i.e. \(\theta _{\phi }(\overrightarrow{{\mathcal {G}}})\le |V|=n.\) \(\square\) An interval-valued fuzzy \(\phi\)-tolerance competition graph \(G=(V,A,B)\) cannot be complete. Suppose G is an interval-valued fuzzy \(\phi\)-tolerance competition graph with 2 vertices, say x and y. For this graph there is no interval-valued digraph with 2 vertices having a common prey, so G cannot be complete. Now, if possible, let an IVFPTCG with 3 vertices be complete. Without loss of generality, consider the graph of Fig. 3, which is nothing but a clique of order 3. As \(\mu _B(x,y)\ne [0,0]\), x and y have a common prey, and it must be z; thus both x and y are directed to z. Again, \(\mu _B(y,z)\ne [0,0]\) implies that both y and z are directed to x. But in an IVFDG it is not possible to have the two directed edges (x, z) and (z, x) simultaneously. It follows that there is no valid IVFDG for this IVFPTCG. As every complete IVFPTCG contains a clique of order 3, there does not exist any valid IVFDG. 
Hence, any interval-valued fuzzy \(\phi\)-tolerance competition graph \(G=(V,A,B)\) cannot be complete. \(\square\) A complete IVFPTCG The interval-valued fuzzy \(\min\)-tolerance competition graph of an irregular interval-valued fuzzy digraph need not be irregular. This can be shown by a counter-example. Consider the interval-valued fuzzy digraph with 3 vertices shown in Fig. 4. Irregular interval-valued fuzzy digraph and its corresponding interval-valued fuzzy min-tolerance competition graph Suppose the core and support lengths of the fuzzy tolerances associated with the vertices of the irregular interval-valued fuzzy digraph shown in Fig. 4 are 1, 1, 1 and 1, 1, 1, respectively. The interval-valued fuzzy \(\min\)-tolerance competition graph of a regular interval-valued fuzzy digraph need not be regular. To prove this, a counter-example is given in Fig. 5. A regular interval-valued fuzzy digraph and its corresponding interval-valued fuzzy min-tolerance competition graph In Fig. 5, the regular interval-valued fuzzy digraph has degrees \(\deg (v_1)=\deg (v_2)=\cdots = \deg (v_5)=[0.7,0.9]\), but the degrees of the vertices of the interval-valued fuzzy min-tolerance competition graph of this digraph are \(\deg (v_1)=[0.4,0.5]\), \(\deg (v_2)=[0.6,0.8]\), \(\deg (v_3)=[0.2,0.3]\). Hence, it is not regular. The size of an interval-valued fuzzy graph \({\mathcal {G}}=(V,A, B)\) is denoted by \(S({\mathcal {G}})\) and is defined by $$\begin{aligned} S({\mathcal {G}})= \sum \mu _B(u,v)=\left[ \sum \mu _B^-(u,v), \sum \mu _B^+(u,v)\right] . \end{aligned}$$ Let \(\overrightarrow{{\mathcal {G}}}\) be an interval-valued fuzzy digraph and \(ITC_{\phi }(\overrightarrow{{\mathcal {G}}})\) be its interval-valued fuzzy \(\phi\)-tolerance competition graph. Then $$\begin{aligned} S(ITC_{\phi }(\overrightarrow{{\mathcal {G}}}))\le S(\overrightarrow{{\mathcal {G}}}). 
\end{aligned}$$ Let \(ITC_{\phi }(\overrightarrow{{\mathcal {G}}})=(V,A,B')\) be the interval-valued fuzzy \(\phi\)-tolerance competition graph of an interval-valued fuzzy digraph \(\overrightarrow{{\mathcal {G}}}=(V,A,\overrightarrow{B})\). Since for every triangular orientation of three vertices in \(\overrightarrow{{\mathcal {G}}}\), as shown in Fig. 6, there is at most one edge in \(ITC_{\phi }(\overrightarrow{{\mathcal {G}}})\), an interval-valued fuzzy \(\phi\)-tolerance competition graph clearly has fewer edges than the interval-valued fuzzy digraph. Now, suppose \(\mu _{B'}(v_1,v_2)>0\) in \(ITC_{\phi }(\overrightarrow{{\mathcal {G}}})\), so that \({\mathcal {N}}^+(v_1)\) and \({\mathcal {N}}^+(v_2)\) have at least one vertex in common, and suppose \(h({\mathcal {N}}^+(v_1)\cap {\mathcal {N}}^+(v_2))=[1,1]\) (the largest possible value). Then there exists at least one vertex, say \(v_i\), such that the edge membership value between \(v_1\), \(v_i\) or between \(v_2\), \(v_i\) is [1, 1]. Then \(S(\overrightarrow{{\mathcal {G}}})>[1,1]\), whereas \(S(ITC_{\phi }(\overrightarrow{{\mathcal {G}}}))\le [1,1]\). Hence, \(S(ITC_{\phi }(\overrightarrow{{\mathcal {G}}}))\le S(\overrightarrow{{\mathcal {G}}}).\) \(\square\) Let \(C_1,C_2,\ldots , C_p\) be the cliques of order 3 of the underlying undirected crisp graph of an IVFDG \(\overrightarrow{G}=(V,A,\overrightarrow{B})\) such that \(C_1\cup C_2\cup \cdots \cup C_p=V\) and \(|C_i\cap C_j|\le 1\) \(\forall i,j=1,2,\ldots , p\). Then the corresponding IVFPTCG of \(\overrightarrow{G}\) cannot have cliques of order 3 or more. From the given conditions on the clique sets, i.e. \(C_1\cup C_2\cup \cdots \cup C_p=V\) and \(|C_i\cap C_j|\le 1\) \(\forall i,j=1,2,\ldots , p\), it is clear that the interval-valued fuzzy digraph has only triangular orientations and that no two triangular orientations share a common edge. That is, the IVFDG has no orientation of the type shown in Fig. 6b; it has only orientations of the type shown in Fig. 6a. 
Since for every triangular orientation there is at most one edge in the interval-valued fuzzy \(\phi\)-tolerance competition graph, that graph does not have a clique of order 3 or more. Hence, an interval-valued fuzzy \(\phi\)-tolerance competition graph cannot have cliques of order 3 or more. \(\square\) Types of triangular orientation. a Two triangular orientations with a common edge, b two triangular orientations with no common edge If the clique number of the underlying undirected crisp graph of an interval-valued fuzzy digraph \(\overrightarrow{{\mathcal {G}}}=(V,A,\overrightarrow{B})\) is p, then the underlying crisp graph of the interval-valued fuzzy \(\phi\)-tolerance competition graph has clique number less than or equal to p. Let us assume that the maximum clique of \(\overrightarrow{{\mathcal {G}}}=(V,A,\overrightarrow{B})\) induces a subgraph \(\overrightarrow{\mathcal {G'}}\), which is also an interval-valued fuzzy directed graph. By Theorem 4, the size of an interval-valued fuzzy \(\phi\)-tolerance competition graph is always less than or equal to the size of the interval-valued fuzzy directed graph, so the clique number of the interval-valued fuzzy \(\phi\)-tolerance competition graph cannot be greater than p. Hence the theorem follows. An interval-valued fuzzy \(\phi\)-tolerance competition graph of a complete interval-valued fuzzy digraph has at most \(^nC_3\) fuzzy edges. It is obvious that for every triangular orientation there exists an edge in the IVFPTCG. Now, in a complete interval-valued fuzzy digraph, \(\mu _B^-(x,y)=\min \{\mu _A^-(x),\,\mu _A^-(y)\}\) and \(\mu _B^+(x,y)=\min \{\mu _A^+(x),\mu _A^+(y)\}\), \(\forall x, y \in V\). Hence, every vertex is adjacent to every other vertex of V, so there are at most \(^nC_3\) triangular orientations. Therefore, there exist at most \(^nC_3\) fuzzy edges in the IVFPTCG. 
\(\square\) Application of interval-valued fuzzy max-tolerance competition graph in image matching The computing world advances rapidly in this modern age, yet image matching remains a weak point: a computer cannot judge that two or more images are "likely the same", whereas a human can. Here, we present an illustrative example in which images are distorted in some way and carry distortion values (e.g., an image of an object that is 20% distorted). The values here are chosen arbitrarily; in practice they could be computed by a pixel-matching algorithm, which remains to be developed. For convenience, consider five different fonts \(A_1,A_2,A_3,A_4,A_5\) of the alphabet A, as shown in Fig. 7. Take the fonts \(A_1,A_2,A_3,A_4,A_5\) as the vertices \(v_1,v_2,v_3,v_4,v_5\), respectively, and draw an edge between two vertices if the two fonts have different distortion values (d.v.). The corresponding graph model is shown in Fig. 8. Let the distortion values of the fonts \(A_1,A_2,A_3,A_4,A_5\) be 70, 20, 50, 80, 0% respectively. This can be modeled as an interval-valued fuzzy digraph (see Fig. 8), with each edge directed towards the vertex with the minimum distortion value. The edge membership value of an edge between two vertices \(v_1\), \(v_2\) of this graph is calculated as \(\mu _B(v_1,v_2)=[\min \{\frac{\text {d.v. of }v_1}{100},\) \(\frac{\text {d.v. of }v_2}{100}\},\) \(\max \{\frac{\text {d.v. of }v_1}{100},\,\frac{\text {d.v. of }v_2}{100}\} ]\). Each font has a tolerance, i.e., the font can be distorted up to a certain percentage. Arbitrarily, let the tolerance core and tolerance support lengths of the vertices \(v_1,v_2,v_3,v_4,v_5\) be 0, 1, 0, 1, 2 and 1, 1, 1, 2, 3, respectively. The corresponding computations can be made, and the max-tolerance competition graph is obtained as shown in Fig. 
9, which shows that the fonts \(A_1,A_4\) are closely related and the closeness is approximately \((0.35-0.25)\cdot 100\%=10\%\). Different fonts of A and their distortion values Interval-valued fuzzy digraph model of image matching Interval-valued fuzzy max-tolerance competition graph of Fig. 8 Product of two IVFPTCGs and relations between them Throughout this paper, \(\theta\) is taken as the null set in crisp sense and \(\overrightarrow{G_1^*}\), \(\overrightarrow{G_2^*}\) are the crisp digraphs. The Cartesian product \(G_1\times G_2\) of two interval-valued fuzzy digraphs \(\overrightarrow{G_1} =(A_1,\overrightarrow{B_1})\) and \(\overrightarrow{G_2} = (A_2,\overrightarrow{B_2})\) of the graphs \(\overrightarrow{G^*_1} = (V_1,\overrightarrow{E_1})\) and \(\overrightarrow{G^*_2} = (V_2,\overrightarrow{E_2})\) is defined as a pair \((A_1\times A_2,\overrightarrow{B_1\times B_2})\) such that \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^-(\overrightarrow{(x,x_2),(x,y_2)}) = \min \{\mu _{A_1}^-(x), \mu _{B_2}^-(\overrightarrow{x_2,y_2})\}\\ \mu _{B_1\times B_2}^+(\overrightarrow{(x,x_2),(x,y_2)}) = \min \{\mu _{A_1}^+(x), \mu _{B_2}^+(\overrightarrow{x_2,y_2})\} \end{array}\right\}\) for all \(x\in V_1\) and \((\overrightarrow{x_2, y_2})\in E_2\), \(\left\{ \begin{array}{l} \mu _{B_1\times B_2}^-(\overrightarrow{(x_1,y),(y_1,y)}) = \min \{\mu _{B_1}^-(\overrightarrow{x_1,y_1}), \mu _{A_2}^-(y)\}\\ \mu _{B_1\times B_2}^+(\overrightarrow{(x_1,y),(y_1,y)}) = \min \{\mu _{B_1}^+(\overrightarrow{x_1,y_1}), \mu _{A_2}^+(y)\} \end{array}\right\}\) for all \((\overrightarrow{x_1,y_1})\in E_1\) and \(y \in V_2\). 
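The two edge types in the Cartesian product definition above can be sketched in Python. This is a minimal sketch under our own representation (vertex memberships `A` and edge memberships `B` as dictionaries of `(lo, hi)` intervals); the example values are hypothetical, not from the paper.

```python
def imin(a, b):
    """Componentwise minimum of two intervals (lo, hi)."""
    return (min(a[0], b[0]), min(a[1], b[1]))

def cartesian_edges(A1, B1, A2, B2):
    """Edge membership intervals of the Cartesian product of two
    interval-valued fuzzy digraphs (vertex memberships are unchanged)."""
    B = {}
    # Type 1: edges ((x, x2), (x, y2)) for x in V1 and (x2, y2) in E2,
    # with membership min{A1(x), B2(x2, y2)} taken componentwise.
    for x, ax in A1.items():
        for (x2, y2), m in B2.items():
            B[((x, x2), (x, y2))] = imin(ax, m)
    # Type 2: edges ((x1, y), (y1, y)) for (x1, y1) in E1 and y in V2,
    # with membership min{B1(x1, y1), A2(y)}.
    for (x1, y1), m in B1.items():
        for y, ay in A2.items():
            B[((x1, y), (y1, y))] = imin(m, ay)
    return B

# Tiny hypothetical example: G1 has one vertex and no edges,
# G2 has one directed edge (u, v).
A1 = {"a": (0.6, 0.9)}
B1 = {}
A2 = {"u": (0.8, 1.0), "v": (0.7, 0.9)}
B2 = {("u", "v"): (0.5, 0.8)}

print(cartesian_edges(A1, B1, A2, B2))
# {(('a', 'u'), ('a', 'v')): (0.5, 0.8)}
```

The single product edge \(((a,u),(a,v))\) receives the componentwise minimum of the vertex interval of a and the edge interval of (u, v), matching the first case of the definition.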
For any two interval-valued fuzzy directed graphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\), $$\begin{aligned} ITC_{\phi }(\overrightarrow{G_1}\times \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\times ITC_{\phi }(\overrightarrow{G_2}), \end{aligned}$$ considering tolerances \({\mathcal {T}}_{(x,y)}\) corresponding to each vertex (x, y) of \(\overrightarrow{G_1}\times \overrightarrow{G_2}\) defined by \(c({\mathcal {T}}_{(x,y)})=\min \{c({\mathcal {T}}_x),c({\mathcal {T}}_y)\}\) and \(s({\mathcal {T}}_{(x,y)})=\min \{s({\mathcal {T}}_x),s({\mathcal {T}}_y)\}\). It is easy to see from the definition of an IVFPTCG that all vertices and their membership values remain unchanged; only the fuzzy edges and their membership values change. Thus, nothing needs to be verified for the vertices. Now, by the definition of the Cartesian product of two interval-valued fuzzy directed graphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\), there are two types of edges in \(\overrightarrow{G_1}\times \overrightarrow{G_2}\). The two cases are as follows. First, suppose all edges are of type \(((x,x_2),(x,y_2))\), \(\forall x\in V_1\) and \((x_2,y_2)\in E_2\). It is clear from the definition of the Cartesian product of two directed graphs that if \(x_2, y_2\) have a common prey \(z_2\) in \(\overrightarrow{G_2}\), then \((x,x_2),(x,y_2)\) have a common prey \((x,z_2)\) in \(\overrightarrow{G_1}\times \overrightarrow{G_2}\), \(\forall x\in V_1\). It remains to show that if \(\mu _{B_2}^-(x_2,y_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_2})\), then \(\mu _{B_1\times B_2}^-((x,x_2),(x,y_2))>0\) in \(ITC_{\phi }(\overrightarrow{G_1}\) \(\times \overrightarrow{G_2})\). 
If \(\mu _{B_2}^-(x_2,y_2)\) \(>0\), then either \(c({\mathcal {N}}^+(x_2)\cap\) \({\mathcal {N}}^+(y_2))\ge\) \(\phi \{c({\mathcal {T}}_{x_2}),\) \(c({\mathcal {T}}_{y_2})\}\) or \(s({\mathcal {N}}^+(x_2)\cap\) \({\mathcal {N}}^+(y_2))\) \(\ge\) \(\phi \{s({\mathcal {T}}_{x_2}),\) \(s({\mathcal {T}}_{y_2})\}\) is true. From the previous claim, if \(z_2\) is the common prey of \(x_2, y_2\) in \(\overrightarrow{G_2}\), \((x,z_2)\) is also a common prey of \((x,x_2)\) and \((x,y_2)\) in \(\overrightarrow{G_1}\times \overrightarrow{G_2}\). Thus, $$\begin{aligned} s({\mathcal {N}}^+(x,x_2)\cap {\mathcal {N}}^+(x,y_2))& = {} s\left( {\mathcal {N}}^+(x_2)\cap {\mathcal {N}}^+(y_2)\right) \\ &\ge \phi \left( s({\mathcal {T}}_{x_2}), s({\mathcal {T}}_{y_2})\right) \\ &\ge \phi \left( \min \left\{ s({\mathcal {T}}_x),s( {\mathcal {T}}_{x_2})\right\} ,\min \left\{ s({\mathcal {T}}_x),s({\mathcal {T}}_{y_2})\right\} \right) \\& = {} \phi \left( s({\mathcal {T}}_{(x,x_2)}), s({\mathcal {T}}_{(x,y_2)})\right) . \end{aligned}$$ As, the either case is satisfied, therefore \(\mu _{B_1\times B_2}^-((x,x_2),(x,y_2))>0\). If all edges of type \(((x_1,y),(y_1,y))\), \(\forall y\in V_2\) and \((x_1,y_1)\in E_1\), then the proof is similar to above case. Hence, \(ITC_{\phi }(\overrightarrow{G_1}\times \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\times ITC_{\phi }(\overrightarrow{G_2})\) is proved. 
\(\square\) The composition \(\overrightarrow{G_1}[\overrightarrow{G_2}]=(A_1\circ A_2, \overrightarrow{B_1\circ B_2})\) of two interval-valued fuzzy digraphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\) of the graphs \(\overrightarrow{G_1^*}\) and \(\overrightarrow{G_2^*}\) is given as follows: \(\left\{ \begin{array}{l} \mu _{B_1\circ B_2}^-(\overrightarrow{(x,x_2),(x,y_2)}) = \min \{\mu _{A_1}^-(x), \mu _{B_2}^-(\overrightarrow{x_2,y_2})\}\\ \mu _{B_1\circ B_2}^+(\overrightarrow{(x,x_2),(x,y_2)}) = \min \{\mu _{A_1}^+(x), \mu _{B_2}^+(\overrightarrow{x_2,y_2})\} \end{array}\right\}\) for all \(x\in V_1\) and \((\overrightarrow{x_2, y_2})\in E_2\), \(\left\{ \begin{array}{l} \mu _{B_1\circ B_2}^-(\overrightarrow{(x_1,y),(y_1,y)}) = \min \{\mu _{B_1}^-(\overrightarrow{x_1,y_1}), \mu _{A_2}^-(y)\}\\ \mu _{B_1\circ B_2}^+(\overrightarrow{(x_1,y),(y_1,y)}) = \min \{\mu _{B_1}^+(\overrightarrow{x_1,y_1}), \mu _{A_2}^+(y)\} \end{array}\right\}\) for all \((\overrightarrow{x_1,y_1})\in E_1\) and \(y \in V_2\) \(\left\{ \begin{array}{l} \mu _{B_1\circ B_2}^-(\overrightarrow{(x_1,x_2),(y_1,y_2)}) = \min \{\mu _{A_2}^-(x_2), \mu _{A_2}^-(y_2),\mu _{B_1}^-(\overrightarrow{x_1,y_1})\}\\ \mu _{B_1\circ B_2}^+(\overrightarrow{(x_1,x_2),(y_1,y_2)}) = \min \{\mu _{A_2}^+(x_2), \mu _{A_2}^+(y_2),\mu _{B_1}(\overrightarrow{x_1,y_1})\} \end{array}\right\}\) otherwise. $$\begin{aligned} ITC_{\phi }(\overrightarrow{G_1}\circ \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\circ ITC_{\phi }(\overrightarrow{G_2}), \end{aligned}$$ considering tolerances \({\mathcal {T}}_{(x,y)}\) corresponding to each vertices (x, y) of \(\overrightarrow{G_1}\circ \overrightarrow{G_2}\) as \(c({\mathcal {T}}_{(x,y)})=\min \{c({\mathcal {T}}_x),c({\mathcal {T}}_y)\}\) and \(s({\mathcal {T}}_{(x,y)})=\min \{s({\mathcal {T}}_x),s({\mathcal {T}}_y)\}\). 
According to the same interpretation as in Theorem 8, the membership values of the vertices of \(\overrightarrow{G_1}[\overrightarrow{G_2}]\) remain unchanged under the composition \(\circ\). Now, by the definition of the composition \(\overrightarrow{G_1}[\overrightarrow{G_2}]=(A_1\circ A_2, B_1\circ B_2)\) of two interval-valued fuzzy directed graphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\), there are three types of edges in \(\overrightarrow{G_1}\circ \overrightarrow{G_2}\). The three cases are as follows. Case I : For all edges of type \(((x,x_2),(x,y_2))\), \(\forall x\in V_1\) and \((x_2,y_2)\in E_2\). It is clear from the definition of the composition of two directed graphs that if \(x_2, y_2\) have a common prey \(z_2\) in \(\overrightarrow{G_2}\), then \((x,x_2),(x,y_2)\) also have a common prey \((x,z_2)\) in \(\overrightarrow{G_1}\circ \overrightarrow{G_2}\), \(\forall x\in V_1\). Now, if \(\mu _{B_2}^-(x_2,y_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_2})\), then \(\mu _{B_1\circ B_2}^-((x,x_2),(x,y_2))>0\) in \(ITC_{\phi }(\overrightarrow{G_1}\circ \overrightarrow{G_2})\). If \(\mu _{B_2}^-(x_2,y_2)>0\), then either \(c({\mathcal {N}}^+(x_2)\cap {\mathcal {N}}^+(y_2))\ge \phi \{c({\mathcal {T}}_{x_2}), c({\mathcal {T}}_{y_2})\}\) or \(s({\mathcal {N}}^+(x_2)\cap {\mathcal {N}}^+(y_2))\ge \phi \{s({\mathcal {T}}_{x_2}), s({\mathcal {T}}_{y_2})\}\) holds. 
From the previous claim that if \(z_2\) is the common prey of \(x_2, y_2\) in \(\overrightarrow{G_2}\), \((x,z_2)\) is also a common prey of \((x,x_2)\) and \((x,y_2)\) in \(\overrightarrow{G_1}\circ \overrightarrow{G_2}\), then $$\begin{aligned} s({\mathcal {N}}^+(x,x_2)\cap {\mathcal {N}}^+(x,y_2))& = {} s({\mathcal {N}}^+(x_2)\cap {\mathcal {N}}^+(y_2))\\ &\ge \phi (s({\mathcal {T}}_{x_2}), s({\mathcal {T}}_{y_2}))\\ &\ge \phi (\min \{s({\mathcal {T}}_x),s( {\mathcal {T}}_{x_2})\},\min \{s({\mathcal {T}}_x),s({\mathcal {T}}_{y_2})\})\\& = {} \phi (s({\mathcal {T}}_{(x,x_2)}), s({\mathcal {T}}_{(x,y_2)})). \end{aligned}$$ As, the either case is satisfied, \(\mu _{B_1\circ B_2}((x,x_2),(x,y_2))>0\) is true. Case II : For all edges of type \(((x_1,y),(y_1,y))\), \(\forall y\in V_2\) and \((x_1,y_1)\in E_1\). This is similar as the Case I. Case III : For all edges of type \(((x_1,x_2),(y_1,y_2))\), where \(x_1\ne y_1\) and \(x_2\ne y_2\). In this case, \((x_1,x_2)\) and \((y_1,y_2)\) have a common prey \((z_1,z_2)\) in \(\overrightarrow{G_1}\circ \overrightarrow{G_2}\) if \(x_1, y_1\) has a common prey \(z_1\) in \(\overrightarrow{G_1}\). In the similar way as in Case I, we can obtain $$\begin{aligned} s\left( {\mathcal {N}}^+(x_1,x_2)\cap {\mathcal {N}}^+(y_1,y_2)\right)& = {} s\left( {\mathcal {N}}^+(x_1)\cap {\mathcal {N}}^+(y_1)\right) \\ &\ge \phi \left( s({\mathcal {T}}_{x_1}), s({\mathcal {T}}_{y_1})\right) \\ &\ge \phi \left( \min \{s({\mathcal {T}}_{x_1}),s( {\mathcal {T}}_{x_2})\},\min \left\{ s({\mathcal {T}}_{y_1}),s({\mathcal {T}}_{y_2})\right\} \right) \\& = {} \phi \left( s({\mathcal {T}}_{(x_1,x_2)}), s({\mathcal {T}}_{(y_1,y_2)})\right) . \end{aligned}$$ If, either case is satisfied, then \(\mu _{B_1\circ B_2}^-((x_1,x_2),(y_1,y_2))>0\) is valid. Hence, \(ITC_{\phi }(\overrightarrow{G_1}\circ \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\circ ITC_{\phi }(\overrightarrow{G_2})\) is proved. 
\(\square\) The union \(\overrightarrow{G_1}\cup \overrightarrow{G_2}=(A_1\cup A_2, \overrightarrow{B_1\cup B_2})\) of two interval-valued fuzzy digraphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\) of the graphs \(\overrightarrow{G_1^*}\) and \(\overrightarrow{G_2^*}\) is defined as follows: \(\left\{ \begin{array}{l} \mu _{A_1\cup A_2}^-(x) =\mu _{A_1}^-(x)\ {\text{if}}\ x\in V_1\ {\text{and}}\ x\notin V_2\\ \mu _{A_1\cup A_2}^-(x) =\mu _{A_2}^-(x)\ {\text{if}}\ x\in V_2\ {\text{and}}\ x\notin V_1\\ \mu _{A_1\cup A_2}^-(x) =\max \{\mu _{A_1}^-(x), \mu _{A_2}^-(x)\}\ {\text{if}}\ x\in V_1\cap V_2. \end{array}\right.\) \(\left\{ \begin{array}{l} \mu _{A_1\cup A_2}^+(x) =\mu _{A_1}^+(x)\ {\text{if}}\ x\in V_1\ {\text{and}}\ x\notin V_2\\ \mu _{A_1\cup A_2}^+(x) =\mu _{A_2}^+(x)\ {\text{if}}\ x\in V_2\ {\text{and}}\ x\notin V_1\\ \mu _{A_1\cup A_2}^+(x) =\max \{\mu _{A_1}^+(x), \mu _{A_2}^+(x)\}\ {\text{if}}\ x\in V_1\cap V_2. \end{array}\right.\) \(\left\{ \begin{array}{l} \mu _{B_1\cup B_2}^-(\overrightarrow{x,y}) = \mu _{B_1}^-(\overrightarrow{x,y})\ {\text{if}}\ (\overrightarrow{x,y})\in E_1\ {\text{and}}\ (\overrightarrow{x,y})\notin E_2\\ \mu _{B_1\cup B_2}^-(\overrightarrow{x,y}) = \mu _{B_2}^-(\overrightarrow{x,y})\ {\text{if}}\ (\overrightarrow{x,y})\in E_2\ {\text{and}}\ (\overrightarrow{x,y})\notin E_1\\ \mu _{B_1\cup B_2}^-(\overrightarrow{x,y}) = \max \{\mu _{B_1}^-(\overrightarrow{x,y}), \mu _{B_2}^-(\overrightarrow{x,y})\}\ {\text{if}}\ (\overrightarrow{x,y})\in E_1\cap E_2. \end{array}\right.\) \(\left\{ \begin{array}{l} \mu _{B_1\cup B_2}^+(\overrightarrow{x,y}) = \mu _{B_1}^+(\overrightarrow{x,y})\ {\text{if}}\ (\overrightarrow{x,y})\in E_1\ {\text{and}}\ (\overrightarrow{x,y})\notin E_2\\ \mu _{B_1\cup B_2}^+(\overrightarrow{x,y}) = \mu _{B_2}^+(\overrightarrow{x,y})\ {\text{if}}\ (\overrightarrow{x,y})\in E_2\ {\text{and}}\ (\overrightarrow{x,y})\notin E_1\\ \mu _{B_1\cup B_2}^+(\overrightarrow{x,y}) = \max \{\mu _{B_1}^+(\overrightarrow{x,y}), \mu _{B_2}^+(\overrightarrow{x,y})\}\ {\text{if}}\ (\overrightarrow{x,y})\in E_1\cap E_2. \end{array}\right.\) Theorem 10 $$\begin{aligned} ITC_{\phi }(\overrightarrow{G_1}\cup \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\cup ITC_{\phi }(\overrightarrow{G_2}). \end{aligned}$$ There are four cases as follows. Case I : \(V_1\cap V_2=\theta\). In this case, \(\overrightarrow{G_1}\cup \overrightarrow{G_2}\) is a disconnected interval-valued fuzzy directed graph with two components \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\). Thus, \(ITC_{\phi }(\overrightarrow{G_1}\cup \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\cup ITC_{\phi }(\overrightarrow{G_2})\) holds trivially. Case II : \(V_1\cap V_2\ne \theta\), \((x_1,x_2)\in E_1\) and \((x_1,x_2)\notin E_2\). Here \(\mu _{B_1\cup B_2}^-(x_1,x_2)=\mu _{B_1}^-(x_1,x_2)\), and it is obvious that if \(\mu _{B_1}^-(x_1,x_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_1})\), then \(\mu _{B_1\cup B_2}^-(x_1,x_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_1}\cup \overrightarrow{G_2})\). Case III : \(V_1\cap V_2\ne \theta\), \((x_1,x_2)\notin E_1\) and \((x_1,x_2)\in E_2\). This is similar to Case II. Case IV : \(V_1\cap V_2\ne \theta\), \((x_1,x_2)\in E_1\cap E_2\). In this case, suppose \(x_1\) and \(x_2\) have a common prey \(y_1\) in \(\overrightarrow{G_1}\) and \(y_2\) in \(\overrightarrow{G_2}\). 
This shows that \(s({\mathcal {N}}^+(x_1)\cap {\mathcal {N}}^+(x_2))\) in \(\overrightarrow{G_1}\cup \overrightarrow{G_2}\) is greater than or equal to \(s({\mathcal {N}}^+(x_1)\cap {\mathcal {N}}^+(x_2))\) in \(\overrightarrow{G_1}\) or \(\overrightarrow{G_2}\). Hence, it can be found that if \(\mu _{B_1}^-(x_1,x_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_1})\) and \(\mu _{B_2}^-(x_1,x_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_2})\), then \(\mu _{B_1\cup B_2}^-(x_1,x_2)>0\) in \(ITC_{\phi }(\overrightarrow{G_1}\cup \overrightarrow{G_2})\). Hence, \(ITC_{\phi }(\overrightarrow{G_1}\cup \overrightarrow{G_2})= ITC_{\phi }(\overrightarrow{G_1})\cup ITC_{\phi }(\overrightarrow{G_2})\) is proved. \(\square\) The join \(\overrightarrow{G_1}+\overrightarrow{G_2}=(A_1+A_2, \overrightarrow{B_1+B_2})\) of two interval-valued fuzzy digraphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\) of the graphs \(\overrightarrow{G_1^*}\) and \(\overrightarrow{G_2^*}\) is defined as follows: \(\left\{ \begin{array}{l} \mu _{B_1+ B_2}^-(\overrightarrow{x,y}) = (\mu _{B_1}^-\cup \mu _{B_2}^-)(\overrightarrow{x,y})\\ \mu _{B_1+ B_2}^+(\overrightarrow{x,y}) = (\mu _{B_1}^+\cup \mu _{B_2}^+)(\overrightarrow{x,y}) \end{array}\right\}\) if \((\overrightarrow{x,y})\in E_1\cap E_2\), \(\left\{ \begin{array}{l} \mu _{B_1+ B_2}^-(\overrightarrow{x,y}) = \min \{\mu _{A_1}^-(x), \mu _{A_2}^-(y)\}\\ \mu _{B_1+ B_2}^+(\overrightarrow{x,y}) = \min \{\mu _{A_1}^+(x), \mu _{A_2}^+(y)\} \end{array}\right\}\) for all \((\overrightarrow{x,y})\in E'\), where \(E'\) is the set of edges connecting the vertices (nodes) of \(V_1\) and \(V_2\). 
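The edge rule of the union defined earlier (keep an edge's membership when it lies in only one graph, take the componentwise maximum when it lies in both) can be sketched as follows. The dictionary representation and example values are our own assumptions.

```python
def union_edges(B1, B2):
    """Edge memberships of the union of two interval-valued fuzzy
    digraphs, with edges as keys and (lo, hi) intervals as values."""
    B = dict(B1)                      # edges only in E1 keep B1's value
    for e, (lo, hi) in B2.items():
        if e in B:                    # edge in E1 and E2: componentwise max
            B[e] = (max(B[e][0], lo), max(B[e][1], hi))
        else:                         # edge only in E2 keeps B2's value
            B[e] = (lo, hi)
    return B

# Hypothetical example: one shared edge (x, y) and one edge only in G1.
B1 = {("x", "y"): (0.4, 0.6), ("y", "z"): (0.3, 0.5)}
B2 = {("x", "y"): (0.5, 0.55)}

print(union_edges(B1, B2))
# {('x', 'y'): (0.5, 0.6), ('y', 'z'): (0.3, 0.5)}
```

The shared edge (x, y) ends up with \([\max(0.4,0.5), \max(0.6,0.55)]=[0.5,0.6]\), in line with the third case of the definition.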
For any two interval-valued fuzzy directed graphs \(\overrightarrow{G_1}\) and \(\overrightarrow{G_2}\), \(ITC_{\phi }(\overrightarrow{G_1}+ \overrightarrow{G_2})\) has fewer edges than \(ITC_{\phi }(\overrightarrow{G_1})+ ITC_{\phi }(\overrightarrow{G_2}).\) In \(ITC_{\phi }(\overrightarrow{G_1})+ITC_{\phi }(\overrightarrow{G_2})\), \((\mu _{B_1}^-+\mu _{B_2}^-)(x_1,x_2)>0\) holds for all \(x_1\in V_1\) and \(x_2\in V_2\). But in \(\overrightarrow{G_1}+\overrightarrow{G_2}\), if \(x_1\) and \(x_2\) have no common prey, then \(\mu _{B_1+B_2}^-(x_1,x_2)=0\) for all such \(x_1\in V_1\) and \(x_2\in V_2\). Thus, for all \(x_1, x_2\in V_1 \cup V_2\), \(\mu _{B_1+B_2}^-(x_1,x_2)=0<(\mu _{B_1}^-+\mu _{B_2}^-)(x_1,x_2)\) always holds. Hence, the result follows. \(\square\) Insights of this study Interval-valued fuzzy \(\phi\)-tolerance competition graphs are introduced. Real-life competitions in a food web are well represented by interval-valued fuzzy \(\phi\)-tolerance competition graphs. An application of the fuzzy \(\phi\)-tolerance competition graph to image matching is provided; in particular, the interval-valued fuzzy max-tolerance competition graph is used, and distorted images are matched for computer use. Products of two IVFPTCGs and the relations between them are defined. These results develop the theory of interval-valued fuzzy graphs. Some important results (Theorems 2, 3, 5, 9, 10) are proved. By adding more uncertainty to the fuzzy \(\phi\)-tolerance competition graph, the interval-valued fuzzy \(\phi\)-tolerance competition graph was introduced here, and some interesting properties were investigated; in particular, the IVFPTCG of an IVFDG behaves like a homomorphic function under some operations. Generally, competition graphs represent competitions in food webs, but they can also be used in any competitive system. 
These competitive systems can be represented by bipolar fuzzy graphs, intuitionistic fuzzy graphs, etc., but interval-valued fuzzy sets are well suited to representing uncertainty. An application of the IVFPTCG to image matching was illustrated. The concept can also be applied in various fields such as database management systems, network design, neural networks, and image searching in computer applications. The authors contributed equally to each part of the paper. All authors read and approved the final manuscript. The authors are grateful to the Editor in Chief and the honorable reviewers of the journal SpringerPlus for their suggestions to improve the quality and presentation of the paper. Department of Mathematics, Khanpur Gangche High School (H.S.), Khanpur, Pandua, India: Tarasankar Pramanik. Department of Industrial and Management Engineering, Hanyang University, Ansan, Gyeonggi, 15588, South Korea: Sovan Samanta & Biswajit Sarkar. Department of Applied Mathematics with Oceanology and Computer Programming, Vidyasagar University, Midnapore, 721102, India: Madhumangal Pal. Department of Mathematics, Raja N.L. Khan Women's College, Midnapore, 721102, India: Sukumar Mondal. Correspondence to Biswajit Sarkar. Tarasankar Pramanik and Sovan Samanta have contributed equally to this work. Pramanik, T., Samanta, S., Pal, M. et al. Interval-valued fuzzy \(\phi\)-tolerance competition graphs. SpringerPlus 5, 1981 (2016). https://doi.org/10.1186/s40064-016-3463-z Keywords: Interval-valued fuzzy graphs; Mathematics (Theoretical)
How Do Gross Profit and EBITDA Differ? By Chris B. Murphy

Gross profit and EBITDA (earnings before interest, taxes, depreciation, and amortization) each show the earnings of a company, but the two metrics calculate profit in different ways. Investors and analysts may want to look at both profit metrics to peer into the workings of a company.

What is Gross Profit?
Gross profit is the income earned by a company after deducting the direct costs of producing its products. It measures how well a company generates profit from its direct labor and direct materials. Gross profit does not include non-production costs such as costs for the corporate office; only the revenue and costs of the production facility are included.

The Formula for Gross Profit
Gross Profit = Revenue − Cost of Goods Sold

Revenue is the total amount of income earned from sales in a period. Revenue can also be called net sales, because discounts and deductions from returned merchandise may have been deducted from it. Revenue is considered the top-line earnings number for a company since it's located at the top of the income statement. Cost of goods sold (COGS) is the direct costs associated with producing goods, including:
Direct materials
Direct labor
Equipment costs involved in production
Utilities for the production facility

Example of Gross Profit Calculation
Below is a portion of the income statement for J.C. Penney Company Inc. (JCP) on May 5, 2018. Total revenue was $2.67 billion (highlighted in green). COGS was $1.71 billion (highlighted in red). Gross profit was $960 million for the period. As we can see from the example, gross profit does not include operating expenses such as overhead. It also doesn't include interest, taxes, depreciation, and amortization. 
Because of this, gross profit is effective if an investor wants to analyze the financial performance of revenue from production and management's ability to manage the costs involved in production. However, if the goal is to analyze operating performance while including operating expenses, EBITDA is a better financial metric.

What is EBITDA? EBITDA is one indicator of a company's financial performance and is used as a proxy for the earning potential of a business. EBITDA strips out the cost of debt capital and its tax effects by adding back interest and taxes to earnings. EBITDA also removes depreciation and amortization, which are non-cash expenses, from earnings. It also helps to show the operating performance of a company before taking into account the capital structure, such as debt financing. EBITDA can be used to analyze and compare profitability among companies and industries, as it eliminates the effects of financing and accounting decisions.

The Formula for EBITDA: EBITDA = Operating Income + Depreciation + Amortization.

Operating income is a company's profit after subtracting operating expenses, or the costs of running the daily business. Operating income helps investors separate out the earnings for the company's operating performance by excluding interest and taxes.

Example of EBITDA Calculation: Let's use the same income statement from the gross profit example for J.C. Penney above. Operating income was $3 million. Depreciation was $141 million, and the $3 million in operating income already reflects the subtraction of the $141 million in depreciation. As a result, depreciation and amortization need to be added back into the operating income number during the EBITDA calculation. EBITDA was $144 million for the period ($141 million + $3 million).
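Likewise, a minimal sketch of the EBITDA add-back, using the same income statement figures (the zero amortization line is an assumption, since no amortization is itemized in the example):

```python
# EBITDA = operating income + depreciation + amortization,
# using the same J.C. Penney income statement (millions of USD).
operating_income = 3   # already net of the depreciation charge
depreciation = 141     # added back because it is a non-cash expense
amortization = 0       # assumption: none reported in this example
ebitda = operating_income + depreciation + amortization
print(ebitda)  # 144
```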
We can see that interest expenses and taxes are not included in operating income, but instead are included in net income or the bottom line. The above example shows that the EBITDA figure of $144 million was quite different from the $960 million gross profit figure during the same period. One metric is not better than the other. Instead, they both show the profit of the company in different ways by stripping out different items. Operating expenses are removed with gross profit. Non-cash items like depreciation, as well as taxes and the capital structure or financing, are stripped out with EBITDA. EBITDA helps to strip out management decisions or possible manipulation by removing debt financing, for example, while gross profit can help analyze the production efficiency of a retailer that might have a lot of cost of goods sold, as in the case of J.C. Penney. Since depreciation is not captured in EBITDA, it has some drawbacks when analyzing a company with a significant amount of fixed assets. For example, an oil company might have large investments in property, plant, and equipment. As a result, the depreciation expense would be quite large, and with depreciation expenses removed, the earnings of the company would be inflated.
# Lecture Notes 0

## Basics of Euclidean Geometry

By $\mathbf{R}$ we shall always mean the set of real numbers. The set of all $n$-tuples of real numbers $\mathbf{R}^{n}:=\left\{\left(p^{1}, \ldots, p^{n}\right) \mid p^{i} \in \mathbf{R}\right\}$ is called the Euclidean $n$-space. So we have
$$
p \in \mathbf{R}^{n} \Longleftrightarrow p=\left(p^{1}, \ldots, p^{n}\right), \quad p^{i} \in \mathbf{R} .
$$
Let $p$ and $q$ be a pair of points (or vectors) in $\mathbf{R}^{n}$. We define $p+q:=\left(p^{1}+q^{1}, \ldots, p^{n}+q^{n}\right)$. Further, for any scalar $r \in \mathbf{R}$, we define $r p:=\left(r p^{1}, \ldots, r p^{n}\right)$. It is easy to show that the operations of addition and scalar multiplication that we have defined turn $\mathbf{R}^{n}$ into a vector space over the field of real numbers. Next we define the standard inner product on $\mathbf{R}^{n}$ by
$$
\langle p, q\rangle=p^{1} q^{1}+\ldots+p^{n} q^{n} .
$$
Note that the mapping $\langle\cdot, \cdot\rangle: \mathbf{R}^{n} \times \mathbf{R}^{n} \rightarrow \mathbf{R}$ is linear in each variable and is symmetric. The standard inner product induces a norm on $\mathbf{R}^{n}$ defined by
$$
\|p\|:=\langle p, p\rangle^{\frac{1}{2}} .
$$
If $p \in \mathbf{R}$, we usually write $|p|$ instead of $\|p\|$. The first nontrivial fact in Euclidean geometry is the following important result, which has numerous applications:

Theorem 1. (The Cauchy-Schwarz inequality) For all $p$ and $q$ in $\mathbf{R}^{n}$
$$
|\langle p, q\rangle| \leqslant\|p\|\|q\| .
$$
The equality holds if and only if $p=\lambda q$ for some $\lambda \in \mathbf{R}$.

${ }^{1}$ Last revised: August 22, 2021

Since this is such a remarkable and far-reaching result, we will include here three different proofs. The first proof is quite short and slick, but also highly nontransparent, i.e., it is not easy to see how someone could come up with it. The second proof is perhaps more reasonable, but also more advanced.
The third proof is the most elementary, but then again it is quite tricky.

Proof I (Quadratic Formula). If $p=\lambda q$ it is clear that equality holds. Otherwise, let $f(\lambda):=\langle p-\lambda q, p-\lambda q\rangle$. Then $f(\lambda)>0$. Further, note that $f(\lambda)$ may be written as a quadratic polynomial in $\lambda$:
$$
f(\lambda)=\|p\|^{2}-2 \lambda\langle p, q\rangle+\lambda^{2}\|q\|^{2} .
$$
Hence its discriminant must be negative:
$$
4\langle p, q\rangle^{2}-4\|p\|^{2}\|q\|^{2}<0,
$$
which completes the proof.

Proof II (Lagrange Multipliers). Again suppose that $p \neq \lambda q$. Then
$$
\langle p, q\rangle=\|p\|\|q\|\left\langle\frac{p}{\|p\|}, \frac{q}{\|q\|}\right\rangle .
$$
Thus it suffices to prove that for all unit vectors $\bar{p}$ and $\bar{q}$ we have
$$
|\langle\bar{p}, \bar{q}\rangle| \leq 1,
$$
and equality holds if and only if $\bar{p}= \pm \bar{q}$. This may be proved by using the method of Lagrange multipliers to find the maximum of the function $\langle x, y\rangle$ subject to the constraints $\|x\|=1$ and $\|y\|=1$. More explicitly, we need to find the critical points of
$$
\begin{aligned} f\left(x, y, \lambda_{1}, \lambda_{2}\right) & :=\langle x, y\rangle+\lambda_{1}\left(\|x\|^{2}-1\right)+\lambda_{2}\left(\|y\|^{2}-1\right) \\ & =\sum_{i=1}^{n}\left(x_{i} y_{i}+\lambda_{1} x_{i}^{2}+\lambda_{2} y_{i}^{2}\right)-\lambda_{1}-\lambda_{2} . \end{aligned}
$$
At a critical point we must have $0=\partial f / \partial x_{i}=y_{i}+2 \lambda_{1} x_{i}$, which yields that $y= \pm x$.

Proof III (Induction). First note that the case $n=1$ is trivial. For $n=2$, the proof amounts to showing that
$$
\left(p_{1} q_{1}+p_{2} q_{2}\right)^{2} \leq\left(p_{1}^{2}+p_{2}^{2}\right)\left(q_{1}^{2}+q_{2}^{2}\right) .
$$
This is easily verified by the expansion and simplification of both sides, which reduces the above inequality to $\left(p_{1} q_{2}-p_{2} q_{1}\right)^{2} \geq 0$. Now suppose that the inequality we would like to prove holds for $n$.
Then to prove this for $n+1$ note that
$$
\begin{aligned} \sum_{i=1}^{n+1} p_{i} q_{i} &=\sum_{i=1}^{n} p_{i} q_{i}+p_{n+1} q_{n+1} \\ & \leq \sqrt{\sum_{i=1}^{n} p_{i}^{2}} \sqrt{\sum_{i=1}^{n} q_{i}^{2}}+p_{n+1} q_{n+1} \\ & \leq \sqrt{\sum_{i=1}^{n} p_{i}^{2}+p_{n+1}^{2}} \sqrt{\sum_{i=1}^{n} q_{i}^{2}+q_{n+1}^{2}} \\ &=\sqrt{\sum_{i=1}^{n+1} p_{i}^{2}} \sqrt{\sum_{i=1}^{n+1} q_{i}^{2}} . \end{aligned}
$$
The first inequality above is just the inductive step, i.e., the assumption that the inequality we want to prove holds for $n$, and the second inequality above is just an application of the case $n=2$ which we established earlier.

There is yet another proof of the Cauchy-Schwarz inequality which combines ideas from the first and second proofs mentioned above, but avoids using either the quadratic formula or the Lagrange multipliers:

Exercise 2. (The simplest proof of the Cauchy-Schwarz inequality) Assume, as in the second proof above, that $\|p\|=1=\|q\|$ and note that
$$
0<\|p-\langle p, q\rangle q\|^{2}=\langle p-\langle p, q\rangle q, p-\langle p, q\rangle q\rangle
$$
whenever $p \neq \lambda q$. Expanding the right hand side yields the desired result.

The standard Euclidean distance in $\mathbf{R}^{n}$ is given by
$$
\operatorname{dist}(p, q):=\|p-q\| .
$$
An immediate application of the Cauchy-Schwarz inequality is the following:

Exercise 3. (The triangle inequality) Show that
$$
\operatorname{dist}(p, q)+\operatorname{dist}(q, r) \geqslant \operatorname{dist}(p, r)
$$
for all $p, q, r$ in $\mathbf{R}^{n}$.

By a metric on a set $X$ we mean a mapping $d: X \times X \rightarrow \mathbf{R}$ such that

1. $d(p, q) \geqslant 0$, with equality if and only if $p=q$.
2. $d(p, q)=d(q, p)$.
3. $d(p, q)+d(q, r) \geqslant d(p, r)$.

These properties are called, respectively, positive-definiteness, symmetry, and the triangle inequality. The pair $(X, d)$ is called a metric space.
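Though not part of the notes, the Cauchy-Schwarz and triangle inequalities are easy to sanity-check numerically; the sketch below tests them on random vectors (the dimension, sample count, and tolerances are arbitrary choices):

```python
import math
import random

def inner(p, q):
    # The standard inner product <p, q> = sum of p^i q^i.
    return sum(pi * qi for pi, qi in zip(p, q))

def norm(p):
    return math.sqrt(inner(p, p))

def dist(p, q):
    return norm([pi - qi for pi, qi in zip(p, q)])

random.seed(0)
for _ in range(1000):
    p = [random.uniform(-1, 1) for _ in range(5)]
    q = [random.uniform(-1, 1) for _ in range(5)]
    r = [random.uniform(-1, 1) for _ in range(5)]
    # Cauchy-Schwarz: |<p, q>| <= ||p|| ||q||
    assert abs(inner(p, q)) <= norm(p) * norm(q) + 1e-12
    # Triangle inequality: dist(p, q) + dist(q, r) >= dist(p, r)
    assert dist(p, q) + dist(q, r) >= dist(p, r) - 1e-12
```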
Using the above exercise, one immediately checks that $\left(\mathbf{R}^{n}, \operatorname{dist}\right)$ is a metric space. Geometry, in its broadest definition, is the study of metric spaces, and Euclidean geometry, in the modern sense, is the study of the metric space $\left(\mathbf{R}^{n}, \operatorname{dist}\right)$.

Finally, we define the angle between a pair of nonzero vectors in $\mathbf{R}^{n}$ by
$$
\operatorname{angle}(p, q):=\cos ^{-1} \frac{\langle p, q\rangle}{\|p\|\|q\|} .
$$
Note that the above is well defined by the Cauchy-Schwarz inequality. Now we have all the necessary tools to prove the most famous result in all of mathematics:

Exercise 4. (The Pythagorean theorem) Show that in a right triangle the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the sides.

Next let us define the angle determined by three points $p, o, q$ as
$$
\angle p o q:=\operatorname{angle}(p-o, q-o)=\cos ^{-1} \frac{\langle p-o, q-o\rangle}{\|p-o\|\|q-o\|} .
$$
Exercise 5. Suppose that $p, o, q$ lie on a line and $o$ lies between $p$ and $q$. Show that then $\angle p o q=\pi$.

We say that a set $L \subset \mathbf{R}^{n}$ is a line if there exists a pair of distinct points $p, q \in \mathbf{R}^{n}$ such that
$$
L=\{p+t(p-q) \mid t \in \mathbf{R}\} .
$$
A pair of lines in $\mathbf{R}^{2}$ are called parallel if they do not intersect:

Exercise 6. (Playfair's Axiom) Show that, in $\mathbf{R}^{2}$, through every point $p$ outside a line $L$ there passes one and only one line parallel to $L$.

Playfair's Axiom is equivalent to

Exercise 7. (Euclid's Fifth Postulate) Suppose that two parallel lines $L, L^{\prime}$ in $\mathbf{R}^{2}$ are intersected by a third line $L^{\prime \prime}$. Then the acute angles formed by these three lines are equal.

The next exercise is concerned with another cornerstone of Euclidean geometry, which may be proved using the last three exercises:

Exercise 8.
(Sum of the angles in a triangle) Show that the sum of the angles in a triangle is $\pi$ (Hint: through one of the vertices draw a line parallel to the opposite side).

The most important result in classical differential geometry is the Gauss-Bonnet theorem, which generalizes the fact proved in the last exercise to regions which lie on a curved surface.

## Lecture Notes 1

## Curves

### Definition and Examples

A (parametrized) curve (in Euclidean space) is a mapping $\alpha: I \rightarrow \mathbf{R}^{n}$, where $I$ is an interval in the real line. We also use the notation
$$
I \ni t \stackrel{\alpha}{\longmapsto} \alpha(t) \in \mathbf{R}^{n},
$$
which emphasizes that $\alpha$ sends each element of the interval $I$ to a certain point in $\mathbf{R}^{n}$. We say that $\alpha$ is (of the class of) $C^{k}$ provided that it is $k$ times continuously differentiable. We shall always assume that $\alpha$ is continuous $\left(C^{0}\right)$, and whenever we need to differentiate it we will assume that $\alpha$ is differentiable up to however many orders that we may need.

Some standard examples of curves are a line which passes through a point $p \in \mathbf{R}^{n}$, is parallel to the vector $v \in \mathbf{R}^{n}$, and has constant speed $\|v\|$,
$$
\mathbf{R} \ni t \stackrel{\alpha}{\longmapsto} p+t v \in \mathbf{R}^{n} ;
$$
a circle of radius $r$ in the plane, which is oriented counterclockwise,
$$
[0,2 \pi] \ni t \stackrel{\alpha}{\longmapsto}(r \cos (t), r \sin (t)) \in \mathbf{R}^{2} ;
$$
and the right handed helix (or corkscrew) given by
$$
\mathbf{R} \ni t \stackrel{\alpha}{\longmapsto}(r \cos (t), r \sin (t), t) \in \mathbf{R}^{3} .
$$
Other famous examples include the figure-eight curve
$$
[0,2 \pi] \ni t \stackrel{\alpha}{\longmapsto}(\sin (t), \sin (2 t)) \in \mathbf{R}^{2},
$$
the parabola
$$
\mathbf{R} \ni t \stackrel{\alpha}{\longmapsto}\left(t, t^{2}\right) \in \mathbf{R}^{2},
$$
and the cubic curve
$$
\mathbf{R} \ni t \stackrel{\alpha}{\longmapsto}\left(t, t^{2}, t^{3}\right) \in \mathbf{R}^{3} .
$$

Exercise 1. Sketch the cubic curve (Hint: First draw each of the projections into the $x y$, $y z$, and $z x$ planes).

Exercise 2. Find a formula for the curve which is traced by the motion of a fixed point on a wheel of radius $r$ rolling with constant speed on a flat surface (Hint: Add the formula for a circle to the formula for a line generated by the motion of the center of the wheel. You only need to make sure that the speed of the line correctly matches the speed of the circle).

Exercise 3. Let $\alpha: I \rightarrow \mathbf{R}^{n}$ and $\beta: J \rightarrow \mathbf{R}^{n}$ be a pair of differentiable curves. Show that
$$
(\langle\alpha(t), \beta(t)\rangle)^{\prime}=\left\langle\alpha^{\prime}(t), \beta(t)\right\rangle+\left\langle\alpha(t), \beta^{\prime}(t)\right\rangle
$$
and
$$
(\|\alpha(t)\|)^{\prime}=\frac{\left\langle\alpha(t), \alpha^{\prime}(t)\right\rangle}{\|\alpha(t)\|} .
$$
(Hint: The first identity follows immediately from the definition of the inner product, together with the ordinary product rule for derivatives. The second identity follows from the first once we recall that $\|\cdot\|:=\langle\cdot, \cdot\rangle^{1 / 2}$.)

Exercise 4. Show that if $\alpha$ has unit speed, i.e., $\left\|\alpha^{\prime}(t)\right\|=1$, then its velocity and acceleration are orthogonal, i.e., $\left\langle\alpha^{\prime}(t), \alpha^{\prime \prime}(t)\right\rangle=0$.

Exercise 5.
Show that if the position vector and velocity of a planar curve $\alpha: I \rightarrow \mathbf{R}^{2}$ are always perpendicular, i.e., $\left\langle\alpha(t), \alpha^{\prime}(t)\right\rangle=0$ for all $t \in I$, then $\alpha(I)$ lies on a circle centered at the origin of $\mathbf{R}^{2}$.

Exercise 6. Use the fundamental theorem of calculus for real valued functions to show:
$$
\alpha(b)-\alpha(a)=\int_{a}^{b} \alpha^{\prime}(t) d t .
$$

Exercise 7. Prove that
$$
\|\alpha(b)-\alpha(a)\| \leq \int_{a}^{b}\left\|\alpha^{\prime}(t)\right\| d t
$$
(Hint: Use the fundamental theorem of calculus and the Cauchy-Schwarz inequality to show that for any unit vector $u \in \mathbf{R}^{n}$,
$$
\langle\alpha(b)-\alpha(a), u\rangle=\int_{a}^{b}\left\langle\alpha^{\prime}(t), u\right\rangle d t \leq \int_{a}^{b}\left\|\alpha^{\prime}(t)\right\| d t .
$$
Then set $u:=(\alpha(b)-\alpha(a)) /\|\alpha(b)-\alpha(a)\|$.)

The previous exercise immediately yields the following theorem. Here 'sup' denotes supremum or the least upper bound.

Theorem 8 (Mean Value Theorem for curves). If $\alpha: I \rightarrow \mathbf{R}^{n}$ is a $C^{1}$ curve, then for every $t, s \in I$,
$$
\|\alpha(t)-\alpha(s)\| \leq \sup _{[t, s]}\left\|\alpha^{\prime}\right\||t-s| .
$$

### Reparametrization

We say that $\beta: J \rightarrow \mathbf{R}^{n}$ is a reparametrization of $\alpha: I \rightarrow \mathbf{R}^{n}$ provided that there exists a smooth bijection $\theta: I \rightarrow J$ such that $\alpha(t)=\beta(\theta(t))$. In other words, the following diagram commutes:

For instance $\beta(t)=(\sin (2 t), \cos (2 t))$, $0 \leq t \leq \pi$, is a reparametrization of $\alpha(t)=(\sin (t), \cos (t))$, $0 \leq t \leq 2 \pi$, with $\theta:[0,2 \pi] \rightarrow[0, \pi]$ given by $\theta(t)=t / 2$. The geometric quantities associated to a curve do not change under reparametrization. These include length and curvature, as we define below.
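The derivative identities in Exercise 3 can also be checked numerically. The following sketch (not part of the notes) compares a central finite difference of $\langle\alpha, \beta\rangle$ against $\langle\alpha', \beta\rangle + \langle\alpha, \beta'\rangle$ for two arbitrarily chosen curves:

```python
import math

# Central-difference check of the product rule for inner products:
# (<alpha, beta>)' = <alpha', beta> + <alpha, beta'>.
h = 1e-6

def alpha(t):
    return (math.cos(t), math.sin(2 * t))

def beta(t):
    return (t, math.exp(t))

def inner(p, q):
    return p[0] * q[0] + p[1] * q[1]

def deriv(curve, t):
    # componentwise central finite difference
    a, b = curve(t + h), curve(t - h)
    return ((a[0] - b[0]) / (2 * h), (a[1] - b[1]) / (2 * h))

t = 0.4
lhs = (inner(alpha(t + h), beta(t + h)) - inner(alpha(t - h), beta(t - h))) / (2 * h)
rhs = inner(deriv(alpha, t), beta(t)) + inner(alpha(t), deriv(beta, t))
assert abs(lhs - rhs) < 1e-6
```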
### Length and Arclength

By a partition $P$ of an interval $[a, b]$ we mean a collection of points $\left\{t_{0}, \ldots, t_{n}\right\}$ of $[a, b]$ such that
$$
a=t_{0}<t_{1}<\cdots<t_{n}=b .
$$
The approximation of the length of $\alpha$ with respect to $P$ is defined as
$$
\operatorname{length}[\alpha, P]:=\sum_{i=1}^{n}\left\|\alpha\left(t_{i}\right)-\alpha\left(t_{i-1}\right)\right\|,
$$
and if $\operatorname{Partition}[a, b]$ denotes the set of all partitions of $[a, b]$, then the length of $\alpha$ is given by
$$
\operatorname{length}[\alpha]:=\sup \{\operatorname{length}[\alpha, P] \mid P \in \operatorname{Partition}[a, b]\} .
$$

Exercise 9. Show that the shortest curve between any pair of points in $\mathbf{R}^{n}$ is the straight line segment joining them. (Hint: Use the triangle inequality).

We say that a curve is rectifiable if it has finite length.

Exercise* 10 (Nonrectifiable curves). Show that there exists a curve $\alpha:[0,1] \rightarrow \mathbf{R}^{2}$ which is not rectifiable (Hint: One such curve, known as the Koch curve (Figure 1), may be obtained as the limit of a sequence of curves $\alpha_{i}:[0,1] \rightarrow \mathbf{R}^{2}$ defined as follows. Let $\alpha_{0}$ trace the line segment $[0,1]$. Consider an equilateral triangle of sides $1 / 3$ whose base rests on the middle third of $[0,1]$. Deleting this middle third from the interval and the triangle yields the curve traced by $\alpha_{1}$.

Figure 1:

Repeating this procedure on each of the 4 subsegments of $\alpha_{1}$ yields $\alpha_{2}$. Similarly $\alpha_{i+1}$ is obtained from $\alpha_{i}$. You need to show that $\alpha_{i}$ converge to a (continuous) curve, which may be done using the Arzela-Ascoli theorem. It is easy to see that this limit has infinite length, because the length of $\alpha_{i}$ is $(4 / 3)^{i}$. Another example of a nonrectifiable curve $\alpha:[0,1] \rightarrow \mathbf{R}^{2}$ is given by $\alpha(t):=(t, t \sin (\pi / t))$, when $t \neq 0$, and $\alpha(t):=(0,0)$ otherwise.
The difficulty here is to show that the length is infinite.)

If a curve is $C^{1}$, then its length may be computed as the following theorem shows. Note also that the following theorem shows that a $C^{1}$ curve over a compact domain is rectifiable.

Theorem 11 (Length of $C^{1}$ curves). If $\alpha: I \rightarrow \mathbf{R}^{n}$ is a $C^{1}$ curve, then
$$
\operatorname{length}[\alpha]=\int_{I}\left\|\alpha^{\prime}(t)\right\| d t .
$$
Proof. It suffices to show that (i) length $[\alpha, P]$ is not greater than the above integral, for any $P \in \operatorname{Partition}[a, b]$, and (ii) there exists a sequence $P_{N}$ of partitions such that $\lim _{N \rightarrow \infty} \operatorname{length}\left[\alpha, P_{N}\right]$ is equal to the integral. The first part follows quickly from Exercise 7. To prove the second part, let $P_{N}$ be a partition given by $t_{i}:=a+i(b-a) / N$. Recall that, by the definition of the integral, for any $\epsilon>0$, we may choose $N$ large enough so that
$$
\left|\int_{I}\left\|\alpha^{\prime}(t)\right\| d t-\sum_{i=1}^{N}\left\|\alpha^{\prime}\left(t_{i}\right)\right\| \frac{b-a}{N}\right| \leq \frac{\epsilon}{2} .
$$
Next note that the mean value theorem for curves (Theorem 8) yields that
$$
\left|\operatorname{length}\left[\alpha, P_{N}\right]-\sum_{i=1}^{N}\left\|\alpha^{\prime}\left(t_{i}\right)\right\| \frac{b-a}{N}\right| \leq \sum_{i=1}^{N}\left(\sup _{s_{i} \in\left[t_{i-1}, t_{i}\right]}\left\|\alpha^{\prime}\left(s_{i}\right)\right\|-\left\|\alpha^{\prime}\left(t_{i}\right)\right\|\right) \frac{b-a}{N} .
$$
But, by the triangle inequality,
$$
\sup _{s_{i} \in\left[t_{i-1}, t_{i}\right]}\left\|\alpha^{\prime}\left(s_{i}\right)\right\|-\left\|\alpha^{\prime}\left(t_{i}\right)\right\| \leq \sup _{s_{i} \in\left[t_{i-1}, t_{i}\right]}\left\|\alpha^{\prime}\left(s_{i}\right)-\alpha^{\prime}\left(t_{i}\right)\right\| .
$$
Finally, since $\alpha^{\prime}$ is continuous on the closed interval $[a, b]$, we may suppose that $N$ is so large that
$$
\sup _{s_{i} \in\left[t_{i-1}, t_{i}\right]}\left\|\alpha^{\prime}\left(s_{i}\right)-\alpha^{\prime}\left(t_{i}\right)\right\| \leq \frac{\epsilon}{2(b-a)} .
$$
The last three inequalities yield that
$$
\left|\operatorname{length}\left[\alpha, P_{N}\right]-\sum_{i=1}^{N}\left\|\alpha^{\prime}\left(t_{i}\right)\right\| \frac{b-a}{N}\right| \leq \frac{\epsilon}{2} .
$$
The two $\epsilon / 2$ bounds above, together with the triangle inequality, yield that
$$
\left|\int_{I}\left\|\alpha^{\prime}(t)\right\| d t-\operatorname{length}\left[\alpha, P_{N}\right]\right| \leq \epsilon,
$$
which completes the proof.

Exercise 12. Compute the length of a circle of radius $r$, and the length of one cycle of the curve traced by a point on a circle of radius $r$ rolling on a straight line.

Exercise 13 (Invariance of length under reparametrization). Show that if $\beta$ is a reparametrization of a $C^{1}$ curve $\alpha$, then length $[\beta]=\operatorname{length}[\alpha]$, i.e., length is invariant under reparametrization (Hint: you only need to recall the chain rule together with integration by substitution.)

Let $L:=\operatorname{length}[\alpha]$. The arclength function of $\alpha$ is a mapping $s:[a, b] \rightarrow[0, L]$ given by
$$
s(t):=\int_{a}^{t}\left\|\alpha^{\prime}(u)\right\| d u .
$$
Thus $s(t)$ is the length of the subsegment of $\alpha$ which stretches from the initial time $a$ to time $t$.

Exercise 14 (Regular curves). Show that if $\alpha$ is a regular curve, i.e., $\left\|\alpha^{\prime}(t)\right\| \neq 0$ for all $t \in I$, then $s(t)$ is an invertible function, i.e., it is one-to-one (Hint: compute $s^{\prime}(t)$).

Exercise 15 (Reparametrization by arclength).
Show that every regular curve $\alpha:[a, b] \rightarrow \mathbf{R}^{n}$ may be reparametrized by arclength (Hint: Define $\beta:[0, L] \rightarrow \mathbf{R}^{n}$ by $\beta(t):=\alpha\left(s^{-1}(t)\right)$, and use the chain rule to show that $\left\|\beta^{\prime}\right\|=1$; you also need to recall that since $f\left(f^{-1}(t)\right)=t$, then, again by the chain rule, we have $\left(f^{-1}\right)^{\prime}(t)=1 / f^{\prime}\left(f^{-1}(t)\right)$ for any smooth function $f$ with nonvanishing derivative.)

### Cauchy's integral formula and curves of constant width

Let $\alpha: I \rightarrow \mathbf{R}^{2}$ be a curve and $u(\theta):=(\cos (\theta), \sin (\theta))$ be a unit vector. The projection of $\alpha$ into the line passing through the origin and parallel to $u$ is given by $\alpha_{u}(t):=\langle\alpha(t), u\rangle u$.

Exercise 16 (Cauchy's integral formula). Show that if $\alpha: I \rightarrow \mathbf{R}^{2}$ has length $L$, then the average length of the projections $\alpha_{u}$, over all directions, is $2 L / \pi$, i.e.,
$$
\frac{1}{2 \pi} \int_{0}^{2 \pi} \operatorname{length}\left[\alpha_{u(\theta)}\right] d \theta=\frac{2 L}{\pi} .
$$
(Hint: First prove this fact for the case when $\alpha$ traces a line segment. Then a limiting argument settles the general case, once you recall the definition of length.)

As an application of the above formula we may obtain a sharp inequality involving the width of closed curves. The width of a set $X \subset \mathbf{R}^{2}$ is the distance between the closest pair of parallel lines which contain $X$ in between them. For instance the width of a circle of radius $r$ is $2 r$. A curve $\alpha:[a, b] \rightarrow \mathbf{R}^{2}$ is said to be closed provided that $\alpha(a)=\alpha(b)$. We should also mention that $\alpha$ is a $C^{k}$ closed curve provided that the (one-sided) derivatives of $\alpha$ match up at $a$ and $b$.

Exercise 17 (Width and length).
Show that if $\alpha:[a, b] \rightarrow \mathbf{R}^{2}$ is a closed curve with width $w$ and length $L$, then
$$
w \leq \frac{L}{\pi} .
$$
Note that the above inequality is sharp, since for circles $w=L / \pi$. Are there other curves satisfying this property? The answer may surprise you. For any unit vector $u(\theta)$, the width of a set $X \subset \mathbf{R}^{2}$ in the direction $u$, denoted $w_{u}$, is defined as the distance between the closest pair of lines orthogonal to $u$ which contain $X$ in between them. We say that a closed curve in the plane has constant width provided that $w_{u}$ is constant in all directions.

Exercise 18. Show that if the equality in Exercise 17 holds then $\alpha$ is a curve of constant width.

The last exercise would have been insignificant if circles were the only curves of constant width, but that is not the case:

Exercise 19 (Reuleaux triangle). Consider three disks of radius $r$ whose centers are on an equilateral triangle of sides $r$, see Figure 2. Show that the curve which bounds the intersection of these disks has constant width. Also show that similar constructions for any regular polygon yield curves of constant width.

Figure 2:

It can be shown that of all curves of constant width $w$, the Reuleaux triangle has the least area. This is known as the Blaschke-Lebesgue theorem. A recent proof of this result has been obtained by Evans Harrell. Note that the Reuleaux triangle is not a $C^{1}$ regular curve, for it has sharp corners. To obtain a $C^{1}$ example of a curve of constant width, we may take a curve which is a constant distance away from the Reuleaux triangle. Further, a $C^{\infty}$ example may be constructed by taking an evolute of a deltoid, see Gray p. 177.

## Lecture Notes 2

### Isometries of the Euclidean Space

Let $M_{1}$ and $M_{2}$ be a pair of metric spaces and $d_{1}$ and $d_{2}$ be their respective metrics.
We say that a mapping $f: M_{1} \rightarrow M_{2}$ is an isometry provided that
$$
d_{1}(p, q)=d_{2}(f(p), f(q))
$$
for all pairs of points $p, q \in M_{1}$. An orthogonal transformation $A: \mathbf{R}^{n} \rightarrow \mathbf{R}^{n}$ is a linear map which preserves the inner product, i.e.,
$$
\langle A(p), A(q)\rangle=\langle p, q\rangle
$$
for all $p, q \in \mathbf{R}^{n}$. One may immediately check that an orthogonal transformation is an isometry. Conversely, we have:

Theorem 1. If $f: \mathbf{R}^{n} \rightarrow \mathbf{R}^{n}$ is an isometry, then
$$
f(p)=f(o)+A(p),
$$
where $o$ is the origin of $\mathbf{R}^{n}$ and $A$ is an orthogonal transformation.

Proof. Let
$$
\bar{f}(p):=f(p)-f(o) .
$$
We need to show that $\bar{f}$ is linear and $\langle\bar{f}(p), \bar{f}(q)\rangle=\langle p, q\rangle$. To see the latter, note that
$$
\langle x-y, x-y\rangle=\|x\|^{2}+\|y\|^{2}-2\langle x, y\rangle .
$$
Thus, using the definition of $\bar{f}$, and the assumption that $f$ is an isometry, we obtain
$$
\begin{aligned} 2\langle\bar{f}(p), \bar{f}(q)\rangle & =\|\bar{f}(p)\|^{2}+\|\bar{f}(q)\|^{2}-\|\bar{f}(p)-\bar{f}(q)\|^{2} \\ & =\|f(p)-f(o)\|^{2}+\|f(q)-f(o)\|^{2}-\|f(p)-f(q)\|^{2} \\ & =\|p\|^{2}+\|q\|^{2}-\|p-q\|^{2} \\ & =2\langle p, q\rangle . \end{aligned}
$$

${ }^{1}$ Last revised: September 17, 2004

Next note that, since $\bar{f}$ preserves the inner product, if $e_{i}$, $i=1, \ldots, n$, is an orthonormal basis for $\mathbf{R}^{n}$, then so is $\bar{f}\left(e_{i}\right)$. Further,
$$
\begin{aligned} \left\langle\bar{f}(p+q), \bar{f}\left(e_{i}\right)\right\rangle & =\left\langle p+q, e_{i}\right\rangle=\left\langle p, e_{i}\right\rangle+\left\langle q, e_{i}\right\rangle \\ & =\left\langle\bar{f}(p), \bar{f}\left(e_{i}\right)\right\rangle+\left\langle\bar{f}(q), \bar{f}\left(e_{i}\right)\right\rangle \\ & =\left\langle\bar{f}(p)+\bar{f}(q), \bar{f}\left(e_{i}\right)\right\rangle . \end{aligned}
$$
Thus it follows that
$$
\bar{f}(p+q)=\bar{f}(p)+\bar{f}(q) .
$$
Similarly, for any constant $c$,
$$
\left\langle\bar{f}(c p), \bar{f}\left(e_{i}\right)\right\rangle=\left\langle c p, e_{i}\right\rangle=\left\langle c \bar{f}(p), \bar{f}\left(e_{i}\right)\right\rangle,
$$
which in turn yields that $\bar{f}(c p)=c \bar{f}(p)$, and completes the proof that $\bar{f}$ is linear.

If $f: \mathbf{R}^{n} \rightarrow \mathbf{R}^{n}$ is an isometry with $f(o)=o$ we say that it is a rotation, and if $A=f-f(o)$ is the identity we say that $f$ is a translation. Thus another way to state the above theorem is that an isometry of the Euclidean space is the composition of a rotation and a translation. Any mapping $f: \mathbf{R}^{n} \rightarrow \mathbf{R}^{m}$ given by $f(p)=q+A(p)$, where $q \in \mathbf{R}^{m}$, and $A$ is any linear transformation, is called an affine map with translation part $q$ and linear part $A$. Thus yet another way to state the above theorem is that any isometry $f: \mathbf{R}^{n} \rightarrow \mathbf{R}^{n}$ is an affine map whose linear part is orthogonal. An isometry of Euclidean space is also referred to as a rigid motion.

Recall that if $A^{T}$ denotes the transpose of matrix $A$, then
$$
\left\langle A^{T}(p), q\right\rangle=\langle p, A(q)\rangle .
$$
This yields that if $A$ is an orthogonal transformation, then $A^{T} A$ is the identity matrix. In particular
$$
1=\operatorname{det}\left(A^{T} A\right)=\operatorname{det}\left(A^{T}\right) \operatorname{det}(A)=\operatorname{det}(A)^{2} .
$$
So $\operatorname{det}(A)= \pm 1$. If $\operatorname{det}(A)=1$, then we say that $A$ is a special orthogonal transformation, $A: \mathbf{R}^{n} \rightarrow \mathbf{R}^{n}$ is a proper rotation, and any isometry $f: \mathbf{R}^{n} \rightarrow \mathbf{R}^{n}$ given by $f(p)=q+A(p)$ is a proper rigid motion.

Exercise 2 (Isometries of $\mathbf{R}^{2}$ ).
Show that if $A: \mathbf{R}^{2} \rightarrow \mathbf{R}^{2}$ is a proper rotation, then it may be represented by a matrix of the form
$$
\left(\begin{array}{rr} \cos (\theta) & -\sin (\theta) \\ \sin (\theta) & \cos (\theta) \end{array}\right) .
$$
Further, any improper rotation is given by
$$
\left(\begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array}\right) \cdot\left(\begin{array}{rr} \cos (\theta) & -\sin (\theta) \\ \sin (\theta) & \cos (\theta) \end{array}\right) .
$$
Conclude then that any isometry of $\mathbf{R}^{2}$ is a composition of a translation, a proper rotation, and possibly a reflection with respect to the $y$-axis.

In the following exercise you may use the following fact: any continuous mapping $f: \mathbf{S}^{2} \rightarrow \mathbf{S}^{2}$ of the sphere to itself has a fixed point or else sends some point to its antipodal reflection. Alternatively, you may show that every $3 \times 3$ orthogonal matrix has a nonzero real eigenvalue.

Exercise 3 (Isometries of $\mathbf{R}^{3}$ ). (a) Show that any proper rotation $A: \mathbf{R}^{3} \rightarrow \mathbf{R}^{3}$ fixes a line $\ell$ through the origin. Further, if $\Pi$ is a plane which is orthogonal to $\ell$, then $A$ maps $\Pi$ to itself by rotating it around the point $\ell \cap \Pi$ by an angle which is the same for all such planes. (b) Show that any rotation of $\mathbf{R}^{3}$ is a composition of rotations about the $x$- and $y$-axes. (c) Find a pair of proper rotations $A_{1}, A_{2}$ which do not commute, i.e., $A_{1} \circ A_{2} \neq A_{2} \circ A_{1}$. (d) Note that any improper rotation becomes proper after multiplication by an orthogonal matrix with negative determinant. Use this fact to show that any rotation of $\mathbf{R}^{3}$ is the composition of a proper rotation with reflection through the origin, or reflection through the $x y$-plane.
(e) Conclude that any isometry of $\mathbf{R}^{3}$ is a composition of the following isometries: translations, rotations about the $x$- or $y$-axis, reflections through the origin, and reflections through the $x y$-plane. Exercise 4. Show that if $\alpha: I \rightarrow \mathbf{R}^{2}$ is a $C^{1}$ curve, then for any $p \in I$ there exists an open neighborhood $U$ of $p$ in $I$ and a rigid motion $f: \mathbf{R}^{2} \rightarrow \mathbf{R}^{2}$ such that $f \circ \alpha$ restricted to $U$ has a reparametrization $\beta: J \rightarrow \mathbf{R}^{2}$, where $J \subset \mathbf{R}$ is a neighborhood of the origin, and $\beta(t)=(t, h(t))$ for some $C^{1}$ function $h: J \rightarrow \mathbf{R}$ with $h(0)=h^{\prime}(0)=0$.

### Invariance of length under isometries

Recalling the definition of length as the limit of polygonal approximations, one immediately sees the following: Exercise 5. Show that if $\alpha:[a, b] \rightarrow \mathbf{R}^{n}$ is a rectifiable curve, and $f: \mathbf{R}^{n} \rightarrow$ $\mathbf{R}^{n}$ is an isometry, then length $[\alpha]=$ length $[f \circ \alpha]$. Recall that earlier we had shown that the length of a curve is invariant under reparametrization. The above exercise further confirms that length is indeed a 'geometric quantity'. In the case where $\alpha$ is $C^{1}$, it is useful to give also an analytic proof of the above exercise, mainly as an excuse to recall and apply some basic concepts from multivariable calculus. Let $U \subset \mathbf{R}^{n}$ be an open subset, and $f: U \rightarrow \mathbf{R}^{m}$ be a map. Note that $f$ is a list of $m$ functions of $n$ variables: $$ f(p)=f\left(p^{1}, \ldots, p^{n}\right)=\left(f^{1}\left(p^{1}, \ldots, p^{n}\right), \ldots, f^{m}\left(p^{1}, \ldots, p^{n}\right)\right) . $$ The first order partial derivatives of $f$ are given by $$ D_{j} f^{i}(p):=\lim _{h \rightarrow 0} \frac{f^{i}\left(p^{1}, \ldots, p^{j}+h, \ldots, p^{n}\right)-f^{i}\left(p^{1}, \ldots, p^{j}, \ldots, p^{n}\right)}{h} .
$$ If all the functions $D_{j} f^{i}: U \rightarrow \mathbf{R}$ exist and are continuous, then we say that $f$ is $C^{1}$. The Jacobian of $f$ at $p$ is the $m \times n$ matrix defined by $$ J_{p}(f):=\left(\begin{array}{ccc} D_{1} f^{1}(p) & \cdots & D_{n} f^{1}(p) \\ \vdots & & \vdots \\ D_{1} f^{m}(p) & \cdots & D_{n} f^{m}(p) \end{array}\right) . $$ The derivative of $f$ at $p$ is the linear transformation $D f(p): \mathbf{R}^{n} \rightarrow \mathbf{R}^{m}$ given by the above matrix, i.e., $$ (D f(p))(x):=\left(J_{p}(f)\right)(x) . $$ Exercise 6 (Derivative of linear maps). Show that if $A: \mathbf{R}^{n} \rightarrow \mathbf{R}^{m}$ is a linear map, then $$ D A(p)=A $$ for all $p \in \mathbf{R}^{n}$. In other words, for each $p \in \mathbf{R}^{n},(D A(p))(x)=A(x)$, for all $x \in \mathbf{R}^{n}$. (Hint: Let $a_{i j}, i=1 \ldots n$, and $j=1 \ldots m$, be the coefficients of the matrix representation of $A$. Then $A^{j}(p)=\sum_{i=1}^{n} a_{i j} p_{i}$.) Another basic fact is the chain rule, which states that if $f: \mathbf{R}^{m} \rightarrow \mathbf{R}^{\ell}$ and $g: \mathbf{R}^{n} \rightarrow \mathbf{R}^{m}$ are differentiable maps, then $$ D(f \circ g)(p)=D f(g(p)) \circ D g(p) . $$ Now let $\alpha: I \rightarrow \mathbf{R}^{n}$ be a $C^{1}$ curve and $f: \mathbf{R}^{n} \rightarrow \mathbf{R}^{n}$, given by $f(p)= f(o)+A(p)$, be an isometry. Then $$ \begin{aligned} \text { length }[f \circ \alpha] & =\int_{I}\|D(f \circ \alpha)(t)\| d t \\ & =\int_{I}\|D f(\alpha(t)) \circ D \alpha(t)\| d t \\ & =\int_{I}\|D A(\alpha(t)) \circ D \alpha(t)\| d t \\ & =\int_{I}\|A(D \alpha(t))\| d t \\ & =\int_{I}\|D \alpha(t)\| d t \\ & =\text { length }[\alpha] . \end{aligned} $$ The six equalities above are due respectively to (1) the definition of length, (2) the chain rule, (3) the definition of the isometry $f$, (4) Exercise 6, (5) the definition of orthogonal transformation, and (6) the definition of length applied again.
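The chain of equalities above lends itself to a quick numerical sanity check. The following sketch (not part of the notes; it assumes NumPy, and the names `poly_length`, `A`, `q` are illustrative) approximates length by a fine polygonal sum and confirms it is unchanged by a rigid motion $f(p)=q+A(p)$:

```python
import numpy as np

# Approximate the length of a curve by a fine polygonal sum, then check that
# it is unchanged by a rigid motion f(p) = q + A(p) with A a proper rotation.

def poly_length(points):
    """Sum of distances between consecutive sample points."""
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

t = np.linspace(0.0, 2.0 * np.pi, 20001)
alpha = np.column_stack([np.cos(t), np.sin(t)])  # unit circle, length 2*pi

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orthogonal, det = 1
q = np.array([3.0, -1.0])                        # translation part

image = alpha @ A.T + q                          # f(alpha(t)) = q + A(alpha(t))

print(poly_length(alpha))   # close to 2*pi
print(poly_length(image))   # same, up to floating point error
```

Any other rotation angle or translation vector should give the same agreement, since the computation only uses the orthogonality of $A$.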
### Curvature of $\mathrm{C}^{2}$ regular curves

The curvature of a curve is a measure of how fast it is turning. More precisely, it is the speed, with respect to the arclength parameter, of the unit tangent vector of the curve. The unit tangent vector, a.k.a. tangential indicatrix, or tantrix for short, of a regular curve $\alpha: I \rightarrow \mathbf{R}^{n}$ is defined as $$ T(t):=\frac{\alpha^{\prime}(t)}{\left\|\alpha^{\prime}(t)\right\|} . $$ Note that the tantrix is itself a curve with parameter ranging in $I$ and image lying on the unit sphere $\mathbf{S}^{n-1}:=\left\{x \in \mathbf{R}^{n} \mid\|x\|=1\right\}$. If $\alpha$ is parametrized with respect to arclength, i.e., $\left\|\alpha^{\prime}(t)\right\|=1$, then the curvature is given by $$ \kappa(t)=\left\|T^{\prime}(t)\right\|=\left\|\alpha^{\prime \prime}(t)\right\| \quad\left(\text { provided }\left\|\alpha^{\prime}\right\|=1\right) . $$ Thus the curvature of a road is the amount of centripetal force which you would feel if you traveled along it in a car with unit speed; the tighter the turn, the higher the curvature, as is affirmed by the following exercise: Exercise 7. Show that the curvature of a circle of radius $r$ is $\frac{1}{r}$, and the curvature of a line is zero (first you need to find arclength parametrizations for these curves). Recall that, as we showed earlier, there exists a unique way to reparametrize a curve $\alpha:[a, b] \rightarrow \mathbf{R}^{n}$ by arclength (given by $\alpha \circ s^{-1}(t)$ ). Thus the curvature does not depend on parametrizations. This together with the following exercise shows that $\kappa$ is indeed a 'geometric quantity'. Exercise 8. Show that $\kappa$ is invariant under isometries of the Euclidean space (Hint: See the computation at the end of the last subsection).
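Exercise 7 can likewise be checked numerically. The sketch below (an illustration, not part of the notes; finite differences stand in for the exact derivatives, and the helper names are invented here) estimates $\kappa=\|\alpha''\|$ for the arclength parametrization $\alpha(t)=(r\cos(t/r),\, r\sin(t/r))$ of a circle of radius $r$:

```python
import numpy as np

# The circle of radius r, parametrized by arclength, is
# alpha(t) = (r cos(t/r), r sin(t/r)); its curvature ||alpha''(t)|| should be 1/r.

def circle(t, r):
    return np.array([r * np.cos(t / r), r * np.sin(t / r)])

def curvature_unit_speed(curve, t, h=1e-4):
    # central second difference of a unit-speed curve
    a2 = (curve(t + h) - 2.0 * curve(t) + curve(t - h)) / h**2
    return np.linalg.norm(a2)

for r in (0.5, 1.0, 2.0, 10.0):
    print(r, curvature_unit_speed(lambda t: circle(t, r), 1.3))  # close to 1/r
```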
As a practical matter, we need to have a definition of curvature which works for all curves (not just those already parametrized by arclength), because it is often very difficult, or even impossible, to find explicit formulas for unit speed curves. To find a general formula for the curvature of a $C^{2}$ regular curve $\alpha: I \rightarrow \mathbf{R}^{n}$, let $T: I \rightarrow \mathbf{S}^{n-1}$ be its tantrix. Let $s: I \rightarrow[0, L]$ be the arclength function. Since, as we discussed earlier, $s$ is invertible, we may define $$ \bar{T}:=T \circ s^{-1} $$ to be a reparametrization of $T$. Then curvature may be defined as $$ \kappa(t):=\left\|\bar{T}^{\prime}(s(t))\right\| . $$ By the chain rule, $$ \bar{T}^{\prime}(t)=T^{\prime}\left(s^{-1}(t)\right) \cdot\left(s^{-1}\right)^{\prime}(t) . $$ Further recall that $\left(s^{-1}\right)^{\prime}(t)=1 /\left\|\alpha^{\prime}\left(s^{-1}(t)\right)\right\|$. Thus $$ \kappa(t)=\frac{\left\|T^{\prime}(t)\right\|}{\left\|\alpha^{\prime}(t)\right\|} . $$ Exercise 9. Use the above formula, together with the definition of $T$, to show that $$ \kappa(t)=\frac{\sqrt{\left\|\alpha^{\prime}(t)\right\|^{2}\left\|\alpha^{\prime \prime}(t)\right\|^{2}-\left\langle\alpha^{\prime}(t), \alpha^{\prime \prime}(t)\right\rangle^{2}}}{\left\|\alpha^{\prime}(t)\right\|^{3}} . $$ In particular, in $\mathbf{R}^{3}$, we have $$ \kappa(t)=\frac{\left\|\alpha^{\prime}(t) \times \alpha^{\prime \prime}(t)\right\|}{\left\|\alpha^{\prime}(t)\right\|^{3}} . $$ (Hint: The first identity follows from a straightforward computation. The second identity is an immediate result of the first via the identity $\|v \times w\|^{2}=$ $\left.\|v\|^{2}\|w\|^{2}-\langle v, w\rangle^{2}.\right)$ Exercise 10. Show that the curvature of a planar curve which satisfies the equation $y=f(x)$ is given by $$ \kappa(x)=\frac{\left|f^{\prime \prime}(x)\right|}{\left(\sqrt{1+\left(f^{\prime}(x)\right)^{2}}\right)^{3}} .
$$ (Hint: Use the parametrization $\alpha(t)=(t, f(t), 0)$, and the formula in the previous exercise.) Compute the curvatures of $y=x, x^{2}, x^{3}$, and $x^{4}$. Exercise 11. Let $\alpha, \beta:(-1,1) \rightarrow \mathbf{R}^{2}$ be a pair of $C^{2}$ curves with $\alpha(0)=$ $\beta(0)=(0,0)$. Further suppose that $\alpha$ and $\beta$ both lie on or above the $x$-axis, and $\beta$ lies higher than or at the same height as $\alpha$. Show that the curvature of $\beta$ at $t=0$ is not smaller than that of $\alpha$ at $t=0$ (Hint: Use Exercise 4 and a Taylor expansion). Exercise 12. Show that if $\alpha: I \rightarrow \mathbf{R}^{2}$ is a $C^{2}$ closed curve which is contained in a circle of radius $r$, then the curvature of $\alpha$ has to be at least $1 / r$ at some point. In particular, closed curves have a point of nonzero curvature. (Hint: Shrink the circle until it contacts the curve, and use Exercise 11). Exercise 13. Let $\alpha: I \rightarrow \mathbf{R}^{2}$ be a closed planar curve; show that $$ \text { length }[\alpha] \geq \frac{2 \pi}{\max \kappa} . $$ (Hint: Recall that the width $w$ of $\alpha$ is smaller than or equal to its length divided by $\pi$, to show that a piece of $\alpha$ should lie inside a circle of diameter at least $w$).

## Lecture Notes 3

### The general definition of curvature; Fox-Milnor's Theorem

Let $\alpha:[a, b] \rightarrow \mathbf{R}^{n}$ be a curve and $P=\left\{t_{0}, \ldots, t_{n}\right\}$ be a partition of $[a, b]$; then (the approximation of) the total curvature of $\alpha$ with respect to $P$ is defined as $$ \text { total } \kappa[\alpha, P]:=\sum_{i=1}^{n-1} \operatorname{angle}\left(\alpha\left(t_{i}\right)-\alpha\left(t_{i-1}\right), \alpha\left(t_{i+1}\right)-\alpha\left(t_{i}\right)\right), $$ and the total curvature of $\alpha$ is given by $$ \text { total } \kappa[\alpha]:=\sup \{\text { total } \kappa[\alpha, P] \mid P \in \text { Partition }[a, b]\} .
$$ Our main aim here is to prove the following observation due to Ralph Fox and John Milnor: Theorem 1 (Fox-Milnor). If $\alpha:[a, b] \rightarrow \mathbf{R}^{n}$ is a $C^{2}$ unit speed curve, then $$ \text { total } \kappa[\alpha]=\int_{a}^{b}\left\|\alpha^{\prime \prime}(t)\right\| d t . $$ This theorem implies, by the mean value theorem for integrals, that for any $t \in(a, b)$, $$ \kappa(t)=\lim _{\epsilon \rightarrow 0} \frac{1}{2 \epsilon} \operatorname{total} \kappa\left[\left.\alpha\right|_{t-\epsilon} ^{t+\epsilon}\right] . $$ The above formula may be taken as the definition of curvature for general (not necessarily $C^{2}$ ) curves. To prove the above theorem first we need to develop some basic spherical geometry. Let $$ \mathbf{S}^{n}:=\left\{p \in \mathbf{R}^{n+1} \mid\|p\|=1\right\} $$ denote the $n$-dimensional unit sphere in $\mathbf{R}^{n+1}$.

${ }^{1}$ Last revised: September 30, 2021

Define a mapping from $\mathbf{S}^{n} \times \mathbf{S}^{n}$ to $\mathbf{R}$ by $$ \operatorname{dist}_{\mathbf{S}^{n}}(p, q):=\operatorname{angle}(p, q) . $$ Exercise 2. Show that $\left(\mathbf{S}^{n}, \operatorname{dist}_{\mathbf{S}^{n}}\right)$ is a metric space. The above metric has a simple geometric interpretation described as follows. By a great circle $C \subset \mathbf{S}^{n}$ we mean the intersection of $\mathbf{S}^{n}$ with a two-dimensional plane which passes through the origin of $\mathbf{R}^{n+1}$. For any pair of points $p, q \in \mathbf{S}^{n}$, there exists a plane passing through them and the origin. When $p \neq \pm q$ this plane is given by the linear combinations of $p$ and $q$ and thus is unique; otherwise, $p, q$ and $o$ lie on a line and there exist infinitely many two-dimensional planes passing through them. Thus through every pair of points of $\mathbf{S}^{n}$ there passes a great circle, which is unique whenever $p \neq \pm q$. Exercise 3. For any pair of points $p, q \in \mathbf{S}^{n}$, let $C$ be a great circle passing through them.
If $p \neq q$, let $\ell_{1}$ and $\ell_{2}$ denote the lengths of the two arcs in $C$ determined by $p$ and $q$; then $\operatorname{dist}_{\mathbf{S}^{n}}(p, q)=\min \left\{\ell_{1}, \ell_{2}\right\}$. (Hint: Let $p^{\perp} \in C$ be a vector orthogonal to $p$; then $C$ may be parametrized as the set of points traced by the curve $p \cos (t)+p^{\perp} \sin (t)$.) Let $\alpha:[a, b] \rightarrow \mathbf{S}^{n}$ be a spherical curve, i.e., a Euclidean curve $\alpha:[a, b] \rightarrow$ $\mathbf{R}^{n+1}$ with $\|\alpha\|=1$. For any partition $P=\left\{t_{0}, \ldots, t_{n}\right\}$ of $[a, b]$, the spherical length of $\alpha$ with respect to the partition $P$ is defined as $$ \operatorname{length}_{\mathbf{S}^{n}}[\alpha, P]=\sum_{i=1}^{n} \operatorname{dist}_{\mathbf{S}^{n}}\left(\alpha\left(t_{i}\right), \alpha\left(t_{i-1}\right)\right) . $$ The norm of any partition $P$ of $[a, b]$ is defined as $$ |P|:=\max \left\{t_{i}-t_{i-1} \mid 1 \leq i \leq n\right\} . $$ If $P^{1}$ and $P^{2}$ are partitions of $[a, b]$, we say that $P^{2}$ is a refinement of $P^{1}$ provided that $P^{1} \subset P^{2}$. Exercise 4. Show that if $P^{2}$ is a refinement of $P^{1}$, then $$ \text { length }_{\mathbf{S}^{n}}\left[\alpha, P^{2}\right] \geq \text { length }_{\mathbf{S}^{n}}\left[\alpha, P^{1}\right] . $$ (Hint: Use the fact that $\operatorname{dist}_{\mathbf{S}^{n}}$ satisfies the triangle inequality, see Exc. 2). The spherical length of $\alpha$ is defined by $$ \text { length }_{\mathbf{S}^{n}}[\alpha]=\sup \left\{\operatorname{length}_{\mathbf{S}^{n}}[\alpha, P] \mid P \in \operatorname{Partition}[a, b]\right\} . $$ Lemma 5. If $\alpha:[a, b] \rightarrow \mathbf{S}^{n}$ is a unit speed spherical curve, then $$ \text { length }_{\mathbf{S}^{n}}[\alpha]=\text { length }[\alpha] \text {. } $$ Proof.
Let $P^{k}:=\left\{t_{0}^{k}, \ldots, t_{n}^{k}\right\}$ be a sequence of partitions of $[a, b]$ with $$ \lim _{k \rightarrow \infty}\left|P^{k}\right|=0, $$ and let $$ \theta_{i}^{k}:=\operatorname{dist}_{\mathbf{S}^{n}}\left(\alpha\left(t_{i}^{k}\right), \alpha\left(t_{i-1}^{k}\right)\right)=\operatorname{angle}\left(\alpha\left(t_{i}^{k}\right), \alpha\left(t_{i-1}^{k}\right)\right) $$ be the corresponding spherical distances. Then, since $\alpha$ has unit speed, $$ 2 \sin \left(\frac{\theta_{i}^{k}}{2}\right)=\left\|\alpha\left(t_{i}^{k}\right)-\alpha\left(t_{i-1}^{k}\right)\right\| \leq t_{i}^{k}-t_{i-1}^{k} \leq\left|P^{k}\right| . $$ In particular, $$ \lim _{k \rightarrow \infty} 2 \sin \left(\frac{\theta_{i}^{k}}{2}\right)=0 . $$ Now, since $\lim _{x \rightarrow 0} \sin (x) / x=1$, it follows that, for any $\epsilon>0$, there exists $N>0$ such that if $k>N$, then $$ (1-\epsilon) \theta_{i}^{k} \leq 2 \sin \left(\frac{\theta_{i}^{k}}{2}\right) \leq(1+\epsilon) \theta_{i}^{k}, $$ which yields that $$ (1-\epsilon) \text { length }_{\mathbf{S}^{n}}\left[\alpha, P^{k}\right] \leq \text { length }\left[\alpha, P^{k}\right] \leq(1+\epsilon) \text { length }_{\mathbf{S}^{n}}\left[\alpha, P^{k}\right] . $$ The above inequalities are satisfied by any $\epsilon>0$ provided that $k$ is large enough. Thus $$ \lim _{k \rightarrow \infty} \text { length }_{\mathbf{S}^{n}}\left[\alpha, P^{k}\right]=\text { length }[\alpha] . $$ Further, note that if $P$ is any partition of $[a, b]$, we may construct a sequence of partitions by successive refinements of $P$ so that $\lim _{k \rightarrow \infty}\left|P^{k}\right|=0$. By Exercise 4, length $_{\mathbf{S}^{n}}\left[\alpha, P^{k}\right] \leq$ length $_{\mathbf{S}^{n}}\left[\alpha, P^{k+1}\right]$.
Thus the above expression shows that, for any partition $P$ of $[a, b]$, $$ \text { length }_{\mathbf{S}^{n}}[\alpha, P] \leq \text { length }[\alpha] $$ The last two expressions now yield that $$ \sup \left\{\text { length }_{\mathbf{S}^{n}}[\alpha, P] \mid P \in \text { Partition }[a, b]\right\}=\operatorname{length}[\alpha], $$ which completes the proof. Exercise 6. Show that if $P^{2}$ is a refinement of $P^{1}$, then $$ \text { total } \kappa\left[\alpha, P^{2}\right] \geq \operatorname{total} \kappa\left[\alpha, P^{1}\right] . $$ Now we are ready to prove the theorem of Fox-Milnor: Proof of Theorem 1. As in the proof of the previous lemma, let $P^{k}=\left\{t_{0}^{k}, \ldots, t_{n}^{k}\right\}$ be a sequence of partitions of $[a, b]$ with $\lim _{k \rightarrow \infty}\left|P^{k}\right|=0$. Set $$ \theta_{i}^{k}:=\text { angle }\left(\alpha\left(t_{i}^{k}\right)-\alpha\left(t_{i-1}^{k}\right), \alpha\left(t_{i+1}^{k}\right)-\alpha\left(t_{i}^{k}\right)\right), $$ where $i=1, \ldots, n-1$. Further, set $$ \bar{t}_{i}^{k}:=\frac{t_{i}^{k}+t_{i-1}^{k}}{2} $$ and $$ \phi_{i}^{k}:=\text { angle }\left(\alpha^{\prime}\left(\bar{t}_{i}^{k}\right), \alpha^{\prime}\left(\bar{t}_{i+1}^{k}\right)\right) . $$ Recall that, by the previous lemma, $$ \lim _{k \rightarrow \infty} \sum_{i} \phi_{i}^{k}=\operatorname{length}_{\mathbf{S}^{n-1}}\left[\alpha^{\prime}\right]=\operatorname{length}\left[\alpha^{\prime}\right]=\int_{a}^{b}\left\|\alpha^{\prime \prime}(t)\right\| d t . 
$$ Thus to complete the proof it suffices to show that, for every $\epsilon>0$, there exists $N$ such that for all $k \geq N$, $$ \left|\theta_{i}^{k}-\phi_{i}^{k}\right| \leq \epsilon\left(t_{i+1}^{k}-t_{i-1}^{k}\right), \quad (1) $$ for then it would follow that $$ -2 \epsilon(b-a) \leq \sum_{i} \theta_{i}^{k}-\sum_{i} \phi_{i}^{k} \leq 2 \epsilon(b-a), $$ which would in turn yield $$ \lim _{k \rightarrow \infty} \text { total } \kappa\left[\alpha, P^{k}\right]=\lim _{k \rightarrow \infty} \sum_{i} \theta_{i}^{k}=\lim _{k \rightarrow \infty} \sum_{i} \phi_{i}^{k}=\int_{a}^{b}\left\|\alpha^{\prime \prime}(t)\right\| d t . $$ Now, similar to the proof of Lemma 5 , note that given any partition $P$ of $[a, b]$, we may construct by subsequent refinements a sequence of partitions $P^{k}$, with $P^{0}=P$, such that $\lim _{k \rightarrow \infty}\left|P^{k}\right|=0$. Thus the last expression, together with Exercise 6, yields that $$ \text { total } \kappa[\alpha, P] \leq \int_{a}^{b}\left\|\alpha^{\prime \prime}(t)\right\| d t . $$ The last two expressions complete the proof; so it remains to establish (1). To this end let $$ \beta_{i}^{k}:=\operatorname{angle}\left(\alpha^{\prime}\left(\bar{t}_{i}^{k}\right), \alpha\left(t_{i}^{k}\right)-\alpha\left(t_{i-1}^{k}\right)\right) . $$ By the triangle inequality for angles (Exercise 2), $$ \phi_{i}^{k} \leq \beta_{i}^{k}+\theta_{i}^{k}+\beta_{i+1}^{k}, \quad \text { and } \quad \theta_{i}^{k} \leq \beta_{i}^{k}+\phi_{i}^{k}+\beta_{i+1}^{k}, $$ which yields $$ \left|\phi_{i}^{k}-\theta_{i}^{k}\right| \leq \beta_{i}^{k}+\beta_{i+1}^{k} . $$ So to prove (1) it is enough to show that for every $\epsilon>0$, $$ \beta_{i}^{k} \leq \frac{\epsilon}{2}\left(t_{i}^{k}-t_{i-1}^{k}\right), $$ provided that $k$ is large enough. See Exercise 7. Exercise* 7. Let $\alpha:[a, b] \rightarrow \mathbf{R}^{n}$ be a $C^{2}$ curve.
For every $t, s \in[a, b], t \neq s$, define $$ f(t, s):=\text { angle }\left(\alpha^{\prime}\left(\frac{t+s}{2}\right), \alpha(t)-\alpha(s)\right) . $$ Show that $$ \lim _{t \rightarrow s} \frac{f(t, s)}{t-s}=0 . $$ In particular, if we set $f(t, t)=0$, then the resulting function $f:[a, b] \times$ $[a, b] \rightarrow \mathbf{R}$ is continuous. So, since $[a, b] \times[a, b]$ is compact, $f$ is uniformly continuous, i.e., for every $\epsilon>0$, there is a $\delta>0$ such that $\left|f(t, s)-f\left(t^{\prime}, s^{\prime}\right)\right| \leq \epsilon$ whenever $\left\|(t, s)-\left(t^{\prime}, s^{\prime}\right)\right\| \leq \delta$. Does this result hold for $C^{1}$ curves as well?

## Lecture Notes 4

### Curves of Constant Curvature

Here we show that the only curves in the plane with constant curvature are lines and circles. The case of lines occurs precisely when the curvature is zero: Exercise 1. Show that the only curves with constant zero curvature in $\mathbf{R}^{n}$ are straight lines. (Hint: We may assume that our curve $\alpha: I \rightarrow \mathbf{R}^{n}$ has unit speed. Then $\kappa=\left\|\alpha^{\prime \prime}\right\|$. So zero curvature implies that $\alpha^{\prime \prime}=0$. Integrating the last expression twice yields the desired result.) So it remains to consider the case where we have a planar curve whose curvature is equal to some nonzero constant $c$. We claim that in this case the curve has to be a circle of radius $1 / c$. To this end we introduce the following definition. If a curve $\alpha: I \rightarrow \mathbf{R}^{n}$ has nonzero curvature, the principal normal vector field of $\alpha$ is defined as $$ N(t):=\frac{T^{\prime}(t)}{\left\|T^{\prime}(t)\right\|}, $$ where $T(t):=\alpha^{\prime}(t) /\left\|\alpha^{\prime}(t)\right\|$ is the tantrix of $\alpha$ as we had defined earlier. Thus the principal normal is the tantrix of the tantrix. Exercise 2. Show that $T(t)$ and $N(t)$ are orthogonal. (Hint: Differentiate both sides of the expression $\langle T(t), T(t)\rangle=1$ ).
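Exercise 2 can be illustrated numerically as well. In the sketch below (an illustration, not from the notes; it assumes NumPy, finite-difference derivatives, and an arbitrarily chosen ellipse), the dot product of the tantrix $T$ and the principal normal $N$ comes out near zero:

```python
import numpy as np

# For a (non-unit-speed) ellipse, compute the tantrix T = alpha'/|alpha'| and
# the principal normal N = T'/|T'|, and check that they are orthogonal.

def alpha(t):
    return np.array([2.0 * np.cos(t), np.sin(t)])  # an ellipse

def deriv(f, t, h=1e-6):
    # central finite difference
    return (f(t + h) - f(t - h)) / (2.0 * h)

def tantrix(t):
    v = deriv(alpha, t)
    return v / np.linalg.norm(v)

def principal_normal(t):
    w = deriv(tantrix, t, h=1e-5)
    return w / np.linalg.norm(w)

t0 = 0.8
print(np.dot(tantrix(t0), principal_normal(t0)))  # near 0
```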
So, if $\alpha$ is a planar curve, $\{T(t), N(t)\}$ form a moving frame for $\mathbf{R}^{2}$, i.e., any element of $\mathbf{R}^{2}$ may be written as a linear combination of $T(t)$ and $N(t)$ for any choice of $t$. In particular, we may express the derivatives of $T$ and $N$ in terms of this frame.

${ }^{1}$ Last revised: December 7, 2021

The definition of $N$ already yields that, when $\alpha$ is parametrized by arclength, $$ T^{\prime}(t)=\kappa(t) N(t) . $$ To get the corresponding formula for $N^{\prime}$, first observe that $$ N^{\prime}(t)=a T(t)+b N(t) $$ for some $a$ and $b$. To find $a$ note that, since $\langle T, N\rangle=0,\left\langle T^{\prime}, N\right\rangle=-\left\langle T, N^{\prime}\right\rangle$. Thus $$ a=\left\langle N^{\prime}(t), T(t)\right\rangle=-\left\langle T^{\prime}(t), N(t)\right\rangle=-\kappa(t) . $$ Exercise 3. Show that $b=0$. (Hint: Differentiate $\langle N(t), N(t)\rangle=1$ ). So we conclude that $$ N^{\prime}(t)=-\kappa(t) T(t), $$ where we still assume that $t$ is the arclength parameter. The formulas for the derivatives may be expressed in matrix notation as $$ \left[\begin{array}{c} T(t) \\ N(t) \end{array}\right]^{\prime}=\left[\begin{array}{cc} 0 & \kappa(t) \\ -\kappa(t) & 0 \end{array}\right]\left[\begin{array}{c} T(t) \\ N(t) \end{array}\right] . $$ Now recall that our main aim here is to classify curves of constant curvature in the plane. To this end define the center of the osculating circle of $\alpha$ as $$ p(t):=\alpha(t)+\frac{1}{\kappa(t)} N(t) . $$ The circle which is centered at $p(t)$ and has radius $1 / \kappa(t)$ is called the osculating circle of $\alpha$ at time $t$. This is the circle which best approximates $\alpha$ up to the second order: Exercise 4. Check that the osculating circle of $\alpha$ is tangent to $\alpha$ at $\alpha(t)$ and has the same curvature as $\alpha$ at time $t$. Now note that if $\alpha$ is a circle, then it coincides with its own osculating circle.
In particular $p(t)$ is a fixed point (the center of the circle) and $\|\alpha(t)-p(t)\|$ is constant (the radius of the circle). Conversely: Exercise 5. Show that if $\alpha$ has constant curvature $c$, then (i) $p(t)$ is a fixed point, and (ii) $\|\alpha(t)-p(t)\|=1 / c$ (Hint: For part (i) differentiate $p(t)$; part (ii) follows immediately from the definition of $p(t)$). So we conclude that a curve of constant curvature $c \neq 0$ lies on a circle of radius $1 / c$.

### Signed Curvature and Turning Angle

As we mentioned earlier, the curvature of a curve is a measure of how fast it is turning. When the curve lies in a plane, we may assign a sign of plus or minus one to this measure depending on whether the curve is rotating counterclockwise or clockwise. Thus we arrive at a more descriptive notion of curvature for planar curves which we call signed curvature and denote by $\bar{\kappa}$. Then we have $$ |\bar{\kappa}|=\kappa \text {. } $$ To obtain a formula for $\bar{\kappa}$, for any vector $V \in \mathbf{R}^{2}$, let $J V$ be the counterclockwise rotation of $V$ by 90 degrees. More formally, $$ J V:=(0,0,1) \times V . $$ Then we set $$ \bar{\kappa}(t):=\frac{\left\langle T^{\prime}(t), J T(t)\right\rangle}{\left\|\alpha^{\prime}(t)\right\|} . $$ Exercise 6. Show that if $\alpha$ is a unit speed curve then $$ \bar{\kappa}(t)=\kappa(t)\langle N(t), J T(t)\rangle . $$ In particular, $|\bar{\kappa}|=\kappa$. Exercise 7. Compute the signed curvatures of the counterclockwise circle $\alpha(t)=(\cos t, \sin t)$, and the clockwise circle $\alpha(t)=(\cos (-t), \sin (-t))$. Exercise 8. Show that $$ \bar{\kappa}(t)=\frac{\left\langle\alpha^{\prime}(t) \times \alpha^{\prime \prime}(t),(0,0,1)\right\rangle}{\left\|\alpha^{\prime}(t)\right\|^{3}} . $$ Another simple and useful way to define the signed curvature (and the regular curvature) of a planar curve is in terms of the turning angle $\theta$, which is defined as follows.
We claim that for any planar curve $\alpha: I \rightarrow \mathbf{R}^{2}$ there exists a continuous function $\theta: I \rightarrow \mathbf{R}$ such that $$ T(t)=(\cos \theta(t), \sin \theta(t)) . $$ Note that $\cos (\theta(t))=\langle T(t),(1,0)\rangle$. Thus $$ \theta(t)=\angle(T(t),(1,0))+2 k \pi, $$ where $k$ is an integer depending on $t$. Assuming that $t$ is the arclength parameter, or that $\alpha$ has unit speed, $$ \bar{\kappa}(t)=\theta^{\prime}(t) . $$ Exercise 9. Check the above formula. Now we check that $\theta$ indeed exists. To this end note that $T$ may be thought of as a mapping from $I$ to the unit circle $\mathbf{S}^{1}$. Thus it suffices to show the following: Proposition 10. For any continuous function $T: I \rightarrow \mathbf{S}^{1}$, where $I=[a, b]$ is a compact interval, there exists a continuous function $\theta: I \rightarrow \mathbf{R}$ such that $T(t)=(\cos (\theta(t)), \sin (\theta(t)))$. Proof. Since $T$ is continuous and $I$ is compact, $T$ is uniformly continuous; this means that for every $\epsilon>0$, we may find a $\delta>0$ such that $\|T(t)-T(s)\|<\epsilon$, whenever $|t-s|<\delta$. In particular, we may set $\epsilon_{0}$ to be some constant less than one, and $\delta_{0}$ to be a corresponding constant. Now choose a partition $$ a=: x_{0} \leq x_{1} \leq \cdots \leq x_{n}:=b $$ of $[a, b]$ such that $\left|x_{i}-x_{i-1}\right|<\delta_{0}$, for $i=1, \ldots, n$. Then $T\left(\left[x_{i-1}, x_{i}\right]\right)$ does not cover $\mathbf{S}^{1}$. So we may define $\theta_{i}:\left[x_{i-1}, x_{i}\right] \rightarrow \mathbf{R}$ by setting $\theta_{i}(x)$ to be the angle in $[0,2 \pi)$, measured counterclockwise, between $T\left(x_{i-1}\right)$ and $T(x)$. Finally, choosing $\theta_{0}$ so that $T(a)=\left(\cos \theta_{0}, \sin \theta_{0}\right)$, we may define $\theta$ as $$ \theta(x):=\theta_{0}+\sum_{i=1}^{k-1} \theta_{i}\left(x_{i}\right)+\theta_{k}(x), \quad \text { if } \quad x \in\left[x_{k-1}, x_{k}\right] .
$$

### Total Signed Curvature and Winding Number

The total signed curvature of a $C^{2}$ curve $\alpha:[a, b] \rightarrow \mathbf{R}^{2}$ is defined as $$ \text { total } \bar{\kappa}[\alpha]:=\int_{a}^{b} \bar{\kappa}(t)\left\|\alpha^{\prime}(t)\right\| d t . $$ Recall that if $\alpha$ has unit speed, then $\bar{\kappa}=\theta^{\prime}$, where $\theta$ is the turning angle of $\alpha$ provided by Proposition 10. So if we set $\Delta \theta:=\theta(b)-\theta(a)$, then the fundamental theorem of calculus yields that $$ \text { total } \bar{\kappa}[\alpha]=\int_{a}^{b} \bar{\kappa}(t) d t=\int_{a}^{b} \theta^{\prime}(t) d t=\Delta \theta . $$ Note that $\Delta \theta$ is well-defined even when $\alpha$ is only $C^{1}$. Thus the total signed curvature of a $C^{1}$ curve may be defined as $$ \operatorname{total} \bar{\kappa}[\alpha]:=\Delta \theta, $$ despite the fact that the curvature of $\alpha$ may not be defined pointwise. We say that $\alpha:[a, b] \rightarrow \mathbf{R}^{2}$ is closed provided that $\alpha(a)=\alpha(b)$. In addition, if $\alpha_{+}^{\prime}(a)=\alpha_{-}^{\prime}(b)$, then $\alpha$ closes smoothly, and is called a $C^{1}$ closed curve. Then $\theta(b)=\theta(a)+2 k \pi$ for some integer $k$. So total $\bar{\kappa}[\alpha]=2 k \pi$. The integer $k$ is called the Hopf rotation index or winding number of $\alpha$. Exercise 11. (i) Compute the total signed curvature and rotation index of a circle which is oriented clockwise, and a circle which is oriented counterclockwise. (ii) Sketch the figure eight curve $(\cos t, \sin 2 t), 0 \leq t \leq 2 \pi$, and compute its total signed curvature and rotation index. We say that a closed curve $\alpha:[a, b] \rightarrow \mathbf{R}^{2}$ is simple if it is one-to-one on $[a, b)$.
Furthermore, $\alpha$ is oriented counterclockwise provided that for every $t \in[a, b]$, the counterclockwise rotation $J \alpha^{\prime}$ points into the compact region $\Omega$ bounded by $\alpha([a, b])$, i.e., there exists $\epsilon>0$ such that $\alpha(t)+\epsilon J \alpha^{\prime}(t) \in \Omega$. The following result, proved by H. Hopf, is one of the fundamental theorems in the theory of planar curves. Theorem 12 (Hopf's turning angle theorem). For any simple closed $C^{1}$ curve $\alpha:[a, b] \rightarrow \mathbf{R}^{2}$ which has counterclockwise orientation, total $\bar{\kappa}[\alpha]=2 \pi$. Hopf proved the above result using analytic methods including Green's theorem. Here we outline a more elementary proof which will illustrate that the above theorem is simply a generalization of one of the most basic results in Euclidean geometry: the sum of the angles in a triangle is $\pi$, which is equivalent to the sum of the exterior angles being $2 \pi$. First we will give another definition for total $\bar{\kappa}$ which will establish the connection between the total signed curvature and the sum of the exterior angles in a polygon. By a polygon $P$ we mean a closed curve formed by line segments joining an ordered set of points $\left(p_{0}, \ldots, p_{n}\right)$ in $\mathbf{R}^{2}$, where $p_{n}=p_{0}$, but $p_{i} \neq p_{i-1}$, for $i=1, \ldots, n$. Furthermore we assume that the vectors $p_{i-1} p_{i}$ and $p_{i} p_{i+1}$ are not parallel. Each $p_{i}$ is called a vertex of $P$. At each vertex $p_{i}, i=1 \ldots n$, we define the turning angle $\theta_{i}$ to be the angle in $(-\pi, \pi)$ determined by the vectors $p_{i-1} p_{i}$ and $p_{i} p_{i+1}$, measured in the counterclockwise direction (we set $p_{n+1}:=p_{1}$ ).
More formally, $$ \theta_{i}:=\angle\left(p_{i-1} p_{i}, p_{i} p_{i+1}\right) \operatorname{sign}\left(p_{i-1}, p_{i}, p_{i+1}\right), $$ where we set $\operatorname{sign}\left(p_{i-1}, p_{i}, p_{i+1}\right)=0$ if $p_{i-1}, p_{i}, p_{i+1}$ lie on a line; otherwise, $$ \operatorname{sign}\left(p_{i-1}, p_{i}, p_{i+1}\right):=\frac{\left\langle p_{i-1} p_{i} \times p_{i} p_{i+1},(0,0,1)\right\rangle}{\left\|p_{i-1} p_{i} \times p_{i} p_{i+1}\right\|} . $$ So $\operatorname{sign}\left(p_{i-1}, p_{i}, p_{i+1}\right)=1$ provided that $p_{i+1}$ lies on the left hand side of the oriented line $\ell$ spanned by $p_{i-1} p_{i}$, i.e., the side where $J\left(p_{i-1} p_{i}\right)$ points, and $\operatorname{sign}\left(p_{i-1}, p_{i}, p_{i+1}\right)=-1$ if $p_{i+1}$ lies on the right hand side of $\ell$, i.e., the side where $-J\left(p_{i-1} p_{i}\right)$ points. Note that if $P$ forms a simple closed curve which is oriented counterclockwise, then $\theta_{i}=\pi-\theta_{i}^{\prime}$, where $\theta_{i}^{\prime} \in(0, \pi)$ are the interior angles of $\Omega$. So the $\theta_{i}$ are sometimes called exterior angles. The total curvature of $P$ is defined as the sum of its turning angles: $$ \text { total } \bar{\kappa}[P]:=\sum_{i=1}^{n} \theta_{i} . $$ Now let $\alpha:[a, b] \rightarrow \mathbf{R}^{2}$ be a closed planar curve. For $i=0, \ldots, n$, set $t_{i}:=$ $a+i \frac{b-a}{n}$, and let $P_{n}[\alpha]$ be the closed polygon with vertices $\left(\alpha\left(t_{0}\right), \ldots, \alpha\left(t_{n}\right)\right)$. Lemma 13. If $n$ is sufficiently large, then total $\bar{\kappa}[\alpha]=\operatorname{total} \bar{\kappa}\left[P_{n}[\alpha]\right]$. Proof. Let $\theta$ be the turning angle of $\alpha$, and $\theta_{i}$ be the turning angles of $P_{n}[\alpha]$. Since $\alpha$ is $C^{1}$, there exists, for $i=1, \ldots, n$, an element $\bar{t}_{i} \in\left[t_{i-1}, t_{i}\right]$ such that $T\left(\bar{t}_{i}\right)$ is parallel to $\alpha\left(t_{i}\right)-\alpha\left(t_{i-1}\right)$.
Furthermore, choosing $n$ large enough, we can make sure that $\left|\theta\left(\bar{t}_{i}\right)-\theta\left(\bar{t}_{i-1}\right)\right|<\pi$, since $\theta$ is continuous. Then it follows that $\theta_{i}=\theta\left(\bar{t}_{i}\right)-\theta\left(\bar{t}_{i-1}\right)$. So $$ \sum \theta_{i}=\sum\left(\theta\left(\bar{t}_{i}\right)-\theta\left(\bar{t}_{i-1}\right)\right)=\theta(b)-\theta(a)=\Delta \theta, $$ which completes the proof. Now to complete the proof of Theorem 12 we need only verify: Lemma 14. For any simple closed polygonal curve $P$, oriented counterclockwise, $\operatorname{total} \bar{\kappa}[P]=2 \pi$. Proof. First note that the lemma holds for triangles. Then it holds for convex polygons as well, since they can be decomposed into triangles, by connecting a point in the interior of the region bounded by the polygon to all the vertices. The rest of the proof proceeds by induction. Suppose that the lemma holds for all polygons with $n$ sides, and assume that $P$ is a polygon with $n+1$ sides. If $P$ is convex, then we are done. Otherwise, the boundary of the convex hull of $P$, which we call $Q$, yields a convex simple closed polygonal curve whose vertices form a subset of the vertices of $P$. Any vertex of $P$ which does not lie on $Q$ is part of a polygonal path of $P$ whose end points lie on a pair of adjacent vertices of $Q$. Joining the edge in between these vertices to the polygonal path, we obtain a simple closed polygonal curve. There are a finite number of such curves which we call $R_{1}, \ldots, R_{m}$. We claim that $$ \operatorname{total} \bar{\kappa}[P]+\sum_{i=1}^{m} \operatorname{total} \bar{\kappa}\left[R_{i}\right]=\operatorname{total} \bar{\kappa}[Q]+2 m \pi, $$ which will complete the proof, since total $\bar{\kappa}\left[R_{i}\right]=2 \pi$ by the inductive hypothesis, and total $\bar{\kappa}[Q]=2 \pi$ as well since $Q$ is convex.
It is enough to establish the above equality for the case $m=1$: $$ \operatorname{total} \bar{\kappa}[P]+\operatorname{total} \bar{\kappa}[R]=\operatorname{total} \bar{\kappa}[Q]+2 \pi . $$ The general case for $m>1$ then follows from repeated application of the last equality. To establish this equality, let $v$ be a vertex of $R$ in the interior of the region bounded by $Q$. Then $v$ is also a vertex of $P$. Let $\alpha, \beta$ denote the turning angles of $R$ and $P$ at $v$. Then $\alpha+\beta=2 \pi-\left(\alpha^{\prime}+\beta^{\prime}\right)=0$, where $\alpha^{\prime}$ and $\beta^{\prime}$ are the interior angles of $R$ and $P$ at $v$; these two angles lie on opposite sides of the common edges of $R$ and $P$ at $v$, and thus sum to $2 \pi$. So turning angles at vertices of $P$ or $R$ which are contained inside $Q$ cancel each other. We then just need to consider the vertices $v$ of $P$ which lie on $Q$. In this case, if $v$ is a vertex of $P$ but not of $R$, then the turning angle of $P$ at $v$ is equal to that of $Q$. On the other hand, if $v$ also belongs to $R$, and $\alpha, \beta$ denote the turning angles of $P$ and $R$ at $v$, then $\alpha+\beta=2 \pi-\left(\alpha^{\prime}+\beta^{\prime}\right)=2 \pi-\gamma^{\prime}=\pi+\gamma$, where $\gamma$ is the turning angle of $Q$ at $v$, and $\gamma^{\prime}=\pi-\gamma$ is the corresponding interior angle. Thus we pick up an extra $\pi$ for each vertex of $R$ on $Q$, which completes the proof.

Our method of proof via polygonal approximations yields a generalization of Hopf's theorem to piecewise $C^{1}$ curves. We say $\alpha$ is piecewise $C^{1}$ provided that there are points $a=: t_{0}<t_{1}<\cdots<t_{k}:=b$ such that $\alpha$ is $C^{1}$ on each subinterval $\left[t_{i-1}, t_{i}\right]$. Then the points $\alpha\left(t_{i}\right), i=1, \ldots, k$ will be called the corners of $\alpha$ or the corners of the region $\Omega$ bounded by $\alpha$.
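Lemma 14 is easy to test numerically. The following sketch (plain Python; the function and the two sample polygons are our own choices, not from the notes) computes the turning angles $\theta_{i}$ as the signed angle between consecutive edge vectors, and checks that they sum to $2\pi$ both for a convex and for a nonconvex counterclockwise polygon.

```python
import math

def turning_angles(pts):
    # Turning (exterior) angle at each vertex of the closed polygon pts,
    # signed positive for a left turn, as in the definition above.
    n = len(pts)
    angles = []
    for i in range(n):
        ax = pts[i][0] - pts[i - 1][0]
        ay = pts[i][1] - pts[i - 1][1]
        bx = pts[(i + 1) % n][0] - pts[i][0]
        by = pts[(i + 1) % n][1] - pts[i][1]
        # atan2(cross, dot) is the signed angle between the two edge vectors
        angles.append(math.atan2(ax * by - ay * bx, ax * bx + ay * by))
    return angles

# A convex quadrilateral and a nonconvex L-shaped hexagon, both oriented
# counterclockwise; each should have total turning exactly 2*pi.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
l_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
total_square = sum(turning_angles(square))
total_l_shape = sum(turning_angles(l_shape))
```

Note that the individual angles of the L-shaped polygon do change sign (the reflex corner contributes $-\pi / 2$), yet the total is still $2 \pi$, as Lemma 14 asserts.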
Assuming that $\alpha$ is oriented counterclockwise, i.e., $J \alpha^{\prime}(t)$ points into $\Omega$ at all differentiable points $t \in[a, b]$, the turning angle $\theta_{i}$ at the corner $\alpha\left(t_{i}\right)$ is defined as $\pi-\theta_{i}^{\prime}$, where $\theta_{i}^{\prime} \in[0,2 \pi]$ is the interior angle of $\Omega$ at $\alpha\left(t_{i}\right)$. So $\theta_{i} \in[-\pi, \pi]$. Now we define $$ \operatorname{total} \bar{\kappa}[\alpha]:=\sum_{i} \operatorname{total} \bar{\kappa}\left[\left.\alpha\right|_{\left[t_{i-1}, t_{i}\right]}\right]+\sum_{i} \theta_{i} . $$

Theorem 15. Let $\alpha$ be a simple closed piecewise $C^{1}$ planar curve oriented counterclockwise. Then $$ \operatorname{total} \bar{\kappa}[\alpha]=2 \pi . $$ To prove the above theorem we just need to stipulate that the vertices of the polygonal approximations $P_{n}[\alpha]$ we discussed earlier include all corners of $\alpha$.

### The fundamental theorem of planar curves

If $\alpha:[0, L] \rightarrow \mathbf{R}^{2}$ is a planar curve parametrized by arclength, then its signed curvature yields a function $\bar{\kappa}:[0, L] \rightarrow \mathbf{R}$. Now suppose that we are given a continuous function $\bar{\kappa}:[0, L] \rightarrow \mathbf{R}$. Is it always possible to find a unit speed curve $\alpha:[0, L] \rightarrow \mathbf{R}^{2}$ whose signed curvature is $\bar{\kappa}$? If so, to what extent is such a curve unique? In this section we show that the signed curvature does indeed determine a planar curve, and such a curve is unique up to proper rigid motions. Recall that by a proper rigid motion we mean a composition of a translation with a proper rotation. A translation is a mapping $T: \mathbf{R}^{2} \rightarrow \mathbf{R}^{2}$ given by $$ T(p):=p+v, $$ where $v$ is a fixed vector.
And a proper rotation $\rho: \mathbf{R}^{2} \rightarrow \mathbf{R}^{2}$ is a linear mapping given by $$ \rho\left(\left[\begin{array}{l} x \\ y \end{array}\right]\right):=\left[\begin{array}{cc} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array}\right]\left[\begin{array}{l} x \\ y \end{array}\right] . $$

Exercise 16. Show that the signed curvature of a planar curve is invariant under proper rigid motions.

Exercise 17 (Local Convexity). Show that if the curvature of a planar curve $\alpha: I \rightarrow \mathbf{R}^{2}$ does not vanish at an interior point $t_{0}$ of $I$ then there exists an open neighborhood $U$ of $t_{0}$ in $I$ such that $\alpha(U)$ lies on one side of the tangent line of $\alpha$ at $t_{0}$. (Hint: By the invariance of signed curvature under rigid motions, we may assume that $\alpha\left(t_{0}\right)=(0,0)$ and $\alpha^{\prime}\left(t_{0}\right)=(1,0)$. Then we may reparametrize $\alpha$ as $(t, f(t))$ in a neighborhood of $t_{0}$. Recalling the formula for curvature of graphs, and applying Taylor's theorem, yields the desired result.)

Now suppose that we are given a function $\bar{\kappa}:[0, L] \rightarrow \mathbf{R}$. If there exists a curve $\alpha:[0, L] \rightarrow \mathbf{R}^{2}$ with signed curvature $\bar{\kappa}$, then $$ \theta^{\prime}=\bar{\kappa}, $$ where $\theta$ is the rotation angle of $\alpha$. Integration yields $$ \theta(t)=\int_{0}^{t} \bar{\kappa}(s) d s+\theta(0) . $$ By the definition of the rotation angle, $$ \alpha^{\prime}(t)=(\cos \theta(t), \sin \theta(t)) . $$ Consequently, $$ \alpha(t)=\left(\int_{0}^{t} \cos \theta(s) d s, \int_{0}^{t} \sin \theta(s) d s\right)+\alpha(0), $$ which gives an explicit formula for the desired curve.

Exercise 18 (Fundamental theorem of planar curves). Let $\alpha, \beta:[0, L] \rightarrow \mathbf{R}^{2}$ be unit speed planar curves with the same signed curvature function $\bar{\kappa}$.
Show that there exists a proper rigid motion $m: \mathbf{R}^{2} \rightarrow \mathbf{R}^{2}$ such that $\alpha(t)=m(\beta(t))$.

Exercise 19. Use the above formula to show that the only closed curves of constant curvature in the plane are circles.

## Lecture Notes 5

### Osculating Circle and Radius of Curvature

Recall that in a previous section we defined the osculating circle of a planar curve $\alpha: I \rightarrow \mathbf{R}^{2}$ at a point $\alpha(t)$ of nonvanishing curvature, $t \in I$, as the circle with radius $r(t)$ and center at $$ \alpha(t)+r(t) N(t), $$ where $$ r(t):=\frac{1}{\kappa(t)} $$ is called the radius of curvature of $\alpha$. If we had a way to define the osculating circle independently of curvature, then we could define curvature simply as the reciprocal of the radius of the osculating circle, and thus obtain a more geometric definition for curvature.

Exercise 1. Let $r(s, t)$ be the radius of the circle which is tangent to $\alpha$ at $\alpha(t)$ and also passes through $\alpha(s)$. Show that $$ \kappa(t)=\lim _{s \rightarrow t} \frac{1}{r(s, t)} . $$ To do the above exercise first recall that, as we showed in the previous lecture, curvature is invariant under rigid motions. Thus, after a rigid motion, we may assume that $\alpha(t)=(0,0)$ and $\alpha^{\prime}(t)$ is parallel to the $x$-axis. Then, we may assume that $\alpha(t)=(t, f(t))$, for some function $f: \mathbf{R} \rightarrow \mathbf{R}$ with $f(0)=0$ and $f^{\prime}(0)=0$. Further, recall that $$ \kappa(t)=\frac{\left|f^{\prime \prime}(t)\right|}{\left(\sqrt{1+f^{\prime}(t)^{2}}\right)^{3}} . $$ Thus $$ \kappa(0)=\left|f^{\prime \prime}(0)\right| . $$

${ }^{1}$ Last revised: September 13, 2021

Next note that the center of the circle which is tangent to $\alpha$ at $(0,0)$ must lie on the $y$-axis at some point $(0, r)$, and for this circle to also pass through the point $(s, f(s))$ we must have: $$ r^{2}=s^{2}+(r-f(s))^{2} . $$ Solving the above equation yields $1 / r=2|f(s)| /\left(f^{2}(s)+s^{2}\right)$, and taking the limit as $s \rightarrow 0$, via L'Hôpital's rule, we have $$ \lim _{s \rightarrow 0} \frac{2|f(s)|}{f^{2}(s)+s^{2}}=\left|f^{\prime \prime}(0)\right|=\kappa(0), $$ which is the desired result.

Note 2. The above limit can be used to define a notion of curvature for curves that are not twice differentiable. In this case, we may define the upper curvature and lower curvature respectively as the upper and lower limit of $$ \frac{2|f(s)|}{f^{2}(s)+s^{2}} $$ as $s \rightarrow 0$. We may even distinguish between right handed and left handed upper or lower curvature, by taking the right handed or left handed limits respectively.

Exercise* 3. Let $\alpha: I \rightarrow \mathbf{R}^{2}$ be a planar curve and $t_{0}, t_{1}, t_{2} \in I$ with $t_{1} \leq t_{0} \leq t_{2}$. Show that $\kappa\left(t_{0}\right)$ is the reciprocal of the limit of the radius of the circles which pass through $\alpha\left(t_{0}\right), \alpha\left(t_{1}\right)$ and $\alpha\left(t_{2}\right)$ as $t_{1}, t_{2} \rightarrow t_{0}$.

### Kneser's Nesting Theorem

We say that the curvature of a curve is monotone if it is strictly increasing or decreasing. The following result shows that the osculating circles of a curve with monotone curvature are "nested", i.e., they lie inside each other:

Theorem 4 (Kneser's Nesting theorem). Let $\alpha: I \rightarrow \mathbf{R}^{2}$ be a $C^{4}$ curve with monotone nonvanishing curvature. Then the osculating circles of $\alpha$ are pairwise disjoint.

To prove the above result we need the following lemma.
Note that if $\alpha: I \rightarrow \mathbf{R}^{2}$ is a curve with nonvanishing curvature, then the centers of the osculating circles of $\alpha$ form the curve $$ \beta(t):=\alpha(t)+r(t) N(t), $$ where $r(t):=1 / \kappa(t)$ is the radius of curvature of $\alpha$. This curve $\beta$ is known as the evolute of $\alpha$.

Exercise 5. Show that if $\alpha: I \rightarrow \mathbf{R}^{2}$ is a $C^{4}$ curve with monotone nonvanishing curvature, then its evolute $\beta$ is a regular curve which also has nonvanishing curvature. In particular $\beta$ contains no line segments.

Now we are ready to prove the main result of this section:

Proof of Kneser's Theorem. We may suppose that $\left\|\alpha^{\prime}\right\|=1$, and its curvature $\kappa$ is increasing. We need to show that for every $t_{0}, t_{1} \in I$, with $t_{0}<t_{1}$, the osculating circle at $t_{1}$ lies inside the osculating circle at $t_{0}$. To this end it suffices to show that $$ \left\|\beta\left(t_{0}\right)-\beta\left(t_{1}\right)\right\|+r\left(t_{1}\right)<r\left(t_{0}\right) . $$ To see this, first note that, since $\beta$ contains no line segments (see the previous exercise), $$ \left\|\beta\left(t_{0}\right)-\beta\left(t_{1}\right)\right\|<\int_{t_{0}}^{t_{1}}\left\|\beta^{\prime}(t)\right\| d t . $$ Now a simple computation completes the proof: $$ \begin{aligned} \int_{t_{0}}^{t_{1}}\left\|\beta^{\prime}(t)\right\| d t & =\int_{t_{0}}^{t_{1}}\left|r^{\prime}(t)\right| d t \\ & =\int_{t_{0}}^{t_{1}}-r^{\prime}(t) d t=r\left(t_{0}\right)-r\left(t_{1}\right) . \end{aligned} $$ (Here $\left\|\beta^{\prime}\right\|=\left|r^{\prime}\right|$, because $\beta^{\prime}=T+r^{\prime} N+r N^{\prime}=(1-r \kappa) T+r^{\prime} N=r^{\prime} N$; and $\left|r^{\prime}\right|=-r^{\prime}$, because, since $\kappa$ is increasing by assumption, $r$ is decreasing.)

Kneser's theorem has a number of interesting corollaries:

Exercise 6. Show that a curve with monotone curvature cannot have any self intersections.

Exercise 7. Show that a curve with monotone curvature cannot have any bitangent lines.
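Kneser's theorem can be illustrated numerically on a logarithmic spiral $e^{a t}(\cos t, \sin t)$, which has monotone nonvanishing curvature. The sketch below (plain Python; the spiral, the parameter values, and the curvature formula $\bar{\kappa}=\left(x^{\prime} y^{\prime \prime}-y^{\prime} x^{\prime \prime}\right) /\left\|\alpha^{\prime}\right\|^{3}$ for non-unit-speed curves are our own choices, not taken from the notes) computes two osculating circles and checks the strict inclusion $\left\|\beta\left(t_{0}\right)-\beta\left(t_{1}\right)\right\|+r\left(t_{1}\right)<r\left(t_{0}\right)$ used in the proof.

```python
import math

a = 0.2  # growth rate of the spiral (an arbitrary choice)

def osculating_circle(t):
    # Center and radius of the osculating circle of alpha(t) = e^{at}(cos t, sin t),
    # with derivatives computed by hand.
    e = math.exp(a * t)
    x, y = e * math.cos(t), e * math.sin(t)
    xp = e * (a * math.cos(t) - math.sin(t))
    yp = e * (a * math.sin(t) + math.cos(t))
    xpp = e * ((a * a - 1) * math.cos(t) - 2 * a * math.sin(t))
    ypp = e * ((a * a - 1) * math.sin(t) + 2 * a * math.cos(t))
    speed = math.hypot(xp, yp)
    kbar = (xp * ypp - yp * xpp) / speed ** 3      # signed curvature
    r = 1.0 / kbar                                 # radius of curvature
    center = (x - r * yp / speed, y + r * xp / speed)  # alpha + r N, N = J(T)
    return center, r

# The curvature of this spiral decreases in t, so the circle at the larger
# parameter should strictly contain the circle at the smaller one.
(c_big, r_big), (c_small, r_small) = osculating_circle(2.0), osculating_circle(0.0)
d = math.hypot(c_big[0] - c_small[0], c_big[1] - c_small[1])
gap = r_big - (d + r_small)  # positive iff the small circle lies strictly inside
```

A positive `gap` confirms the nesting for this pair of parameters; repeating the check over many pairs gives a (non-rigorous) picture of the full theorem.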
The last two exercises show that a curve with monotone curvature looks essentially as depicted in the following figure, i.e., it spirals around itself.

### Total Curvature and Convexity

The boundary of $X \subset \mathbf{R}^{n}$ is defined as the intersection of the closure of $X$ with the closure of its complement.

Exercise 8. Is it true that the boundary of any set is equal to its closure minus its interior? (Hint: Consider a ball with its center removed.)

We say that a simple closed curve $\alpha: I \rightarrow \mathbf{R}^{2}$ is convex provided that its image lies on one side of every tangent line. A subset of $\mathbf{R}^{n}$ is convex if it contains the line segment joining each pair of its points. Clearly the intersection of convex sets is convex.

Exercise 9. Show that a simple closed planar curve $\alpha: I \rightarrow \mathbf{R}^{2}$ is convex only if it lies on the boundary of a convex set. (Hint: By definition, through each point $p$ of $\Gamma$ there passes a line $\ell_{p}$ with respect to which $\Gamma$ lies on one side. Thus each $\ell_{p}$ defines a closed half plane $H_{p}$ which contains $\Gamma$. Show that $\Gamma$ lies on the boundary of the intersection of all these half planes.)

The total curvature of a curve $\alpha: I \rightarrow \mathbf{R}^{n}$ is defined as $$ \int_{I} \kappa(t) d t, $$ where $t$ is the arclength parameter.

Exercise 10. Show that the total curvature of any convex planar curve is $2 \pi$. (Hint: We only need to check that the exterior angles of polygonal approximations of a convex curve do not change sign. Recall that, as we showed in a previous section, the sum of these angles is the total signed curvature. So it follows that the signed curvature of any segment of $\alpha$ is either zero or has the same sign as any other segment. This in turn implies that the signed curvature of $\alpha$ does not change sign. So the total signed curvature of $\alpha$ is equal to its total curvature up to a sign.
Since by definition the curve is simple, however, the total signed curvature is $\pm 2 \pi$ by Hopf's theorem.)

Theorem 11. For any closed planar curve $\alpha: I \rightarrow \mathbf{R}^{2}$, $$ \int_{I} \kappa(t) d t \geq 2 \pi, $$ with equality if and only if $\alpha$ is convex.

First we show that the total curvature of any curve is at least $2 \pi$. To this end recall that when $t$ is the arclength parameter, $\kappa(t)=\left\|T^{\prime}(t)\right\|$. Thus the total curvature is simply the total length of the tantrix curve $T: I \rightarrow \mathbf{S}^{1}$. Since $T$ is a closed curve, to show that its total length is at least $2 \pi$, it suffices to check that the image of $T$ does not lie in any semicircle.

Exercise 12. Verify the last sentence.

To see that the image of $T$ does not lie in any semicircle, let $u \in \mathbf{S}^{1}$ be a unit vector and note that $$ \int_{a}^{b}\langle T(t), u\rangle d t=\int_{a}^{b}\left\langle\alpha^{\prime}(t), u\right\rangle d t=\langle\alpha(b)-\alpha(a), u\rangle=0 . $$ Since $T(t)$ is not constant (why?), it follows that the function $t \mapsto\langle T(t), u\rangle$ must change sign. So the image of $T$ must lie on both sides of the line through the origin and orthogonal to $u$. Since $u$ was chosen arbitrarily, it follows that the image of $T$ does not lie in any semicircle, as desired.

Next we show that the total curvature is $2 \pi$ if and only if $\alpha$ is convex. The "if" part has been established already in Exercise 10. To prove the "only if" part, suppose that $\alpha$ is not convex. Then there exists a tangent line $\ell_{0}$ of $\alpha$, say at $\alpha\left(t_{0}\right)$, with respect to which the image of $\alpha$ lies on both sides. Then $\alpha$ must have two more tangent lines parallel to $\ell_{0}$.

Exercise 13.
Verify the last sentence. (Hint: Let $u$ be a unit vector orthogonal to $\ell_{0}$ and note that the function $t \mapsto\left\langle\alpha(t)-\alpha\left(t_{0}\right), u\right\rangle$ must have a minimum and a maximum different from 0. Thus the derivative at these two points vanishes.)

Now that we have established that $\alpha$ has three distinct parallel tangent lines, it follows that it must have at least two parallel tangent vectors. This observation is worth recording:

Lemma 14. If $\alpha: I \rightarrow \mathbf{R}^{2}$ is a closed curve which is not convex, then it has a pair of parallel tangent vectors which generate distinct parallel lines.

Next note that:

Exercise 15. If $\alpha: I \rightarrow \mathbf{R}^{2}$ is a closed curve whose tantrix $T: I \rightarrow \mathbf{S}^{1}$ is not onto, then the total curvature is bigger than $2 \pi$. (Hint: This is an immediate consequence of the fact that $T$ is a closed curve and it does not lie in any semicircle.)

So if $T$ is not onto then we are done (recall that we are trying to show that if $\alpha$ is not convex, then its total curvature is bigger than $2 \pi$). We may assume, therefore, that $T$ is onto. This together with the above lemma yields that the total curvature is bigger than $2 \pi$. To see this, let $t_{1}, t_{2} \in I$ be the two points such that $T\left(t_{1}\right)$ and $T\left(t_{2}\right)$ are parallel and the corresponding tangent lines are distinct. Then $T$ restricted to $\left[t_{1}, t_{2}\right]$ is a closed nonconstant curve. So $T\left(\left[t_{1}, t_{2}\right]\right)$ either (i) covers some open segment of the circle twice or (ii) covers the entire circle. Since we have established that $T$ is onto, the first possibility implies that the length of $T$ is bigger than $2 \pi$. Further, since $T$ restricted to $I-\left(t_{1}, t_{2}\right)$ is not constant, the second possibility (ii) would again imply the first case (i).
Hence we conclude that if $\alpha$ is not convex, then its total curvature is bigger than $2 \pi$, which completes the proof of Theorem 11.

Corollary 16. Any simple closed curve $\alpha: I \rightarrow \mathbf{R}^{2}$ is convex if and only if its signed curvature does not change sign.

Proof. Since $\alpha$ is simple, its total signed curvature is $\pm 2 \pi$ by Hopf's theorem. After switching the orientation of $\alpha$, if necessary, we may assume that the total signed curvature is $2 \pi$. Suppose, towards a contradiction, that $\alpha$ is convex but its signed curvature changes sign. Then the integral of the signed curvature over the regions where it is positive must be bigger than $2 \pi$, which in turn implies that the total curvature is bigger than $2 \pi$; this contradicts the previous theorem, since $\alpha$ is convex. So if $\alpha$ is convex, then $\bar{\kappa}$ does not change sign.

Next suppose that $\bar{\kappa}$ does not change sign. Then the total signed curvature is equal to the total curvature (up to a sign), which, since the curve is simple, implies, via Hopf's theorem, that the total curvature is $2 \pi$. So by the previous theorem the curve is convex.

## Lecture Notes 6

### The four vertex theorem for convex curves

A vertex of a planar curve $\alpha: I \rightarrow \mathbf{R}^{2}$ is a point where the signed curvature of $\alpha$ has a local max or min.

Exercise 1. Show that an ellipse has exactly 4 vertices, unless it is a circle.

We say that a planar curve is convex if through each point in the image of it there passes a line with respect to which the curve lies on one side. The main aim of this section is to show that:

Theorem 2. Any convex $C^{3}$ planar curve has (at least) four vertices.

In fact any simple closed curve has 4 vertices, and it is not necessary to assume that $\kappa$ is $C^{1}$, but the proof is harder. On the other hand, if the curve is not simple, then the 4 vertex property may no longer be true:

Exercise 3.
Sketch the limaçon $\alpha:[0,2 \pi] \rightarrow \mathbf{R}^{2}$ given by $$ \alpha(t):=(2 \cos t+1)(\cos t, \sin t) $$ and show that it has only two vertices. (Hint: It looks like a loop with a smaller loop inside.)

The proof of the above theorem is by contradiction. Suppose that $\alpha$ has fewer than 4 vertices; then it must have exactly 2.

Exercise 4. Verify the last sentence.

Suppose that these two vertices occur at $t_{1}$ and $t_{2}$. Then $\kappa^{\prime}(t)$ will have one sign on $\left(t_{1}, t_{2}\right)$ and the opposite sign on $I-\left[t_{1}, t_{2}\right]$. Let $\ell$ be the line passing through $\alpha\left(t_{1}\right)$ and $\alpha\left(t_{2}\right)$. Then, since $\alpha$ is convex, $\alpha$ restricted to $\left(t_{1}, t_{2}\right)$ lies entirely in one of the closed half planes determined by $\ell$, and $\alpha$ restricted to $I-\left[t_{1}, t_{2}\right]$ lies in the other closed half plane.

Exercise 5. Verify the last sentence, i.e., show that if $\alpha: I \rightarrow \mathbf{R}^{2}$ is a simple closed convex planar curve, and $\ell$ is any line in the plane which intersects $\alpha(I)$, then either $\ell$ intersects $\alpha$ in exactly two points, or $\alpha(I)$ lies on one side of $\ell$. (Hint: Show that if $\alpha$ intersects $\ell$ at 3 points, then it lies on one side of $\ell$.)

Let $p$ be a point of $\ell$ and $v$ be a vector orthogonal to $\ell$; then $f: I \rightarrow \mathbf{R}$, given by $f(t):=\langle\alpha(t)-p, v\rangle$, has one sign on $\left(t_{1}, t_{2}\right)$ and has the opposite sign on $I-\left[t_{1}, t_{2}\right]$. Consequently, after replacing $v$ with $-v$ if necessary, $\kappa^{\prime}(t) f(t)$ is always nonnegative.
So $$ 0<\int_{I} \kappa^{\prime}(t)\langle\alpha(t)-p, v\rangle d t . $$ On the other hand $$ \begin{aligned} \int_{I} \kappa^{\prime}(t)\langle\alpha(t)-p, v\rangle d t & =\left.\kappa(t)\langle\alpha(t)-p, v\rangle\right|_{a} ^{b}-\int_{I} \kappa(t)\langle T(t), v\rangle d t \\ & =0-\int_{I}\left\langle-N^{\prime}(t), v\right\rangle d t \\ & =\left.\langle N(t), v\rangle\right|_{a} ^{b} \\ & =0 . \end{aligned} $$ So we have a contradiction, as desired.

Exercise 6. Justify each of the lines in the above computation.

### Schur's Arm Lemma

The following result describes how the distance between the end points of a planar curve is affected by its curvature:

Theorem 7 (Schur's Arm Lemma). Let $\alpha_{1}, \alpha_{2}:[0, L] \rightarrow \mathbf{R}^{2}$ be unit speed $C^{1}$ curves such that the union of each $\alpha_{i}$ with the line segment from $\alpha_{i}(0)$ to $\alpha_{i}(L)$ is a convex curve. Suppose that for almost all $t \in[0, L], \kappa_{i}(t)$ is well defined, e.g., $\alpha_{i}$ is piecewise $C^{2}$, and $$ \kappa_{1}(t) \geq \kappa_{2}(t) $$ for almost all $t \in[0, L]$. Then $$ \operatorname{dist}\left(\alpha_{1}(0), \alpha_{1}(L)\right) \leq \operatorname{dist}\left(\alpha_{2}(0), \alpha_{2}(L)\right) . $$

Proof. After a rigid motion we may assume that the segment $\alpha_{1}(0) \alpha_{1}(L)$ is parallel to the $x$-axis and $\alpha_{1}^{\prime}$ is rotating counterclockwise, see the picture below. Then there exists $t_{0} \in[0, L]$ such that $\alpha_{1}^{\prime}\left(t_{0}\right)$ is horizontal. After a rigid motion, we may assume that $\alpha_{2}^{\prime}\left(t_{0}\right)$ is horizontal as well. Now let $\theta_{i}$ be the angle that $\alpha_{i}^{\prime}$ makes with the positive direction of the $x$-axis, measured counterclockwise. Then $\theta_{i} \in[-\pi, \pi]$ (for $\theta_{1}$ this follows from convexity of $\alpha_{1}$, and for $\theta_{2}$, this follows from the assumption that $\kappa_{2} \leq \kappa_{1}$).
Further note that $$ \left|\theta_{i}(t)\right|=\left|\theta_{i}(t)-\theta_{i}\left(t_{0}\right)\right|=\left|\int_{t_{0}}^{t} \theta_{i}^{\prime}(s) d s\right|=\left|\int_{t_{0}}^{t} \kappa_{i}(s) d s\right| . $$ Thus $\left|\theta_{1}(t)\right| \geq\left|\theta_{2}(t)\right|$, and, since $\left|\theta_{i}(t)\right| \in[0, \pi]$, it follows that $$ \cos \left|\theta_{1}(t)\right| \leq \cos \left|\theta_{2}(t)\right| . $$ Finally note that, if we set $e_{1}:=(1,0)$, then $$ \begin{aligned} \left\|\alpha_{1}(L)-\alpha_{1}(0)\right\| & =\left\langle\alpha_{1}(L)-\alpha_{1}(0), e_{1}\right\rangle \\ & =\int_{0}^{L}\left\langle\alpha_{1}^{\prime}(t), e_{1}\right\rangle d t \\ & =\int_{0}^{L} \cos \left|\theta_{1}(t)\right| d t \\ & \leq \int_{0}^{L} \cos \left|\theta_{2}(t)\right| d t \\ & =\int_{0}^{L}\left\langle\alpha_{2}^{\prime}(t), e_{1}\right\rangle d t \\ & =\left\langle\alpha_{2}(L)-\alpha_{2}(0), e_{1}\right\rangle \leq\left\|\alpha_{2}(L)-\alpha_{2}(0)\right\| . \end{aligned} $$

Exercise 8. Is Schur's arm lemma true for nonconvex arcs?

Exercise 9. Prove the four vertex theorem for convex curves using Schur's arm lemma.

Exercise 10. Suppose that we have a pair of polygonal arcs $P_{1}$ and $P_{2}$ in the plane, each of which is convex (i.e., when we connect the end points of each arc, then we obtain a closed convex curve). Further suppose that these curves have the same number of segments and the corresponding segments (if we order them consecutively) have the same length. Now show that if the exterior angles in $P_{1}$ are smaller than the corresponding angles in $P_{2}$, then the distance between the end points of $P_{1}$ is larger than the distance between the end points of $P_{2}$.

### The four vertex theorem for general curves

In this section we generalize the four vertex theorem which was proved earlier. First we need the following result. An inflection point is a point where the signed curvature changes sign.

Lemma 11.
Let $\alpha:[a, b] \rightarrow \mathbf{R}^{2}$ be a simple $C^{2}$ curve. Suppose that $\alpha(a)$ and $\alpha(b)$ both lie on a line $\ell$ with respect to which the image of $\alpha$ lies on one side. Further suppose that $\alpha^{\prime}(a)$ and $\alpha^{\prime}(b)$ are parallel. Then either the image of $\alpha$ is a line segment, or else $\alpha$ has at least two inflection points.

A support line of a set $A \subset \mathbf{R}^{2}$ is a line with respect to which $A$ lies on one side and which intersects $A$ at some point.

Exercise* 12. Prove the above lemma.

Lemma 13. Let $\Gamma$ be a simple closed $C^{2}$ curve in the plane. Suppose that every support line of $\Gamma$ intersects $\Gamma$ in a single point. Then $\Gamma$ is convex.

Exercise* 14. Prove the above lemma.

Theorem 15. Every simple closed $C^{2}$ planar curve has four vertices.

Proof. We may suppose that the signed curvature of our curve $\Gamma$ changes sign at most twice, because there has to be a vertex between every pair of inflection points. We may also suppose that $\Gamma$ is not convex, since the convex case was treated in the last section. Then there exists, by the above lemma, a support line $\ell$ which is tangent to $\Gamma$ at two distinct points, say $A$ and $B$. Since $\Gamma$ is simple, there must be a portion of $\Gamma$, say $\Gamma_{1}$, bounded by $A$ and $B$ so that the unit tangent vectors of $\Gamma_{1}$ at $A$ and $B$, with respect to some parametrization, are parallel. Then by the above lemma $\Gamma_{1}$ must contain both inflection points of $\Gamma$. Consequently, the complement of $\Gamma_{1}$, say $\Gamma_{2}$, has no inflection points, and it follows that the union of $\Gamma_{2}$ with the line segment $A B$ is a closed convex curve, see the picture below. It is enough to show that the interior of $\Gamma_{2}$ contains at least two vertices, because $\Gamma_{1}$ already contains at least one vertex (since a vertex must be between every pair of inflection points), and the total number of vertices must be even.
First we show that the interior of $\Gamma_{2}$ must contain at least one vertex. Suppose not. Then the curvature of $\Gamma_{2}$ is monotone, so its minimum must be either at $A$ or $B$. Suppose that the minimum of curvature is at $A$. Let $A^{\prime}$ be the point in $\Gamma_{2}$ so that $A$ and $A^{\prime}$ divide $\Gamma_{2} \cup A B$ into portions of the same length. In one of these portions the curvature is less than in the other, which contradicts the Arm Lemma proved earlier. So there exists a point $p$ in the interior of $\Gamma_{2}$ which is a vertex.

Suppose that $\Gamma$ has only two vertices. Then it follows that $p$ must be a maximum point of curvature of $\Gamma_{2}$. Suppose that the length of the arc $p A$ in $\Gamma_{2}$ is not bigger than the length of $p B$. Let $x(t)$ be a parametrization of $p A$ from $p$ to $A$, and let $y(t)$ be a point of $p B$, if one exists, so that the curvature at $y(t)$ is equal to the curvature at $x(t)$. Suppose there exists a time $t_{1}$ so that $x\left(t_{1}\right)$ and $y\left(t_{1}\right)$ divide the length of $\Gamma_{2} \cup A B$ in half. Then one of the portions determined by $x\left(t_{1}\right)$ and $y\left(t_{1}\right)$ will have smaller curvature than the other at every point, which contradicts the Arm Lemma. So we may suppose that $t_{1}$ does not exist. This implies that there exists a point $A^{\prime}$ in the segment $p B$ of $\Gamma_{2}$ such that the curvature at $A^{\prime}$ is equal to the curvature at $A$ and the length of $A A^{\prime}$ in $\Gamma_{2}$ is less than half the length of $\Gamma_{2} \cup A B$. In this case, let $A^{\prime \prime}$ be the point in the segment $A^{\prime} B$ of $\Gamma_{2}$ so that $A$ and $A^{\prime \prime}$ divide the length of $\Gamma_{2} \cup A B$ in half. Then one of the portions determined by $A$ and $A^{\prime \prime}$ will have everywhere bigger curvature than the other, which is again a contradiction.
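The vertex counts discussed in this lecture can be checked numerically. The sketch below (plain Python; the sampling scheme and the curvature formula $\bar{\kappa}=\left(x^{\prime} y^{\prime \prime}-y^{\prime} x^{\prime \prime}\right) /\left(x^{\prime 2}+y^{\prime 2}\right)^{3 / 2}$ for non-unit-speed parametrizations are our own choices) samples the signed curvature of the ellipse $(2 \cos t, \sin t)$ from Exercise 1 and of the limaçon from Exercise 3, and counts local extrema: four for the ellipse, only two for the (nonsimple) limaçon.

```python
import math

def curvature(xp, yp, xpp, ypp):
    # Signed curvature of a regular parametrized curve from its derivatives.
    return (xp * ypp - yp * xpp) / math.hypot(xp, yp) ** 3

def local_extrema(vals):
    # Number of local extrema of a cyclic sequence of samples.
    n = len(vals)
    count = 0
    for i in range(n):
        d_prev = vals[i] - vals[i - 1]
        d_next = vals[(i + 1) % n] - vals[i]
        if d_prev * d_next < 0:
            count += 1
    return count

n = 1000
ts = [2 * math.pi * i / n for i in range(n)]

# Ellipse (2 cos t, sin t): vertices at the ends of the axes.
ellipse_k = [curvature(-2 * math.sin(t), math.cos(t),
                       -2 * math.cos(t), -math.sin(t)) for t in ts]

# Limacon (2 cos t + 1)(cos t, sin t), with derivatives computed by hand:
# x = 1 + cos 2t + cos t, y = sin 2t + sin t.
limacon_k = [curvature(-2 * math.sin(2 * t) - math.sin(t),
                       2 * math.cos(2 * t) + math.cos(t),
                       -4 * math.cos(2 * t) - math.cos(t),
                       -4 * math.sin(2 * t) - math.sin(t)) for t in ts]
```

The samples also show that the signed curvature of the limaçon never vanishes, so its two vertices are not forced by inflection points; it simply fails to be simple, which is why Theorem 15 does not apply.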
### Area of planar regions and the Isoperimetric inequality

The area of a rectangle is defined as the product of the lengths of two of its adjacent sides. Let $X \subset \mathbf{R}^{2}$ be any set, $R$ be a collection of rectangles which cover $X$, and $\operatorname{Area}(X, R)$ be the sum of the areas of all rectangles in $R$. Then the area of $X$ is defined as the infimum of $\operatorname{Area}(X, R)$, where $R$ ranges over all different ways to cover $X$ by rectangles. In particular it follows that, if $X \subset Y$, then $\operatorname{Area}(X) \leq \operatorname{Area}(Y)$, and if $X=X_{1} \cup X_{2}$, where $X_{1}$ and $X_{2}$ are disjoint, then $\operatorname{Area}(X)=\operatorname{Area}\left(X_{1}\right)+\operatorname{Area}\left(X_{2}\right)$. These in turn quickly yield the areas of triangles and polygons.

Exercise 16 (Invariance under isometry and the Special linear group). Show that area is invariant under rigid motions of $\mathbf{R}^{2}$, and that dilation by a factor of $r$, i.e., multiplying each point of $\mathbf{R}^{2}$ by $r$, changes the area by a factor of $r^{2}$. More generally show that any linear transformation $A: \mathbf{R}^{2} \rightarrow \mathbf{R}^{2}$ changes the area by a factor of $|\operatorname{det}(A)|$.

Exercise 17 (Archimedes). Compute the area of a circle. (Hint: For any $n$ compute the area of regular $n$-gons which are inscribed in the circle, and take the limit. Each of these areas is the sum of $n$ isosceles triangles with an angle $2 \pi / n$, and adjacent sides of length equal to the radius of the circle. This gives a lower bound for the area. An upper bound may also be obtained by taking the limit of regular polygons which circumscribe the circle.)

Recalling the definition of Riemann sums, one quickly observes that $$ \operatorname{Area}(X)=\iint_{X} 1 d x d y . $$ We say that a subset $X$ of $\mathbf{R}^{n}$ is connected provided that the only subsets of $X$ which are both open and closed in $X$ are $X$ and the empty set.
A component of $X$ is a maximal connected subset of $X$, i.e., a nonempty connected subset which is not contained in any strictly larger connected subset of $X$. Let $\alpha: I \rightarrow \mathbf{R}^{2}$ be a simple closed planar curve. By the Jordan curve theorem (which we will not prove here), $\mathbf{R}^{2}-\alpha(I)$ consists of exactly two connected components, and the boundary of each component is $\alpha(I)$. Further, one of these components, which we call the interior of $\alpha$, is bounded, i.e., contained in some large disk, while the other is unbounded. By the area of $\alpha$ we mean the area of its interior.

Theorem 18. For any simple closed planar curve $\alpha: I \rightarrow \mathbf{R}^{2}$, $$ \operatorname{Area}[\alpha] \leq \frac{\operatorname{Length}[\alpha]^{2}}{4 \pi} . $$ Equality holds only when $\alpha$ is a circle.

Our proof of the above theorem hinges on the following subtle fact whose proof we leave out:

Lemma 19. Of all simple closed curves of fixed length $L$, there exists at least one with the biggest area. Further, every such curve is $C^{1}$.

Exercise* 20. Show that the area maximizer (for a fixed length) must be convex. (Hint: It is enough to show that if the maximizer, say $\alpha$, is not convex, then there exists a line $\ell$ with respect to which $\alpha(I)$ lies on one side, and which intersects $\alpha(I)$ at two points $p$ and $q$ but not in the intervening open segment of $\ell$ determined by $p$ and $q$. Then reflecting one of the segments of $\alpha(I)$, determined by $p$ and $q$, through $\ell$ increases area while leaving the length unchanged.)

We say that $\alpha$ is symmetric with respect to a line $\ell$ provided that the image of $\alpha$ is invariant under reflection with respect to $\ell$.

Exercise 21. Show that a $C^{1}$ convex planar curve $\alpha: I \rightarrow \mathbf{R}^{2}$ is a circle if and only if for every unit vector $u \in \mathbf{S}^{1}$ there exists a line perpendicular to $u$ with respect to which $\alpha$ is symmetric. (Hint: Suppose that $\alpha$ has a line of symmetry in every direction.
First show that each line of symmetry is unique in the corresponding direction. After a translation we may assume that $\alpha$ is symmetric with respect to both the $x$-axis and the $y$-axis. Show that this yields that $\alpha$ is symmetric with respect to the origin, i.e., rotation by $180^{\circ}$. From this and the uniqueness of the lines of symmetry conclude that every line of symmetry passes through the origin. Finally show that each line of symmetry must meet the curve orthogonally at the intersection points. This shows that $\left\langle\alpha(t), \alpha^{\prime}(t)\right\rangle=0$, which in turn yields that $\|\alpha(t)\|=$ const.)

Now we are ready to prove the isoperimetric inequality. The proof we give here is based on Steiner's symmetrization technique. Let $\alpha: I \rightarrow \mathbf{R}^{2}$ be an area maximizer. By Exercise 20 we may assume that $\alpha$ is convex. We claim that $\alpha$ must have a line of symmetry in every direction, which would show, by Exercise 21, that $\alpha$ is a circle, and hence would complete the proof. Suppose, towards a contradiction, that there exists a direction $u \in \mathbf{S}^{1}$ such that $\alpha$ has no line of symmetry in that direction. After a rigid motion, we may assume that $u=(0,1)$. Let $[a, b]$ be the projection of $\alpha(I)$ to the $x$-axis. Then, since $\alpha$ is convex, every vertical line which passes through an interior point of $(a, b)$ intersects $\alpha(I)$ at precisely two points. Let $f(x)$ be the $y$-coordinate of the higher point, and $g(x)$ be the $y$-coordinate of the lower point. Then $$ \operatorname{Area}[\alpha]=\int_{a}^{b}(f(x)-g(x)) d x . $$ Further note that if $\alpha$ is $C^{1}$ then $f$ and $g$ are $C^{1}$ as well; thus $$ \operatorname{Length}[\alpha]=f(a)-g(a)+\int_{a}^{b} \sqrt{1+f^{\prime}(x)^{2}} d x+\int_{a}^{b} \sqrt{1+g^{\prime}(x)^{2}} d x+f(b)-g(b) . $$
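As a sanity check, the two formulas above can be evaluated numerically for the unit circle, where $f(x)=\sqrt{1-x^{2}}$ and $g=-f$: the area should come out to $\pi$ and the length to $2 \pi$. The sketch below (plain Python; the midpoint rule, the step sizes, and the tolerances are our own choices, and the vertical-segment terms $f(\pm 1)-g(\pm 1)$ vanish for the circle) approximates both integrals while keeping all sample points strictly inside $(-1,1)$, away from the vertical tangents.

```python
import math

# Upper and lower halves of the unit circle as graphs over [-1, 1].
f = lambda x: math.sqrt(1 - x * x)
g = lambda x: -math.sqrt(1 - x * x)

def midpoint_integral(F, a=-1.0, b=1.0, n=100000):
    # Midpoint rule; the sample points stay strictly inside (a, b),
    # which avoids the singular integrand at the endpoints.
    h = (b - a) / n
    return sum(F(a + (i + 0.5) * h) for i in range(n)) * h

def deriv(u, x, h=1e-6):
    # Central-difference derivative; h is smaller than half a grid cell,
    # so u is never evaluated outside its domain.
    return (u(x + h) - u(x - h)) / (2 * h)

area = midpoint_integral(lambda x: f(x) - g(x))
length = (f(-1) - g(-1)
          + midpoint_integral(lambda x: math.sqrt(1 + deriv(f, x) ** 2))
          + midpoint_integral(lambda x: math.sqrt(1 + deriv(g, x) ** 2))
          + f(1) - g(1))
```

The length estimate converges slowly (the integrand blows up like $1 /\sqrt{1-x^{2}}$ near the endpoints), which is why its tolerance below is looser than the one for the area.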
Now we are going to define a new curve $\bar{\alpha}$ which is composed of the graph of the function $\bar{f}:[a, b] \rightarrow \mathbf{R}$ given by $$ \bar{f}(x):=\frac{f(x)-g(x)}{2}, $$ on top, the graph of $-\bar{f}$ in the bottom, and vertical segments, which may consist only of a single point, on right and left (We may think of this curve as the boundary of the region which is obtained when we move the segments with end points at $f(x)$ and $g(x)$ parallel to themselves until their centers lie on the $x$-axis). One immediately checks that $$ \text { Area }[\bar{\alpha}]=\text { Area }[\alpha] \text {. } $$ Further, note that since by assumption $\alpha$ is not symmetric with respect to the $x$-axis, $\bar{f}$ is strictly positive on $(a, b)$. This may be used to show that $$ \text { Length }[\bar{\alpha}]<\text { Length }[\alpha] . $$ Exercise 22. Verify the last inequality above (Hint: It is enough to check that $\int_{a}^{b} \sqrt{1+\bar{f}^{\prime}(x)^{2}} d x$ is strictly smaller than either of the integrals in the above formula for the length of $\alpha$ ). ## Lecture Notes 7 ### The Frenet-Serret Frame and Torsion Recall that if $\alpha: I \rightarrow \mathbf{R}^{n}$ is a unit speed curve, then the unit tangent vector is defined as $$ T(t):=\alpha^{\prime}(t) $$ Further, if $\kappa(t)=\left\|T^{\prime}(t)\right\| \neq 0$, we may define the principal normal as $$ N(t):=\frac{T^{\prime}(t)}{\kappa(t)} . $$ As we saw earlier, in $\mathbf{R}^{2},\{T, N\}$ form a moving frame whose derivatives may be expressed in terms of $\{T, N\}$ itself. In $\mathbf{R}^{3}$, however, we need a third vector to form a frame. 
This is achieved by defining the binormal as $$ B(t):=T(t) \times N(t) $$ Then similar to the computations we did in finding the derivatives of $\{T, N\}$, it is easily shown that $$ \left(\begin{array}{c} T(t) \\ N(t) \\ B(t) \end{array}\right)^{\prime}=\left(\begin{array}{ccc} 0 & \kappa(t) & 0 \\ -\kappa(t) & 0 & \tau(t) \\ 0 & -\tau(t) & 0 \end{array}\right)\left(\begin{array}{c} T(t) \\ N(t) \\ B(t) \end{array}\right) $$ where $\tau$ is the torsion which is defined as $$ \tau(t):=-\left\langle B^{\prime}, N\right\rangle $$ Note that torsion is well defined only when $\kappa \neq 0$, so that $N$ is defined. Torsion is a measure of how much a space curve deviates from lying in a plane: ${ }^{1}$ Last revised: September 28, 2021 Exercise 1. Show that if the torsion of a curve $\alpha: I \rightarrow \mathbf{R}^{3}$ is zero everywhere then it lies in a plane. (Hint: We need to check that there exist a point $p$ and a (fixed) vector $v$ in $\mathbf{R}^{3}$ such that $\langle\alpha(t)-p, v\rangle=0$. Let $v=B$, and $p$ be any point of the curve.) Exercise 2. Compute the curvature and torsion of the circular helix $$ (r \cos t, r \sin t, h t) $$ where $r$ and $h$ are constants. How does changing the values of $r$ and $h$ affect the curvature and torsion? ### Curves of Constant Curvature and Torsion The above exercise shows that the curvature and torsion of a circular helix are constant. The converse is also true: Theorem 3. The only curve $\alpha: I \rightarrow \mathbf{R}^{3}$ whose curvature and torsion are nonzero constants is the circular helix. The rest of this section develops a number of exercises which lead to the proof of the above theorem. Exercise 4. Show that $\alpha: I \rightarrow \mathbf{R}^{3}$ is a circular helix (up to rigid motion) provided that there exists a vector $v$ in $\mathbf{R}^{3}$ such that $$ \langle T, v\rangle=\text { const } $$ and the projection of $\alpha$ into a plane orthogonal to $v$ is a circle. 
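Exercise 2 can be checked numerically. The sketch below (the finite-difference step and the helix parameters $r=2$, $h=1$ are arbitrary choices of mine) uses the standard formulas $\kappa = \|\alpha'\times\alpha''\|/\|\alpha'\|^{3}$ and $\tau = \langle\alpha'\times\alpha'',\alpha'''\rangle/\|\alpha'\times\alpha''\|^{2}$, valid for regular curves that need not have unit speed, and should recover $\kappa = r/(r^{2}+h^{2})$ and $\tau = h/(r^{2}+h^{2})$.

```python
import math

def deriv(f, t, h=1e-3):
    # Central difference for a vector-valued function f: R -> R^3.
    return [(a - b) / (2 * h) for a, b in zip(f(t + h), f(t - h))]

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def curvature_torsion(alpha, t):
    # kappa = |a' x a''| / |a'|^3,  tau = <a' x a'', a'''> / |a' x a''|^2.
    a1 = deriv(alpha, t)
    a2 = deriv(lambda s: deriv(alpha, s), t)
    a3 = deriv(lambda s: deriv(lambda u: deriv(alpha, u), s), t)
    c = cross(a1, a2)
    nc = math.sqrt(sum(x * x for x in c))
    na1 = math.sqrt(sum(x * x for x in a1))
    return nc / na1**3, sum(x * y for x, y in zip(c, a3)) / nc**2

r, hgt = 2.0, 1.0
helix = lambda t: [r * math.cos(t), r * math.sin(t), hgt * t]
kappa_num, tau_num = curvature_torsion(helix, 0.3)
print(kappa_num, tau_num)  # ≈ 0.4, 0.2, i.e. r/(r²+h²) and h/(r²+h²)
```

Increasing $h$ with $r$ fixed drives both invariants toward zero (the helix straightens out), while $h=0$ gives a circle with $\tau=0$.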
So first we need to show that when $\kappa$ and $\tau$ are constants, $v$ of the above exercise can be found. We do this with the aid of the Frenet-Serret frame. Since $\{T, N, B\}$ is an orthonormal frame, we may write $$ v=a(t) T(t)+b(t) N(t)+c(t) B(t) . $$ Next we need to find $a, b$ and $c$ subject to the conditions that $v$ is a constant vector, i.e., $v^{\prime}=0$, and that $\langle T, v\rangle=$ const. The latter implies that $$ a=\text { const } $$ because $\langle T, v\rangle=a$. In particular, we may set $a=1$. Exercise 5. By setting $v^{\prime}=0$ show that $$ v=T+\frac{\kappa}{\tau} B $$ and check that $v$ is the desired vector, i.e. $\langle T, v\rangle=$ const and $v^{\prime}=0$. So to complete the proof of the theorem, only the following remains: Exercise 6. Show that the projection of $\alpha$ into a plane orthogonal to $v$, i.e., $$ \bar{\alpha}(t):=\alpha(t)-\langle\alpha(t), v\rangle \frac{v}{\|v\|^{2}} $$ is a circle. (Hint: Compute the curvature of $\bar{\alpha}$.) ### Intrinsic Characterization of Spherical Curves In this section we derive a characterization in terms of curvature and torsion for unit speed curves which lie on a sphere. Suppose $\alpha: I \rightarrow \mathbf{R}^{3}$ lies on a sphere of radius $r$. Then there exists a point $p$ in $\mathbf{R}^{3}$ (the center of the sphere) such that $$ \|\alpha(t)-p\|=r $$ Thus differentiation yields $$ \langle T(t), \alpha(t)-p\rangle=0 . $$ Differentiating again we obtain: $$ \left\langle T^{\prime}(t), \alpha(t)-p\right\rangle+1=0 . $$ The above expression shows that $\kappa(t) \neq 0$. Consequently $N$ is well defined, and we may rewrite the above expression as $$ \kappa(t)\langle N(t), \alpha(t)-p\rangle+1=0 . 
$$ Differentiating for the third time yields $$ \kappa^{\prime}(t)\langle N(t), \alpha(t)-p\rangle+\kappa(t)\langle-\kappa(t) T(t)+\tau(t) B(t), \alpha(t)-p\rangle=0, $$ which using the previous expressions above we may rewrite as $$ -\frac{\kappa^{\prime}(t)}{\kappa(t)}+\kappa(t) \tau(t)\langle B(t), \alpha(t)-p\rangle=0 . $$ Next note that, since $\{T, N, B\}$ is orthonormal, $$ \begin{aligned} r^{2} & =\|\alpha(t)-p\|^{2} \\ & =\langle\alpha(t)-p, T(t)\rangle^{2}+\langle\alpha(t)-p, N(t)\rangle^{2}+\langle\alpha(t)-p, B(t)\rangle^{2} \\ & =0+\frac{1}{\kappa^{2}(t)}+\langle\alpha(t)-p, B(t)\rangle^{2} . \end{aligned} $$ Thus, combining the previous two calculations, we obtain: $$ \left(\frac{\kappa^{\prime}(t)}{\kappa^{2}(t)}\right)^{2}=\tau^{2}(t)\left(r^{2}-\frac{1}{\kappa^{2}(t)}\right) $$ Exercise 7. Check the converse, that is supposing that the curvature and torsion of some curve satisfy the above expression, verify whether the curve has to lie on a sphere of radius $r$. To do the above exercise, we need to first find out where the center $p$ of the sphere could lie. To this end we start by writing $$ p=\alpha(t)+a(t) T(t)+b(t) N(t)+c(t) B(t), $$ and try to find $a(t), b(t)$ and $c(t)$ so that $p^{\prime}=(0,0,0)$, and $\|\alpha(t)-p\|=r$. To make things easier, we may note that $a(t)=0$ (why?). Then we just need to find $b(t)$ and $c(t)$ subject to the two constraints mentioned above. We need to verify whether this is possible, when $\kappa$ and $\tau$ satisfy the above expression. ### The Local Canonical Form In this section we show that all $C^{3}$ curves in $\mathbf{R}^{3}$ essentially look the same in the neighborhood of any point which has nonvanishing curvature and a given sign for torsion. Let $\alpha:(-\epsilon, \epsilon) \rightarrow \mathbf{R}^{3}$ be a $C^{3}$ curve. 
By Taylor's theorem $$ \alpha(t)=\alpha(0)+\alpha^{\prime}(0) t+\frac{1}{2} \alpha^{\prime \prime}(0) t^{2}+\frac{1}{6} \alpha^{\prime \prime \prime}(0) t^{3}+R(t) $$ where $\lim _{t \rightarrow 0}|R(t)| / t^{3}=0$, i.e., for $t$ small, the remainder term $R(t)$ is negligible. Now suppose that $\alpha$ has unit speed. Then $$ \begin{gathered} \alpha^{\prime}=T \\ \alpha^{\prime \prime}=T^{\prime}=\kappa N \\ \alpha^{\prime \prime \prime}=(\kappa N)^{\prime}=\kappa^{\prime} N+\kappa(-\kappa T+\tau B)=-\kappa^{2} T+\kappa^{\prime} N+\kappa \tau B . \end{gathered} $$ So we have $$ \begin{aligned} \alpha(t) & =\alpha(0)+T_{0} t+\frac{\kappa_{0} N_{0} t^{2}}{2}+\frac{\left(-\kappa_{0}^{2} T_{0}+\kappa_{0}^{\prime} N_{0}+\kappa_{0} \tau_{0} B_{0}\right) t^{3}}{6}+R(t) \\ & =\alpha(0)+\left(t-\frac{\kappa_{0}^{2}}{6} t^{3}\right) T_{0}+\left(\frac{\kappa_{0}}{2} t^{2}+\frac{\kappa_{0}^{\prime}}{6} t^{3}\right) N_{0}+\left(\frac{\kappa_{0} \tau_{0}}{6} t^{3}\right) B_{0}+R(t) \end{aligned} $$ Now if, after a rigid motion, we suppose that $\alpha(0)=(0,0,0), T=(1,0,0)$, $N=(0,1,0)$, and $B=(0,0,1)$, then we have $$ \alpha(t)=\left(t-\frac{\kappa_{0}^{2}}{6} t^{3}+R_{x}, \frac{\kappa_{0}}{2} t^{2}+\frac{\kappa_{0}^{\prime}}{6} t^{3}+R_{y}, \frac{\kappa_{0} \tau_{0}}{6} t^{3}+R_{z}\right), $$ where $\left(R_{x}, R_{y}, R_{z}\right)=R$. It follows then that when $t$ is small $$ \alpha(t) \approx\left(t, \frac{\kappa_{0}}{2} t^{2}, \frac{\kappa_{0} \tau_{0}}{6} t^{3}\right) $$ Thus, up to third order of differentiation, any curve with nonvanishing curvature in space may be approximated by a cubic curve. Also note that the above approximation shows that any planar curve with nonvanishing curvature locally looks like a parabola. Exercise 8. Show that the curvature of a space curve $\alpha$ at a point $t_{0}$ with nonvanishing curvature is the same as the curvature of the projection of $\alpha$ into the osculating plane at time $t_{0}$. 
(The osculating plane is the plane generated by $T$ and $N$ ). ## Lecture Notes 8 ## Surfaces ### Definition of a regular embedded surface An $n$-dimensional open ball of radius $r$ centered at $p$ is defined by $$ B_{r}^{n}(p):=\left\{x \in \mathbf{R}^{n} \mid \operatorname{dist}(x, p)<r\right\} . $$ We say a subset $U \subset \mathbf{R}^{n}$ is open if for each $p$ in $U$ there exists an $\epsilon>0$ such that $B_{\epsilon}^{n}(p) \subset U$. Let $A \subset \mathbf{R}^{n}$ be an arbitrary subset, and $U \subset A$. We say that $U$ is open in $A$ if there exists an open set $V \subset \mathbf{R}^{n}$ such that $U=A \cap V$. A mapping $f: A \rightarrow B$ between arbitrary subsets of $\mathbf{R}^{n}$ is said to be continuous if for every open set $U \subset B, f^{-1}(U)$ is open in $A$. Intuitively, we may think of a continuous map as one which sends nearby points to nearby points: Exercise 1. Let $A, B \subset \mathbf{R}^{n}$ be arbitrary subsets, $f: A \rightarrow B$ be a continuous map, and $p \in A$. Show that for every $\epsilon>0$, there exists a $\delta>0$ such that whenever $\operatorname{dist}(x, p)<\delta$, then $\operatorname{dist}(f(x), f(p))<\epsilon$. Two subsets $A, B \subset \mathbf{R}^{n}$ are said to be homeomorphic, or topologically equivalent, if there exists a mapping $f: A \rightarrow B$ such that $f$ is one-to-one, onto, continuous, and has a continuous inverse. Such a mapping is called a homeomorphism. We say a subset $M \subset \mathbf{R}^{3}$ is an embedded surface if every point in $M$ has an open neighborhood in $M$ which is homeomorphic to an open subset of $\mathbf{R}^{2}$. Exercise 2. (Stereographic projection) Show that the standard sphere $\mathbf{S}^{2}:=\left\{p \in \mathbf{R}^{3} \mid\|p\|=1\right\}$ is an embedded surface (Hint: Show that the stereographic projection $\pi_{+}$from the north pole gives a homeomorphism between $\mathbf{R}^{2}$ and $\mathbf{S}^{2}-(0,0,1)$. 
Similarly, the stereographic projection $\pi_{-}$ ${ }^{1}$ Last revised: October 15, 2021 from the south pole gives a homeomorphism between $\mathbf{R}^{2}$ and $\mathbf{S}^{2}-(0,0,-1)$; $\pi_{+}(x, y, z):=\left(\frac{x}{1-z}, \frac{y}{1-z}, 0\right)$, and $\left.\pi_{-}(x, y, z):=\left(\frac{x}{1+z}, \frac{y}{1+z}, 0\right)\right)$. Exercise 3. (Surfaces as graphs) Let $U \subset \mathbf{R}^{2}$ be an open subset and $f: U \rightarrow \mathbf{R}$ be a continuous map. Then $$ \operatorname{graph}(f):=\{(x, y, f(x, y)) \mid(x, y) \in U\} $$ is a surface. (Hint: Show that the orthogonal projection $\pi(x, y, z):=(x, y)$ gives the desired homeomorphism). Note that by the above exercise the cone given by $z=\sqrt{x^{2}+y^{2}}$, and the troughlike surface $z=|x|$ are examples of embedded surfaces. These surfaces, however, are not "regular", as we will define below. From the point of view of differential geometry it is desirable that a surface be without sharp corners or vertices. Let $U \subset \mathbf{R}^{n}$ be open, and $f: U \rightarrow \mathbf{R}^{m}$ be a map. Note that $f$ may be regarded as a list of $m$ functions of $n$ variables: $f(p)=\left(f^{1}(p), \ldots, f^{m}(p)\right)$, $f^{i}(p)=f^{i}\left(p^{1}, \ldots, p^{n}\right)$. The first order partial derivatives of $f$ are given by $$ D_{j} f^{i}(p):=\lim _{h \rightarrow 0} \frac{f^{i}\left(p^{1}, \ldots, p^{j}+h, \ldots, p^{n}\right)-f^{i}\left(p^{1}, \ldots, p^{j}, \ldots, p^{n}\right)}{h} . $$ If all the functions $D_{j} f^{i}: U \rightarrow \mathbf{R}$ exist and are continuous, then we say that $f$ is differentiable $\left(C^{1}\right)$. We say that $f$ is smooth $\left(C^{\infty}\right)$ if the partial derivatives of $f$ of all order exist and are continuous. These are defined by $$ D_{j_{1} j_{2} \ldots j_{k}} f^{i}:=D_{j_{1}}\left(D_{j_{2}}\left(\cdots\left(D_{j_{k}} f^{i}\right) \cdots\right)\right) . 
$$ Let $f: U \subset \mathbf{R}^{n} \rightarrow \mathbf{R}^{m}$ be a differentiable map, and $p \in U$. Then the Jacobian of $f$ at $p$ is an $m \times n$ matrix defined by $$ J_{p}(f):=\left(\begin{array}{ccc} D_{1} f^{1}(p) & \cdots & D_{n} f^{1}(p) \\ \vdots & & \vdots \\ D_{1} f^{m}(p) & \cdots & D_{n} f^{m}(p) \end{array}\right) . $$ We say that $p$ is a regular point of $f$ if the rank of $J_{p}(f)$ is equal to $n$. If $f$ is regular at all points $p \in U$, then we say that $f$ is regular. Exercise 4 (Monge Patch). Let $f: U \subset \mathbf{R}^{2} \rightarrow \mathbf{R}$ be a differentiable map. Show that the mapping $X: U \rightarrow \mathbf{R}^{3}$, defined by $X\left(u^{1}, u^{2}\right):=\left(u^{1}, u^{2}, f\left(u^{1}, u^{2}\right)\right)$ is regular (the pair $(X, U)$ is called a Monge Patch). If $f$ is a differentiable function, then we define, $$ D_{i} f(p):=\left(D_{i} f^{1}(p), \ldots, D_{i} f^{n}(p)\right) . $$ Exercise 5. Show that $f: U \subset \mathbf{R}^{2} \rightarrow \mathbf{R}^{3}$ is regular at $p$ if and only if $$ \left\|D_{1} f(p) \times D_{2} f(p)\right\| \neq 0 . $$ Let $f: U \subset \mathbf{R}^{n} \rightarrow \mathbf{R}^{m}$ be a differentiable map and $p \in U$. Then the differential of $f$ at $p$ is a mapping from $\mathbf{R}^{n}$ to $\mathbf{R}^{m}$ defined by $$ d f_{p}(x):=\lim _{t \rightarrow 0} \frac{f(p+t x)-f(p)}{t} . $$ Exercise 6. Show that (i) $$ d f_{p}(x)=J_{p}(f)\left(\begin{array}{c} x^{1} \\ \vdots \\ x^{n} \end{array}\right) . $$ Conclude then that (ii) $d f_{p}$ is a linear map, and (iii) $p$ is a regular point of $f$ if and only if $d f_{p}$ is one-to-one. Further, (iv) show that if $f$ is a linear map, then $d f_{p}(x)=f(x)$, and $(\mathrm{v}) J_{p}(f)$ coincides with the matrix representation of $f$ with respect to the standard basis. By a regular patch we mean a pair $(U, X)$ where $U \subset \mathbf{R}^{2}$ is open and $X: U \rightarrow \mathbf{R}^{3}$ is a one-to-one, smooth, and regular mapping. 
Furthermore, we say that the patch is proper if $X^{-1}$ is continuous. We say a subset $M \subset \mathbf{R}^{3}$ is a regular embedded surface, if for each point $p \in M$ there exists a proper regular patch $(U, X)$ and an open set $V \subset \mathbf{R}^{3}$ such that $X(U)=M \cap V$. The pair $(U, X)$ is called a local parameterization for $M$ at $p$. Exercise 7. Let $f: U \subset \mathbf{R}^{2} \rightarrow \mathbf{R}$ be a smooth map. Show that graph $(f)$ is a regular embedded surface, see Exercise 4. Exercise 8. Show that $\mathbf{S}^{2}$ is a regular embedded surface (Hint: (Method 1) Let $p \in \mathbf{S}^{2}$. Then $p^{1}, p^{2}$, and $p^{3}$ cannot vanish simultaneously. Suppose, for instance, that $p^{3} \neq 0$. Then, we may set $U:=\left\{u \in \mathbf{R}^{2} \mid\|u\|<1\right\}$, and let $X\left(u^{1}, u^{2}\right):=\left(u^{1}, u^{2}, \pm \sqrt{1-\left(u^{1}\right)^{2}-\left(u^{2}\right)^{2}}\right)$ depending on whether $p^{3}$ is positive or negative. The other cases involving $p^{1}$ and $p^{2}$ may be treated similarly. (Method 2) Write the inverse of the stereographic projection, see Exercise 2, and show that it is a regular map). The following exercise shows that smoothness of a patch is not sufficient to ensure that the corresponding surface is without singularities (sharp edges or corners). Thus the regularity condition imposed in the definition of a regular embedded surface is not superfluous. Exercise 9. Let $M \subset \mathbf{R}^{3}$ be the graph of the function $f(x, y)=|x|$. Sketch this surface, and show that there exists a smooth one-to-one map $X: \mathbf{R}^{2} \rightarrow$ $\mathbf{R}^{3}$ such that $X\left(\mathbf{R}^{2}\right)=M$ (Hint: Let $X(x, y):=\left(e^{-1 / x^{2}}, y, e^{-1 / x^{2}}\right)$, if $x>0$; $X(x, y):=\left(-e^{-1 / x^{2}}, y, e^{-1 / x^{2}}\right)$, if $x<0$; and, $X(x, y):=(0, y, 0)$, if $\left.x=0\right)$. 
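The point of Exercise 9 can be verified numerically: the patch above is smooth and one-to-one, yet the regularity criterion of Exercise 5, $\left\|D_{1} X \times D_{2} X\right\| \neq 0$, fails along the edge $x=0$. A sketch (the difference step is an arbitrary choice, and I take $X(0, y):=(0, y, 0)$ so that the map is onto the graph of $|x|$):

```python
import math

def Xsing(x, y):
    # Smooth, one-to-one parametrization of the graph of z = |x| (Exercise 9),
    # with the convention X(0, y) = (0, y, 0).
    if x == 0:
        return (0.0, y, 0.0)
    b = math.exp(-1.0 / x**2)
    return (math.copysign(b, x), y, b)

def cross_norm(X, x, y, h=1e-2):
    # ||D1 X x D2 X|| via central differences: the regularity test of Exercise 5.
    D1 = [(a - b) / (2 * h) for a, b in zip(X(x + h, y), X(x - h, y))]
    D2 = [(a - b) / (2 * h) for a, b in zip(X(x, y + h), X(x, y - h))]
    c = (D1[1] * D2[2] - D1[2] * D2[1],
         D1[2] * D2[0] - D1[0] * D2[2],
         D1[0] * D2[1] - D1[1] * D2[0])
    return math.sqrt(sum(v * v for v in c))

n_at_edge = cross_norm(Xsing, 0.0, 0.0)
n_away = cross_norm(Xsing, 1.0, 0.0)
print(n_at_edge, n_away)  # ≈ 0 at the edge; > 0 away from it
```

All derivatives of $e^{-1/x^{2}}$ vanish as $x \rightarrow 0$, so $D_{1} X$ collapses to the zero vector along the edge, which is exactly why the sharp crease survives despite the smooth parametrization.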
The following exercise demonstrates the significance of the requirement in the definition of a regular embedded surface that $X^{-1}$ be continuous. Exercise 10. Let $U:=\left\{(u, v) \in \mathbf{R}^{2} \mid-\pi<u<\pi, 0<v<1\right\}$, define $X: U \rightarrow \mathbf{R}^{3}$ by $X(u, v):=(\sin (u), \sin (2 u), v)$, and set $M:=X(U)$. Sketch $M$ and show that $X$ is smooth, one-to-one, and regular, but $X^{-1}$ is not continuous. Exercise 11 (Surfaces of Revolution). Let $\alpha: I \rightarrow \mathbf{R}^{2}, \alpha(t)=(x(t), y(t))$, be a regular simple closed curve with $x(t)>0$. Show that the image of $X: I \times \mathbf{R} \rightarrow \mathbf{R}^{3}$ given by $$ X(t, \theta):=(x(t) \cos \theta, x(t) \sin \theta, y(t)), $$ is a regular embedded surface. ## Lecture Notes 9 ### Definition of Gaussian Curvature Let $M \subset \mathbf{R}^{3}$ be a regular embedded surface, as we defined in the previous lecture, and let $p \in M$. By the tangent space of $M$ at $p$, denoted by $T_{p} M$, we mean the set of all vectors $v$ in $\mathbf{R}^{3}$ for which there exists a smooth curve $\gamma:(-\epsilon, \epsilon) \rightarrow M$ with $\gamma(0)=p$ and $\gamma^{\prime}(0)=v$. Exercise 1. Let $H \subset \mathbf{R}^{3}$ be a plane. Show that, for all $p \in H, T_{p} H$ is the plane parallel to $H$ which passes through the origin. Exercise 2. Prove that, for all $p \in M, T_{p} M$ is a 2-dimensional vector subspace of $\mathbf{R}^{3}$ (Hint: Let $(U, X)$ be a proper regular patch centered at $p$, i.e., $X(0,0)=p$. Recall that $d X_{(0,0)}$ is a linear map and has rank 2. Thus it suffices to show that $\left.T_{p} M=d X_{(0,0)}\left(\mathbf{R}^{2}\right)\right)$. Exercise 3. Prove that $D_{1} X(0,0)$ and $D_{2} X(0,0)$ form a basis for $T_{p} M$ (Hint: Show that $D_{1} X(0,0)=d X_{(0,0)}(1,0)$ and $\left.D_{2} X(0,0)=d X_{(0,0)}(0,1)\right)$. 
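Exercises 2 and 3 can be illustrated numerically for the sphere: since $\|X\|$ is constant along any patch of $\mathbf{S}^{2}$, both partial derivatives are orthogonal to the position vector $p$, so they lie in (and, being independent, span) $T_{p} \mathbf{S}^{2}=p^{\perp}$. A sketch using a spherical-coordinate patch of my own choosing:

```python
import math

def X(u, v):
    # A spherical-coordinate patch of the unit sphere S^2.
    return (math.cos(u) * math.sin(v), math.sin(u) * math.sin(v), math.cos(v))

def partials(u, v, h=1e-6):
    # D1 X and D2 X by central differences.
    D1 = [(a - b) / (2 * h) for a, b in zip(X(u + h, v), X(u - h, v))]
    D2 = [(a - b) / (2 * h) for a, b in zip(X(u, v + h), X(u, v - h))]
    return D1, D2

u, v = 0.7, 1.1            # an arbitrary interior point of the patch
p = X(u, v)
D1, D2 = partials(u, v)

# Both partials are orthogonal to p, hence lie in T_p S^2 = p-perp,
# and they are linearly independent, so they form a basis of the tangent plane.
d1p = sum(a * b for a, b in zip(D1, p))
d2p = sum(a * b for a, b in zip(D2, p))
print(d1p, d2p)  # ≈ 0, 0
```

The same check applied to any curve $\gamma$ in $\mathbf{S}^{2}$ shows $\langle\gamma^{\prime}(0), p\rangle=0$, matching the curve-based definition of $T_{p} M$ above.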
By a local gauss map of $M$ centered at $p$ we mean a pair $(V, n)$ where $V$ is an open neighborhood of $p$ in $M$ and $n: V \rightarrow \mathbf{S}^{2}$ is a continuous mapping such that $n(q)$ is orthogonal to $T_{q} M$ for all $q \in V$. For a more explicit formulation, let $(U, X)$ be a proper regular patch centered at $p$, and define $N: U \rightarrow \mathbf{S}^{2}$ by $$ N\left(u_{1}, u_{2}\right):=\frac{D_{1} X\left(u_{1}, u_{2}\right) \times D_{2} X\left(u_{1}, u_{2}\right)}{\left\|D_{1} X\left(u_{1}, u_{2}\right) \times D_{2} X\left(u_{1}, u_{2}\right)\right\|} . $$ Set $V:=X(U)$, and recall that, since $(U, X)$ is proper, $V$ is open in $M$. Now define $n: V \rightarrow \mathbf{S}^{2}$ by $$ n(p):=N \circ X^{-1}(p) . $$ ${ }^{1}$ Last revised: December 11, 2021 Exercise 4. Check that $(V, n)$ is indeed a local gauss map. Exercise 5. Show that $n: \mathbf{S}^{2} \rightarrow \mathbf{S}^{2}$, given by $n(p):=p$ is a gauss map (Hint: Define $f: \mathbf{R}^{3} \rightarrow \mathbf{R}$ by $f(p):=\|p\|^{2}$ and compute its gradient. Note that $\mathbf{S}^{2}$ is a level set of $f$. Thus the gradient of $f$ at $p$ must be orthogonal to $\left.\mathbf{S}^{2}\right)$. Let $M_{1}$ and $M_{2}$ be regular embedded surfaces in $\mathbf{R}^{3}$ and $f: M_{1} \rightarrow M_{2}$ be a smooth map (recall that this means that $f$ may be extended to a smooth map in an open neighborhood of $M_{1}$ in $\mathbf{R}^{3}$ ). Then for every $p \in M_{1}$, we define a mapping $d f_{p}: T_{p} M_{1} \rightarrow T_{f(p)} M_{2}$, known as the differential of $f$ at $p$ as follows. Let $v \in T_{p} M_{1}$ and let $\gamma_{v}:(-\epsilon, \epsilon) \rightarrow M_{1}$ be a curve such that $\gamma_{v}(0)=p$ and $\gamma_{v}^{\prime}(0)=v$. Then we set $$ d f_{p}(v):=\left(f \circ \gamma_{v}\right)^{\prime}(0) $$ Exercise 6. Prove that $d f_{p}$ is well defined (i.e. 
is independent of the smooth extension) and linear (Hint: Let $\tilde{f}$ be a smooth extension of $f$ to an open neighborhood of $M$. Then $d \tilde{f}_{p}$ is well defined. Show that for all $v \in T_{p} M$, $d f_{p}(v)=d \tilde{f}_{p}(v)$.) Let $(V, n)$ be a local gauss map centered at $p \in M$. Then the shape operator of $M$ at $p$ with respect to $n$ is defined as $$ S_{p}:=-d n_{p} $$ Note that the shape operator is determined up to two choices depending on the local gauss map, i.e., replacing $n$ by $-n$ switches the sign of the shape operator. Exercise 7. Show that $S_{p}$ may be viewed as a linear operator on $T_{p} M$ (Hint: By definition, $S_{p}$ is a linear map from $T_{p} M$ to $T_{n(p)} \mathbf{S}^{2}$. Thus it suffices to show that $T_{p} M$ and $T_{n(p)} \mathbf{S}^{2}$ coincide). Exercise 8. A subset $V$ of $M$ is said to be connected if any pairs of points $p$ and $q$ in $V$ may be joined by a curve in $V$. Suppose that $V$ is a connected open subset of $M$, and, furthermore, suppose that the shape operator vanishes throughout $V$, i.e., for every $p \in V$ and $v \in T_{p} M, S_{p}(v)=0$. Show then that $V$ must be flat, i.e., it is a part of a plane (Hint: It is enough to show that the gauss map is constant on $V$; or, equivalently, $n(p)=n(q)$ for all pairs of points $p$ and $q$ in $V$. Since $V$ is connected, there exists a curve $\gamma:[0,1] \rightarrow V$ with $\gamma(0)=p$ and $\gamma(1)=q$. Furthermore, since $V$ is open, we may choose $\gamma$ to be smooth as well. Define $f:[0,1] \rightarrow \mathbf{R}^{3}$ by $f(t):=n \circ \gamma(t)$, and differentiate. Then $f^{\prime}(t)=d n_{\gamma(t)}\left(\gamma^{\prime}(t)\right)=0$. Justify the last step and conclude that $n(p)=n(q)$.) Exercise 9. Compute the shape operator of a sphere of radius $r$ (Hint: Define $\pi: \mathbf{R}^{3}-\{0\} \rightarrow \mathbf{S}^{2}$ by $\pi(x):=x /\|x\|$. Note that $\pi$ is a smooth mapping and $\pi=n$ on $\mathbf{S}^{2}$. 
Thus, for any $\left.v \in T_{p} \mathbf{S}^{2}, d \pi_{p}(v)=d n_{p}(v)\right)$. The Gaussian curvature of $M$ at $p$ is defined as the determinant of the shape operator: $$ K(p):=\operatorname{det}\left(S_{p}\right) $$ Exercise 10. Show that $K(p)$ does not depend on the choice of the local gauss map, i.e., replacing $n$ by $-n$ does not affect the value of $K(p)$. Exercise 11. Compute the curvature of a sphere of radius $r$ (Hint: Use Exercise 9). Next we derive an explicit formula for $K(p)$ in terms of local coordinates. Let $(U, X)$ be a proper regular patch centered at $p$. For $1 \leqslant i, j \leqslant 2$, define the functions $g_{i j}: U \rightarrow \mathbf{R}$ by $$ g_{i j}\left(u_{1}, u_{2}\right):=\left\langle D_{i} X\left(u_{1}, u_{2}\right), D_{j} X\left(u_{1}, u_{2}\right)\right\rangle . $$ Note that $g_{12}=g_{21}$. Thus the above expression defines three functions. These are called the coefficients of the first fundamental form (a.k.a. the metric tensor) with respect to the given patch $(U, X)$. In the classical notation, these functions are denoted by $E, F$, and $G\left(E:=g_{11}, F:=g_{12}\right.$, and $\left.G:=g_{22}\right)$. Next, define $l_{i j}: U \rightarrow \mathbf{R}$ by $$ l_{i j}\left(u_{1}, u_{2}\right):=\left\langle D_{i j} X\left(u_{1}, u_{2}\right), N\left(u_{1}, u_{2}\right)\right\rangle . $$ Thus $l_{i j}$ is a measure of the second derivatives of $X$ in a normal direction. $l_{i j}$ are known as the coefficients of the second fundamental form of $M$ with respect to the local patch $(U, X)$ (the classical notation for these functions is $L:=l_{11}, M:=l_{12}$, and $\left.N:=l_{22}\right)$. We claim that $$ K(p)=\frac{\operatorname{det}\left(l_{i j}(0,0)\right)}{\operatorname{det}\left(g_{i j}(0,0)\right)} . $$ To see the above, recall that $e_{i}(p):=D_{i} X\left(X^{-1}(p)\right)$ form a basis for $T_{p} M$. Thus, since $S_{p}$ is linear, $S_{p}\left(e_{i}\right)=\sum_{j=1}^{2} S_{i j} e_{j}$. 
This yields that $\left\langle S_{p}\left(e_{i}\right), e_{k}\right\rangle=$ $\sum_{j=1}^{2} S_{i j} g_{j k}$. It can be shown that $$ \left\langle S_{p}\left(e_{i}\right), e_{k}\right\rangle=l_{i k} $$ see the exercise below. Then we have $\left[l_{i j}\right]=\left[S_{i j}\right]\left[g_{i j}\right]$, where the symbol $[\cdot]$ denotes the matrix with the given coefficients. Thus we can write $\left[S_{i j}\right]=$ $\left[g_{i j}\right]^{-1}\left[l_{i j}\right]$ which yields the desired result. Exercise 12. Show that $\left\langle S_{p}\left(e_{i}(p)\right), e_{j}(p)\right\rangle=l_{i j}(0,0)$ (Hints: First note that $\left\langle n(p), e_{j}(p)\right\rangle=0$ for all $p \in V$. Let $\gamma:(-\epsilon, \epsilon) \rightarrow M$ be a curve with $\gamma(0)=p$ and $\gamma^{\prime}(0)=e_{i}(p)$. Define $f:(-\epsilon, \epsilon) \rightarrow M$ by $f(t):=\left\langle n(\gamma(t)), e_{j}(\gamma(t))\right\rangle$, and compute $f^{\prime}(0)$.) Exercise 13. Compute the Gaussian curvature of a surface of revolution, i.e., the surface covered by the patch $$ X(t, \theta)=(x(t) \cos \theta, x(t) \sin \theta, y(t)) $$ Next, letting $$ (x(t), y(t))=(R+r \cos t, r \sin t), $$ i.e., a circle of radius $r$ centered at $(R, 0)$, compute the curvature of a torus of revolution. Sketch the torus and indicate the regions where the curvature is positive, negative, or zero. Exercise 14. Let $(U, X)$ be a Monge patch, i.e., $$ X\left(u_{1}, u_{2}\right):=\left(u_{1}, u_{2}, f\left(u_{1}, u_{2}\right)\right) $$ centered at $p \in M$. Show that $$ K(p):=\frac{\operatorname{det}(\operatorname{Hess} f(0,0))}{\left(1+\|\operatorname{grad} f(0,0)\|^{2}\right)^{2}}, $$ where Hess $f:=\left[D_{i j} f\right]$ is the Hessian matrix of $f$ and grad $f$ is its gradient. Exercise 15. Compute the curvature of the graph of $z=a x^{2}+b y^{2}$, where $a$ and $b$ are constants. Note how the signs of $a$ and $b$ affect the curvature and shape of the surface. 
Also note the values of $a$ and $b$ for which the curvature is zero. ## Lecture Notes 10 ### Meaning of Gaussian Curvature In the previous lecture we gave a formal definition for Gaussian curvature $K$ in terms of the differential of the gauss map, and also derived explicit formulas for $K$ in local coordinates. In this lecture we explore the geometric meaning of $K$. #### A measure for local convexity Let $M \subset \mathbf{R}^{3}$ be a regular embedded surface, $p \in M$, and $H_{p}$ be the plane passing through $p$ which is parallel to $T_{p} M$. We say that $M$ is locally convex at $p$ if there exists an open neighborhood $V$ of $p$ in $M$ such that $V$ lies on one side of $H_{p}$. In this section we prove: Theorem 1. If $K(p)>0$ then $M$ is locally convex at $p$, and if $K(p)<0$ then $M$ is not locally convex at $p$. When $K(p)=0$, we cannot in general draw any conclusion with regard to the local convexity of $M$ at $p$, as the following two exercises demonstrate: Exercise 2. Show that there exists a surface $M$ and a point $p \in M$ such that $M$ is strictly locally convex at $p$; however, $K(p)=0$ (Hint: Let $M$ be the graph of the equation $z=\left(x^{2}+y^{2}\right)^{2}$. Then $M$ may be covered by the Monge patch $X\left(u^{1}, u^{2}\right):=\left(u^{1}, u^{2},\left(\left(u^{1}\right)^{2}+\left(u^{2}\right)^{2}\right)^{2}\right)$. Use the Monge-Ampere equation derived in the previous lecture to compute the curvature at $X(0,0)$.). Exercise 3. Let $M$ be the Monkey saddle, i.e., the graph of the equation $z=y^{3}-3 y x^{2}$, and $p:=(0,0,0)$. Show that $K(p)=0$, but $M$ is not locally convex at $p$. ${ }^{1}$ Last revised: October 25, 2021 After a rigid motion we may assume that $p=(0,0,0)$ and $T_{p} M$ is the $x y$-plane. Then, using the inverse function theorem, it is easy to show that there exists a Monge Patch $(U, X)$ centered at $p$, as the following exercise demonstrates: Exercise 4. 
Define $\pi: M \rightarrow \mathbf{R}^{2}$ by $\pi(q):=\left(q^{1}, q^{2}\right)$. Show that $d \pi_{p}$ is one-to-one. Then, by the inverse function theorem, it follows that $\pi$ is a local diffeomorphism. So there exists a neighborhood $U$ of $(0,0)$ such that $\pi^{-1}: U \rightarrow M$ is one-to-one and smooth. Let $f\left(u^{1}, u^{2}\right)$ denote the $z$ coordinate of $\pi^{-1}\left(u^{1}, u^{2}\right)$, and set $X\left(u^{1}, u^{2}\right):=\left(u^{1}, u^{2}, f\left(u^{1}, u^{2}\right)\right)$. Show that $(U, X)$ is a proper regular patch. The previous exercise shows that local convexity of $M$ at $p$ depends on whether or not $f$ changes sign in a neighborhood of the origin. To examine this we need to recall Taylor's formula for functions of two variables: $$ f\left(u^{1}, u^{2}\right)=f(0,0)+\sum_{i=1}^{2} D_{i} f(0,0) u^{i}+\frac{1}{2} \sum_{i, j=1}^{2} D_{i j} f\left(\xi^{1}, \xi^{2}\right) u^{i} u^{j}, $$ where $\left(\xi^{1}, \xi^{2}\right)$ is a point on the line connecting $\left(u^{1}, u^{2}\right)$ to $(0,0)$. Exercise 5. Prove Taylor's formula given above. (Hints: First recall Taylor's formula for functions of one variable: $g(t)=g(0)+g^{\prime}(0) t+$ $(1 / 2) g^{\prime \prime}(s) t^{2}$, where $s \in[0, t]$. Then define $\gamma(t):=\left(t u^{1}, t u^{2}\right)$, set $g(t):=$ $f(\gamma(t))$, and apply Taylor's formula to $g$. Then the chain rule will yield the desired result.) Next note that, by construction, $f(0,0)=0$. Further $D_{1} f(0,0)=0=$ $D_{2} f(0,0)$ as well, since $T_{p} M$ is the $x y$-plane. Thus $$ f\left(u^{1}, u^{2}\right)=\frac{1}{2} \sum_{i, j=1}^{2} D_{i j} f\left(\xi^{1}, \xi^{2}\right) u^{i} u^{j} . $$ Hence to complete the proof of Theorem 1, it remains to show how the quantity on the right hand side of the above equation is influenced by $K(p)$. 
To this end, recall the Monge-Ampere equation for curvature: $$ \operatorname{det}\left(\operatorname{Hess} f\left(\xi^{1}, \xi^{2}\right)\right)=K\left(X\left(\xi^{1}, \xi^{2}\right)\right)\left(1+\left\|\operatorname{grad} f\left(\xi^{1}, \xi^{2}\right)\right\|^{2}\right)^{2} . $$ Now note that $K(X(0,0))=K(p)$. Thus, by continuity, if $U$ is a sufficiently small neighborhood of $(0,0)$, the sign of $\operatorname{det}($ Hess $f)$ agrees with the sign of $K(p)$ throughout $U$. Finally, we need some basic facts about quadratic forms. A quadratic form is a function of two variables $Q: \mathbf{R}^{2} \rightarrow \mathbf{R}$ given by $$ Q(x, y)=a x^{2}+2 b x y+c y^{2} $$ where $a, b$, and $c$ are constants. $Q$ is said to be definite if $Q(x, y) \neq 0$ whenever $(x, y) \neq(0,0)$. Exercise 6. Show that if $a c-b^{2}>0$, then $Q$ is definite, and if $a c-b^{2}<0$, then $Q$ is not definite. (Hints: For the first part, suppose that $y \neq 0$, but $Q(x, y)=0$. Then $a x^{2}+2 b x y+c y^{2}=0$, which yields $a+2 b(x / y)+c(x / y)^{2}=0$. Thus the discriminant of this equation must be positive, which will yield a contradiction. The proof of the second part is similar). Theorem 1 follows from the above exercise. #### Ratio of areas In the previous subsection we gave a geometric interpretation for the sign of Gaussian curvature. Here we describe the geometric significance of the magnitude of $K$. If $V$ is a sufficiently small neighborhood of $p$ in $M$ (where $M$, as always, denotes a regular embedded surface in $\mathbf{R}^{3}$ ), then it is easy to show that there exists a patch $(U, X)$ centered at $p$ such that $X(U)=V$. The area of $V$ is then defined as follows: $$ \operatorname{Area}(V):=\iint_{U}\left\|D_{1} X \times D_{2} X\right\| d u^{1} d u^{2} $$ Using the chain rule, one can show that the above definition is independent of the patch. Exercise 7. 
Let $V \subset \mathbf{S}^{2}$ be a region bounded in between a pair of great circles meeting each other at an angle of $\alpha$. Show that $\operatorname{Area}(V)=2 \alpha$ (Hints: Let $U:=[0, \alpha] \times[0, \pi]$ and $X(\theta, \phi):=(\cos \theta \sin \phi, \sin \theta \sin \phi, \cos \phi)$. Show that $\left\|D_{1} X \times D_{2} X\right\|=|\sin \phi|$. Further, note that, after a rotation we may assume that $X(U)=V$. Then an integration will yield the desired result). Exercise 8. Use the previous exercise to show that the area of a geodesic triangle $T \subset \mathbf{S}^{2}$ (a region bounded by three great circles) is equal to the sum of its angles minus $\pi$ (Hints: Consider the hemisphere containing $T$ bounded by the great circle of one of its sides; the other two great circles cut this hemisphere into four regions $A$, $B$, $C$, and $T$, so $A+B+C+T=2 \pi$; further, $A=2 \alpha-T$, $B=2 \beta-T$, and $C=2 \gamma-T$). Let $V_{r}:=B_{r}(p) \cap M$. Then, if $r$ is sufficiently small, $V_{r} \subset X(U)$, and, consequently, $U_{r}:=X^{-1}\left(V_{r}\right)$ is well defined. In particular, we may compute the area of $V_{r}$ using the patch $\left(U_{r}, X\right)$. In this section we show that $$ |K(p)|=\lim _{r \rightarrow 0} \frac{\operatorname{Area}\left(n\left(V_{r}\right)\right)}{\operatorname{Area}\left(V_{r}\right)} $$ Exercise 9. Recall that the mean value theorem states that $\iint_{U} f d u^{1} d u^{2}= f\left(\bar{u}^{1}, \bar{u}^{2}\right) \operatorname{Area}(U)$, for some $\left(\bar{u}^{1}, \bar{u}^{2}\right) \in U$. Use this theorem to show that $$ \lim _{r \rightarrow 0} \frac{\operatorname{Area}\left(n\left(V_{r}\right)\right)}{\operatorname{Area}\left(V_{r}\right)}=\frac{\left\|D_{1} N(0,0) \times D_{2} N(0,0)\right\|}{\left\|D_{1} X(0,0) \times D_{2} X(0,0)\right\|} $$ (Recall that $N:=n \circ X$.) Exercise 10. Prove Lagrange's identity: for every pair of vectors $v, w \in \mathbf{R}^{3}$, $$ \|v \times w\|^{2}=\operatorname{det}\left(\begin{array}{cc} \langle v, v\rangle & \langle v, w\rangle \\ \langle w, v\rangle & \langle w, w\rangle \end{array}\right) . 
$$ Now set $g\left(u^{1}, u^{2}\right):=\operatorname{det}\left[g_{i j}\left(u^{1}, u^{2}\right)\right]$. Then, by the previous exercise it follows that $\left\|D_{1} X(0,0) \times D_{2} X(0,0)\right\|=\sqrt{g(0,0)}$. Hence, to complete the proof of the main result of this section it remains to show that $$ \left\|D_{1} N(0,0) \times D_{2} N(0,0)\right\|=K(p) \sqrt{g(0,0)} . $$ We prove the above formula using two different methods. METHOD 1. Recall that $K(p):=\operatorname{det}\left(S_{p}\right)$, where $S_{p}:=-d n_{p}: T_{p} M \rightarrow T_{p} M$ is the shape operator of $M$ at $p$. Also recall that $D_{i} X(0,0), i=1,2$, form a basis for $T_{p} M$. Let $S_{i j}$ be the coefficients of the matrix representation of $S_{p}$ with respect to this basis, so that $$ S_{p}\left(D_{i} X\right)=\sum_{j=1}^{2} S_{i j} D_{j} X . $$ Further, recall that $N:=n \circ X$. Thus the chain rule yields: $$ S_{p}\left(D_{i} X\right)=-d n\left(D_{i} X\right)=-D_{i}(n \circ X)=-D_{i} N . $$ Exercise 11. Verify the middle step in the above formula, i.e., show that $d n\left(D_{i} X\right)=D_{i}(n \circ X)$. From the previous two lines of formulas, it now follows that $$ -D_{i} N=\sum_{j=1}^{2} S_{i j} D_{j} X . $$ Taking the inner product of both sides with $D_{k} N, k=1,2$, we get $$ \left\langle-D_{i} N, D_{k} N\right\rangle=\sum_{j=1}^{2} S_{i j}\left\langle D_{j} X, D_{k} N\right\rangle . $$ Exercise 12. Let $F, G: U \subset \mathbf{R}^{2} \rightarrow \mathbf{R}^{3}$ be a pair of mappings such that $\langle F, G\rangle=0$. Prove that $\left\langle D_{i} F, G\right\rangle=-\left\langle F, D_{i} G\right\rangle$. Now recall that $\left\langle D_{j} X, N\right\rangle=0$. Hence the previous exercise yields: $$ \left\langle D_{j} X, D_{k} N\right\rangle=-\left\langle D_{k j} X, N\right\rangle=-l_{j k} . 
$$ Combining the previous two lines of formulas, we get $\left\langle D_{i} N, D_{k} N\right\rangle=\sum_{j=1}^{2} S_{i j} l_{j k}$, which in matrix notation is equivalent to $$ \left[\left\langle D_{i} N, D_{j} N\right\rangle\right]=\left[S_{i j}\right]\left[l_{i j}\right] . $$ Finally, recall that $\operatorname{det}\left[\left\langle D_{i} N, D_{j} N\right\rangle\right]=\left\|D_{1} N \times D_{2} N\right\|^{2}, \operatorname{det}\left[S_{i j}\right]=K$, and $\operatorname{det}\left[l_{i j}\right]=K g$. Hence taking the determinant of both sides in the above equation, and then taking the square root yields the desired result. Next, we discuss the second method for proving that $\left\|D_{1} N \times D_{2} N\right\|= K \sqrt{g}$. METHOD 2. Here we work with a special patch which makes the computations easier: Exercise 13. Show that there exists a patch $(U, X)$ centered at $p$ such that $\left[g_{i j}(0,0)\right]$ is the identity matrix. (Hint: Start with a Monge patch with respect to $T_{p} M$.) Thus, if we are working with the coordinate patch referred to in the above exercise, $g(0,0)=1$, and, consequently, all we need is to prove that $\left\|D_{1} N(0,0) \times D_{2} N(0,0)\right\|=K(p)$. Exercise 14. Let $f: U \subset \mathbf{R}^{2} \rightarrow \mathbf{S}^{2}$ be a differentiable mapping. Show that $\left\langle D_{i} f\left(u^{1}, u^{2}\right), f\left(u^{1}, u^{2}\right)\right\rangle=0$ (Hints: note that $\langle f, f\rangle=1$ and differentiate). It follows from the previous exercise that $\left\langle D_{i} N, N\right\rangle=0$. Now recall that $N(0,0)=n \circ X(0,0)=n(p)$. Hence, we may conclude that $D_{i} N(0,0) \in T_{p} M$. Further recall that $\left\{D_{1} X(0,0), D_{2} X(0,0)\right\}$ is now an orthonormal basis for $T_{p} M$ (because we have chosen $(U, X)$ so that $\left[g_{i j}(0,0)\right]$ is the identity matrix). 
Consequently, $$ D_{i} N=\sum_{k=1}^{2}\left\langle D_{i} N, D_{k} X\right\rangle D_{k} X, $$ where we have omitted the explicit reference to the point $(0,0)$ in the above formula in order to make the notation less cumbersome (it is important to keep in mind, however, that the above is valid only at $(0,0))$. Taking the inner product of both sides of this equation with $D_{j} N(0,0)$ yields: $$ \left\langle D_{i} N, D_{j} N\right\rangle=\sum_{k=1}^{2}\left\langle D_{i} N, D_{k} X\right\rangle\left\langle D_{k} X, D_{j} N\right\rangle . $$ Now recall that $\left\langle D_{i} N, D_{k} X\right\rangle=-\left\langle N, D_{i k} X\right\rangle=-l_{i k}$. Similarly, $\left\langle D_{k} X, D_{j} N\right\rangle= -l_{k j}$. Thus, in matrix notation, the above formula is equivalent to the following: $$ \left[\left\langle D_{i} N, D_{j} N\right\rangle\right]=\left[l_{i j}\right]^{2} $$ Finally, recall that $K(p)=\operatorname{det}\left[l_{i j}(0,0)\right] / \operatorname{det}\left[g_{i j}(0,0)\right]=\operatorname{det}\left[l_{i j}(0,0)\right]$. Hence, taking the determinant of both sides of the above equation yields the desired result. #### Product of principal curvatures For every $v \in T_{p} M$ with $\|v\|=1$ we define the normal curvature of $M$ at $p$ in the direction of $v$ by $$ k_{v}(p):=\left\langle\gamma^{\prime \prime}(0), n(p)\right\rangle $$ where $\gamma:(-\epsilon, \epsilon) \rightarrow M$ is a curve with $\gamma(0)=p$ and $\gamma^{\prime}(0)=v$. Exercise 15. Show that $k_{v}(p)$ does not depend on $\gamma$. In particular, by the above exercise, we may take $\gamma$ to be a curve which lies in the intersection of $M$ with a plane which passes through $p$ and is normal to $n(p) \times v$. So, intuitively, $k_{v}(p)$ is a measure of the curvature of an orthogonal cross section of $M$ at $p$. Let $U T_{p} M:=\left\{v \in T_{p} M \mid\|v\|=1\right\}$ denote the unit tangent space of $M$ at $p$. 
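As a quick sanity check on this definition (not part of the original notes), one can approximate $k_{v}(p)=\left\langle\gamma^{\prime \prime}(0), n(p)\right\rangle$ numerically. The sketch below does this for the unit cylinder $x^{2}+y^{2}=1$ at $p=(1,0,0)$; the choice of the outward normal (which fixes the sign of $k_{v}$) and the two test curves are assumptions made for this illustration.

```python
import math

def second_derivative(gamma, h=1e-4):
    """Central-difference approximation of gamma''(0) for a curve in R^3."""
    gp, g0, gm = gamma(h), gamma(0.0), gamma(-h)
    return tuple((a - 2 * b + c) / h**2 for a, b, c in zip(gp, g0, gm))

def normal_curvature(gamma, n):
    """k_v(p) = <gamma''(0), n(p)> for a curve gamma on the surface."""
    return sum(a * b for a, b in zip(second_derivative(gamma), n))

n_p = (1.0, 0.0, 0.0)  # outward unit normal of the cylinder at p = (1, 0, 0)

# Two curves through p: along the circular cross section (v = (0, 1, 0)),
# and along the straight ruling (v = (0, 0, 1)).
circle = lambda t: (math.cos(t), math.sin(t), 0.0)
ruling = lambda t: (1.0, 0.0, t)

print(normal_curvature(circle, n_p))  # approximately -1.0
print(normal_curvature(ruling, n_p))  # 0.0
```

With the outward normal, the circular direction gives $k_{v}=-1$ (the surface bends away from $n(p)$), while the ruling, being a straight line, gives $0$; every other direction yields a value between these two extremes.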
The principal curvatures of $M$ at $p$ are defined as $$ k_{1}(p):=\min _{v} k_{v}(p), \quad \text { and } \quad k_{2}(p):=\max _{v} k_{v}(p), $$ where $v$ ranges over $U T_{p} M$. Our main aim in this subsection is to show that $$ K(p)=k_{1}(p) k_{2}(p) $$ Since $K(p)$ is the determinant of the shape operator $S_{p}$, to prove the above it suffices to show that $k_{1}(p)$ and $k_{2}(p)$ are the eigenvalues of $S_{p}$. First, we need to define the second fundamental form of $M$ at $p$. This is a bilinear map $\mathrm{II}_{p}: T_{p} M \times T_{p} M \rightarrow \mathbf{R}$ defined by $$ \mathrm{II}_{p}(v, w):=\left\langle S_{p}(v), w\right\rangle $$ We claim that, for all $v \in U T_{p} M$, $$ k_{v}(p)=\mathrm{II}_{p}(v, v) . $$ The above follows from the following computation: $$ \begin{aligned} \left\langle S_{p}(v), v\right\rangle & =-\left\langle d n_{p}(v), v\right\rangle \\ & =-\left\langle(n \circ \gamma)^{\prime}(0), \gamma^{\prime}(0)\right\rangle \\ & =\left\langle(n \circ \gamma)(0), \gamma^{\prime \prime}(0)\right\rangle \\ & =\left\langle n(p), \gamma^{\prime \prime}(0)\right\rangle \end{aligned} $$ Exercise 16. Verify the passage from the second to the third line in the above computation, i.e., show that $-\left\langle(n \circ \gamma)^{\prime}(0), \gamma^{\prime}(0)\right\rangle=\left\langle(n \circ \gamma)(0), \gamma^{\prime \prime}(0)\right\rangle$ (Hint: Set $f(t):=\left\langle n(\gamma(t)), \gamma^{\prime}(t)\right\rangle$, note that $f(t)=0$, and differentiate.) So we conclude that $k_{i}(p)$ are the minimum and maximum of $\mathrm{II}_{p}(v, v)$ over $U T_{p} M$. Hence, all we need is to show that the extrema of $\mathrm{II}_{p}$ over $U T_{p} M$ coincide with the eigenvalues of $S_{p}$. Exercise 17. Show that $\mathrm{II}_{p}$ is symmetric, i.e., $\mathrm{II}_{p}(v, w)=\mathrm{II}_{p}(w, v)$ for all $v$, 
$w \in T_{p} M$. By the above exercise, $S_{p}$ is a self-adjoint operator, i.e., $\left\langle S_{p}(v), w\right\rangle= \left\langle v, S_{p}(w)\right\rangle$. Hence $S_{p}$ is orthogonally diagonalizable, i.e., there exist orthonormal vectors $e_{i} \in T_{p} M, i=1,2$, such that $$ S_{p}\left(e_{i}\right)=\lambda_{i} e_{i} $$ By convention, we suppose that $\lambda_{1} \leq \lambda_{2}$. Now note that each $v \in U T_{p} M$ may be represented uniquely as $v=v^{1} e_{1}+v^{2} e_{2}$ where $\left(v^{1}\right)^{2}+\left(v^{2}\right)^{2}=1$. So for each $v \in U T_{p} M$ there exists a unique angle $\theta \in[0,2 \pi)$ such that $$ v=v(\theta):=\cos \theta e_{1}+\sin \theta e_{2} $$ Consequently, bilinearity of $\mathrm{II}_{p}$ yields $$ \mathrm{II}_{p}(v(\theta), v(\theta))=\lambda_{1} \cos ^{2} \theta+\lambda_{2} \sin ^{2} \theta . $$ Exercise 18. Verify the above claim, and show that the minimum and maximum values of $\mathrm{II}_{p}$ are $\lambda_{1}$ and $\lambda_{2}$ respectively. Thus $k_{1}(p)=\lambda_{1}$, and $k_{2}(p)=\lambda_{2}$. The previous exercise completes the proof that $K(p)=k_{1}(p) k_{2}(p)$, and also yields the following formula, which was discovered by Euler: $$ k_{v}(p)=k_{1}(p) \cos ^{2} \theta+k_{2}(p) \sin ^{2} \theta . $$ In particular, note that by the above formula there always exists a pair of orthogonal directions where $k_{v}(p)$ achieves its maximum and minimum values. These are known as the principal directions of $M$ at $p$. ## Lecture Notes 11 ### Intrinsic Metric and Isometries of Surfaces Let $M \subset \mathbf{R}^{3}$ be a regular embedded surface and $p, q \in M$, then we define $$ \operatorname{dist}_{M}(p, q):=\inf \{\operatorname{Length}[\gamma] \mid \gamma:[0,1] \rightarrow M, \gamma(0)=p, \gamma(1)=q\} $$ Exercise 1. Show that $\left(M, \operatorname{dist}_{M}\right)$ is a metric space. Lemma 2. 
If $M$ is a $C^{1}$ surface, and $X \subset M$ is compact, then for every $\epsilon>0$, there exists $\delta>0$ such that $$ \left|\operatorname{dist}_{M}(p, q)-\|p-q\|\right| \leq \epsilon\|p-q\| $$ for all $p, q \in X$, with $\operatorname{dist}_{M}(p, q) \leq \delta$. Proof. Define $F: M \times M \rightarrow \mathbf{R}$ by $F(p, q):=\operatorname{dist}_{M}(p, q) /\|p-q\|$, if $p \neq q$ and $F(p, q):=1$ otherwise. We claim that $F$ is continuous. To see this let $p_{i}$ be a sequence of points of $M$ which converge to a point $p \in M$. We may assume that $p_{i}$ are contained in a Monge patch of $M$ centered at $p$ given by $$ X\left(u_{1}, u_{2}\right)=\left(u_{1}, u_{2}, h\left(u_{1}, u_{2}\right)\right) . $$ Let $x_{i}$ and $y_{i}$ be the $x$ and $y$ coordinates of $p_{i}$. Given $\epsilon>0$, if $p_{i}$ is sufficiently close to $p=X(0,0)$, then, since $\nabla h(0,0)=0$, we can make sure that $$ \left\|\nabla h\left(t x_{i}, t y_{i}\right)\right\|^{2} \leq \epsilon, $$ for all $t \in[0,1]$. Let $\gamma:[0,1] \rightarrow \mathbf{R}^{3}$ be the curve given by $$ \gamma(t)=\left(t x_{i}, t y_{i}, h\left(t x_{i}, t y_{i}\right)\right) $$ ${ }^{1}$ Last revised: October 27, 2021 Then, since $\gamma$ is a curve on $M$, $$ \begin{aligned} \operatorname{dist}_{M}\left(p, p_{i}\right) & \leq \operatorname{Length}[\gamma] \\ & =\int_{0}^{1} \sqrt{x_{i}^{2}+y_{i}^{2}+\left\langle\nabla h\left(t x_{i}, t y_{i}\right),\left(x_{i}, y_{i}\right)\right\rangle^{2}} d t \\ & \leq \int_{0}^{1} \sqrt{x_{i}^{2}+y_{i}^{2}+\epsilon\left(x_{i}^{2}+y_{i}^{2}\right)} d t \\ & \leq \sqrt{1+\epsilon} \sqrt{x_{i}^{2}+y_{i}^{2}} \\ & \leq(1+\epsilon)\left\|p-p_{i}\right\| \end{aligned} $$ So, for any $\epsilon>0$ we have $$ 1 \leq \frac{\operatorname{dist}_{M}\left(p, p_{i}\right)}{\left\|p-p_{i}\right\|} \leq 1+\epsilon $$ provided that $p_{i}$ is sufficiently close to $p$. We conclude then that $F$ is continuous. 
So $U:=F^{-1}([1,1+\epsilon))$ is an open subset of $M \times M$ (note that $F \geq 1$, since $\operatorname{dist}_{M}(p, q) \geq\|p-q\|$) which contains the diagonal $\Delta_{M}:=\{(p, p) \mid p \in M\}$. Since $\Delta_{X}:=\{(p, p) \mid p \in X\} \subset \Delta_{M}$ is compact, we may then choose $\delta$ so small that $V_{\delta}\left(\Delta_{X}\right) \subset U$, where $V_{\delta}\left(\Delta_{X}\right)$ denotes the neighborhood of $\Delta_{X}$ in $M \times M$ which consists of all pairs of points $(p, q)$, with $p \in X$, such that $\operatorname{dist}_{M}(p, q)<\delta$. Exercise 3. Does the above lemma hold also for $C^{0}$ surfaces? If $\gamma:[a, b] \rightarrow M$ is any curve then we may define $$ \begin{aligned} \operatorname{Length}_{M}[\gamma] & := \\ & \sup \left\{\sum_{i=1}^{k} \operatorname{dist}_{M}\left(\gamma\left(t_{i}\right), \gamma\left(t_{i-1}\right)\right) \mid\left\{t_{0}, \ldots, t_{k}\right\} \in \operatorname{Partition}[a, b]\right\} . \end{aligned} $$ Lemma 4. $\operatorname{Length}_{M}[\gamma]=\operatorname{Length}[\gamma]$. Proof. Note that $$ \operatorname{dist}_{M}\left(\gamma\left(t_{i}\right), \gamma\left(t_{i-1}\right)\right) \geq\left\|\gamma\left(t_{i}\right)-\gamma\left(t_{i-1}\right)\right\| . $$ Thus $\operatorname{Length}_{M}[\gamma] \geq \operatorname{Length}[\gamma]$. Further, by the previous lemma, we can make sure that $$ \operatorname{dist}_{M}\left(\gamma\left(t_{i}\right), \gamma\left(t_{i-1}\right)\right) \leq(1+\epsilon)\left\|\gamma\left(t_{i}\right)-\gamma\left(t_{i-1}\right)\right\|, $$ which yields $\operatorname{Length}_{M}[\gamma] \leq(1+\epsilon) \operatorname{Length}[\gamma]$, for any $\epsilon>0$. We say that $f: M \rightarrow \bar{M}$ is an isometry provided that $$ \operatorname{dist}_{\bar{M}}(f(p), f(q))=\operatorname{dist}_{M}(p, q) $$ for all $p, q \in M$. Lemma 5. $f: M \rightarrow \bar{M}$ is an isometry, if and only if $\operatorname{Length}[\gamma]=\operatorname{Length}[f \circ \gamma]$ for all curves $\gamma:[a, b] \rightarrow M$. Proof. 
If $f$ is an isometry, then, by the previous lemma, $$ \operatorname{Length}[\gamma]=\operatorname{Length}_{M}[\gamma]=\operatorname{Length}_{\bar{M}}[f \circ \gamma]=\operatorname{Length}[f \circ \gamma] . $$ The converse is clear. Exercise 6. Justify the middle equality in the last expression displayed above. Theorem 7. $f: M \rightarrow \bar{M}$ is an isometry if and only if for all $p \in M$, and $v, w \in T_{p} M$, $$ \left\langle d f_{p}(v), d f_{p}(w)\right\rangle=\langle v, w\rangle . $$ Proof. Suppose that $f$ is an isometry. Let $\gamma:(-\epsilon, \epsilon) \rightarrow M$ be a curve with $\gamma(0)=p$, and $\gamma^{\prime}(0)=v$. Then, by the previous lemma $$ \int_{-\epsilon}^{\epsilon}\left\|\gamma^{\prime}(t)\right\| d t=\int_{-\epsilon}^{\epsilon}\left\|(f \circ \gamma)^{\prime}(t)\right\| d t $$ Taking the limit of both sides as $\epsilon \rightarrow 0$ and applying the mean value theorem for integrals then yields that $$ \|v\|=\left\|\gamma^{\prime}(0)\right\|=\left\|(f \circ \gamma)^{\prime}(0)\right\|=\left\|d f_{p}(v)\right\| . $$ Thus $d f_{p}$ preserves the norm, which implies that it must preserve the inner product as well (see the following exercise). Conversely, suppose that $\|v\|=\left\|d f_{p}(v)\right\|$. Then, if $\gamma:[a, b] \rightarrow M$ is any curve, we have $$ \int_{a}^{b}\left\|(f \circ \gamma)^{\prime}(t)\right\| d t=\int_{a}^{b}\left\|d f_{\gamma(t)}\left(\gamma^{\prime}(t)\right)\right\| d t=\int_{a}^{b}\left\|\gamma^{\prime}(t)\right\| d t . $$ So $f$ preserves the length of all curves, which, by the previous lemma, shows that $f$ is an isometry. Exercise 8. Show that a linear function $F: \mathbf{R}^{n} \rightarrow \mathbf{R}^{n}$ preserves the norm $\|\cdot\|$ if and only if it preserves the inner product $\langle\cdot, \cdot\rangle$. ### Gauss's Theorema Egregium Lemma 9. Let $X: U \rightarrow M$ be a proper regular chart. 
Then $\bar{X}:=f \circ X: U \rightarrow \bar{M}$ is a proper regular chart as well and $g_{i j}=\bar{g}_{i j}$ on $U$. Proof. Using the last theorem we have $$ \begin{aligned} \bar{g}_{i j}\left(u_{1}, u_{2}\right) & =\left\langle D_{i} \bar{X}\left(u_{1}, u_{2}\right), D_{j} \bar{X}\left(u_{1}, u_{2}\right)\right\rangle \\ & =\left\langle D_{i}(f \circ X)\left(u_{1}, u_{2}\right), D_{j}(f \circ X)\left(u_{1}, u_{2}\right)\right\rangle \\ & =\left\langle d f_{X\left(u_{1}, u_{2}\right)}\left(D_{i} X\left(u_{1}, u_{2}\right)\right), d f_{X\left(u_{1}, u_{2}\right)}\left(D_{j} X\left(u_{1}, u_{2}\right)\right)\right\rangle \\ & =\left\langle D_{i} X\left(u_{1}, u_{2}\right), D_{j} X\left(u_{1}, u_{2}\right)\right\rangle \\ & =g_{i j}\left(u_{1}, u_{2}\right) . \end{aligned} $$ Exercise 10. Justify the third equality in the last displayed expression above. Let $\mathcal{F}$ denote the set of functions $f: U \rightarrow \mathbf{R}$ where $U \subset \mathbf{R}^{2}$ is an open neighborhood of the origin. Lemma 11. There exists a mapping Brioschi: $\mathcal{F} \times \mathcal{F} \times \mathcal{F} \rightarrow \mathcal{F}$ such that for any chart $X: U \rightarrow M$ centered at $p \in M$, $$ K(p)=\operatorname{Brioschi}\left[g_{11}, g_{12}, g_{22}\right](0,0) . $$ Proof. Recall that $$ K(p)=\frac{\operatorname{det} l_{i j}(0,0)}{\operatorname{det} g_{i j}(0,0)}, $$ and, by Lagrange's identity, $$ l_{i j}=\left\langle X_{i j}, \frac{X_{1} \times X_{2}}{\left\|X_{1} \times X_{2}\right\|}\right\rangle=\frac{1}{\sqrt{\operatorname{det} g_{i j}}}\left\langle X_{i j}, X_{1} \times X_{2}\right\rangle, $$ where $X_{i j}:=D_{i j} X$, and $X_{i}:=D_{i} X$. Thus $$ K(p)=\frac{\operatorname{det}\left(\left\langle X_{i j}(0,0), X_{1}(0,0) \times X_{2}(0,0)\right\rangle\right)}{\left(\operatorname{det} g_{i j}(0,0)\right)^{2}} . 
$$ Next note that $$ \operatorname{det}\left(\left\langle X_{i j}, X_{1} \times X_{2}\right\rangle\right)=\left\langle X_{11}, X_{1} \times X_{2}\right\rangle\left\langle X_{22}, X_{1} \times X_{2}\right\rangle-\left\langle X_{12}, X_{1} \times X_{2}\right\rangle^{2} $$ The right hand side of the last expression may be rewritten as $$ \operatorname{det}\left(X_{11}, X_{1}, X_{2}\right) \operatorname{det}\left(X_{22}, X_{1}, X_{2}\right)-\left(\operatorname{det}\left(X_{12}, X_{1}, X_{2}\right)\right)^{2}, $$ where $(u, v, w)$ here denotes the matrix with columns $u, v$, and $w$. Recall that if $A$ is a square matrix with transpose $A^{T}$, then $\operatorname{det} A=\operatorname{det} A^{T}$. Thus the last expression displayed above is equivalent to $$ \operatorname{det}\left(\left(X_{11}, X_{1}, X_{2}\right)^{T}\left(X_{22}, X_{1}, X_{2}\right)\right)-\operatorname{det}\left(\left(X_{12}, X_{1}, X_{2}\right)^{T}\left(X_{12}, X_{1}, X_{2}\right)\right), $$ which in turn can be written as $$ \begin{aligned} & \operatorname{det}\left(\begin{array}{ccc} \left\langle X_{11}, X_{22}\right\rangle & \left\langle X_{11}, X_{1}\right\rangle & \left\langle X_{11}, X_{2}\right\rangle \\ \left\langle X_{1}, X_{22}\right\rangle & \left\langle X_{1}, X_{1}\right\rangle & \left\langle X_{1}, X_{2}\right\rangle \\ \left\langle X_{2}, X_{22}\right\rangle & \left\langle X_{2}, X_{1}\right\rangle & \left\langle X_{2}, X_{2}\right\rangle \end{array}\right) \\ & -\operatorname{det}\left(\begin{array}{ccc} \left\langle X_{12}, X_{12}\right\rangle & \left\langle X_{12}, X_{1}\right\rangle & \left\langle X_{12}, X_{2}\right\rangle \\ \left\langle X_{1}, X_{12}\right\rangle & \left\langle X_{1}, X_{1}\right\rangle & \left\langle X_{1}, X_{2}\right\rangle \\ \left\langle X_{2}, X_{12}\right\rangle & \left\langle X_{2}, X_{1}\right\rangle & \left\langle X_{2}, X_{2}\right\rangle \end{array}\right) . 
\end{aligned} $$ If we expand the above determinants along their first rows, then $\left\langle X_{11}, X_{22}\right\rangle$ and $\left\langle X_{12}, X_{12}\right\rangle$ will have the same coefficients. This implies that we can rewrite the last expression as $$ \begin{array}{rcc} \operatorname{det}\left(\begin{array}{ccc} \left\langle X_{11}, X_{22}\right\rangle-\left\langle X_{12}, X_{12}\right\rangle & \left\langle X_{11}, X_{1}\right\rangle & \left\langle X_{11}, X_{2}\right\rangle \\ \left\langle X_{1}, X_{22}\right\rangle & \left\langle X_{1}, X_{1}\right\rangle & \left\langle X_{1}, X_{2}\right\rangle \\ \left\langle X_{2}, X_{22}\right\rangle & \left\langle X_{2}, X_{1}\right\rangle & \left\langle X_{2}, X_{2}\right\rangle \end{array}\right) \\ & -\operatorname{det}\left(\begin{array}{ccc} 0 & \left\langle X_{12}, X_{1}\right\rangle & \left\langle X_{12}, X_{2}\right\rangle \\ \left\langle X_{1}, X_{12}\right\rangle & \left\langle X_{1}, X_{1}\right\rangle & \left\langle X_{1}, X_{2}\right\rangle \\ \left\langle X_{2}, X_{12}\right\rangle & \left\langle X_{2}, X_{1}\right\rangle & \left\langle X_{2}, X_{2}\right\rangle \end{array}\right) \end{array} $$ Now note that each of the entries in the above matrices can be expressed purely in terms of $g_{i j}$, since $$ \begin{gathered} \left\langle X_{i i}, X_{j}\right\rangle=\left\langle X_{i}, X_{j}\right\rangle_{i}-\left\langle X_{i}, X_{j i}\right\rangle=\left(g_{i j}\right)_{i}-\frac{1}{2}\left(g_{i i}\right)_{j}, \\ \left\langle X_{i j}, X_{i}\right\rangle=\frac{1}{2}\left\langle X_{i}, X_{i}\right\rangle_{j}=\frac{1}{2}\left(g_{i i}\right)_{j} \end{gathered} $$ and $$ \begin{aligned} \left\langle X_{11}, X_{22}\right\rangle-\left\langle X_{12}, X_{12}\right\rangle & =\left\langle X_{1}, X_{22}\right\rangle_{1}-\left\langle X_{1}, X_{12}\right\rangle_{2} \\ & =\left(g_{21}\right)_{21}-\frac{1}{2}\left(g_{22}\right)_{11}-\frac{1}{2}\left(g_{11}\right)_{22} \end{aligned} $$ Substituting the above values in the 
previous matrices, we define $\operatorname{Brioschi}\left[g_{11}, g_{12}, g_{22}\right]:=$ $$ \begin{array}{r} \frac{1}{\left(\operatorname{det}\left(g_{i j}\right)\right)^{2}}\left(\operatorname{det}\left(\begin{array}{ccc} \left(g_{21}\right)_{21}-\frac{1}{2}\left(g_{22}\right)_{11}-\frac{1}{2}\left(g_{11}\right)_{22} & \frac{1}{2}\left(g_{11}\right)_{1} & \left(g_{12}\right)_{1}-\frac{1}{2}\left(g_{11}\right)_{2} \\ \left(g_{21}\right)_{2}-\frac{1}{2}\left(g_{22}\right)_{1} & g_{11} & g_{12} \\ \frac{1}{2}\left(g_{22}\right)_{2} & g_{21} & g_{22} \end{array}\right)\right. \\ \left.-\operatorname{det}\left(\begin{array}{ccc} 0 & \frac{1}{2}\left(g_{11}\right)_{2} & \frac{1}{2}\left(g_{22}\right)_{1} \\ \frac{1}{2}\left(g_{11}\right)_{2} & g_{11} & g_{12} \\ \frac{1}{2}\left(g_{22}\right)_{1} & g_{21} & g_{22} \end{array}\right)\right) . \end{array} $$ Evaluating the above expression at $(0,0)$ yields the Gaussian curvature $K(p)$. Theorem 12. If $f: M \rightarrow \bar{M}$ is an isometry, then $\bar{K}(f(p))=K(p)$, where $K$ and $\bar{K}$ denote the Gaussian curvatures of $M$ and $\bar{M}$ respectively. Proof. Let $X: U \rightarrow M$ be a chart centered at $p$, then $\bar{X}:=f \circ X$ is a chart of $\bar{M}$ centered at $f(p)$. Let $g_{i j}$ and $\bar{g}_{i j}$ denote the coefficients of the first fundamental form with respect to the charts $X$ and $\bar{X}$ respectively. Then, using the previous two lemmas, we have $$ \begin{aligned} \bar{K}(f(p)) & =\operatorname{Brioschi}\left[\bar{g}_{11}, \bar{g}_{12}, \bar{g}_{22}\right](0,0) \\ & =\operatorname{Brioschi}\left[g_{11}, g_{12}, g_{22}\right](0,0) \\ & =K(p) . \end{aligned} $$ Exercise 13. Let $M \subset \mathbf{R}^{3}$ be a regular embedded surface and $p \in M$. Suppose that $K(p) \neq 0$. Does there exist a chart $X: U \rightarrow M$ such that $D_{1} X$ and $D_{2} X$ are orthonormal at all points of $U$? 
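The fact that $K$ can be computed from the $g_{i j}$ alone can also be checked by direct computation. The sketch below (an illustration, not part of the notes) evaluates the two-determinant expression of Lemma 11 by finite differences for the round sphere of radius $R$, whose first fundamental form in spherical coordinates is $g_{11}=R^{2} \sin ^{2} u^{2}$, $g_{12}=0$, $g_{22}=R^{2}$; the matrix entries are written out from the inner-product identities in the proof, and the radius, sample points, and step size are assumptions made for this check.

```python
import math

R = 2.0  # sphere radius (an arbitrary choice for this check)

# First fundamental form of X(u1, u2) = R (cos u1 sin u2, sin u1 sin u2, cos u2):
def g11(u1, u2): return R**2 * math.sin(u2)**2
def g12(u1, u2): return 0.0
def g22(u1, u2): return R**2

def d(f, i, h=1e-4):
    """Central-difference partial derivative with respect to u_i (i = 1 or 2)."""
    def df(u1, u2):
        if i == 1:
            return (f(u1 + h, u2) - f(u1 - h, u2)) / (2 * h)
        return (f(u1, u2 + h) - f(u1, u2 - h)) / (2 * h)
    return df

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def gauss_curvature(u1, u2):
    """K from g_ij alone, via the two-determinant (Brioschi) expression."""
    corner = (d(d(g12, 1), 2)(u1, u2)                    # <X11,X22> - <X12,X12>
              - 0.5 * d(d(g22, 1), 1)(u1, u2)
              - 0.5 * d(d(g11, 2), 2)(u1, u2))
    m1 = [[corner,
           0.5 * d(g11, 1)(u1, u2),                      # <X11, X1>
           d(g12, 1)(u1, u2) - 0.5 * d(g11, 2)(u1, u2)], # <X11, X2>
          [d(g12, 2)(u1, u2) - 0.5 * d(g22, 1)(u1, u2),  # <X1, X22>
           g11(u1, u2), g12(u1, u2)],
          [0.5 * d(g22, 2)(u1, u2),                      # <X2, X22>
           g12(u1, u2), g22(u1, u2)]]
    m2 = [[0.0, 0.5 * d(g11, 2)(u1, u2), 0.5 * d(g22, 1)(u1, u2)],
          [0.5 * d(g11, 2)(u1, u2), g11(u1, u2), g12(u1, u2)],
          [0.5 * d(g22, 1)(u1, u2), g12(u1, u2), g22(u1, u2)]]
    det_g = g11(u1, u2) * g22(u1, u2) - g12(u1, u2) ** 2
    return (det3(m1) - det3(m2)) / det_g**2

print(gauss_curvature(0.3, 1.1))  # approximately 1/R^2 = 0.25
print(gauss_curvature(1.0, 0.7))  # same value: K is constant on the sphere
```

Since only the $g_{i j}$ and their derivatives enter, the same routine returns the same value for any surface isometric to the sphere, which is exactly the content of Theorem 12.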
## Lecture Notes 12 ### Gauss's Formulas and Christoffel Symbols Let $X: U \rightarrow \mathbf{R}^{3}$ be a proper regular patch for a surface $M$, and set $X_{i}:= D_{i} X$. Then $$ \left\{X_{1}, X_{2}, N\right\} $$ may be regarded as a moving frame for $\mathbf{R}^{3}$, similar to the Frenet-Serret frames for curves. We should emphasize, however, two important differences: (i) there is no canonical choice of a moving frame for a surface or a piece of surface $\left(\left\{X_{1}, X_{2}, N\right\}\right.$ depends on the choice of the chart $\left.X\right)$; (ii) in general it is not possible to choose a patch $X$ so that $\left\{X_{1}, X_{2}, N\right\}$ is orthonormal (unless the Gaussian curvature of $M$ vanishes everywhere). The following equations, the first of which is known as Gauss's formulas, may be regarded as the analog of the Frenet-Serret formulas for surfaces: $$ X_{i j}=\sum_{k=1}^{2} \Gamma_{i j}^{k} X_{k}+l_{i j} N, \quad \text { and } \quad N_{i}=-\sum_{j=1}^{2} l_{i}^{j} X_{j} . $$ The coefficients $\Gamma_{i j}^{k}$ are known as the Christoffel symbols, and will be determined below. Recall that $l_{i j}$ are just the coefficients of the second fundamental form. To find out what $l_{i}^{j}$ are note that $$ -l_{i k}=-\left\langle N, X_{i k}\right\rangle=\left\langle N_{i}, X_{k}\right\rangle=-\sum_{j=1}^{2} l_{i}^{j}\left\langle X_{j}, X_{k}\right\rangle=-\sum_{j=1}^{2} l_{i}^{j} g_{j k} $$ Thus $\left(l_{i j}\right)=\left(l_{i}^{j}\right)\left(g_{i j}\right)$. So if we let $\left(g^{i j}\right):=\left(g_{i j}\right)^{-1}$, then $\left(l_{i}^{j}\right)=\left(l_{i j}\right)\left(g^{i j}\right)$, which yields $$ l_{i}^{j}=\sum_{k=1}^{2} l_{i k} g^{k j} $$ Exercise 1. What is $\operatorname{det}\left(l_{i}^{j}\right)$ equal to? Exercise 2. Show that $N_{i}=d n\left(X_{i}\right)=-S\left(X_{i}\right)$. Next we compute the Christoffel symbols. 
To this end note that $$ \left\langle X_{i j}, X_{k}\right\rangle=\sum_{l=1}^{2} \Gamma_{i j}^{l}\left\langle X_{l}, X_{k}\right\rangle=\sum_{l=1}^{2} \Gamma_{i j}^{l} g_{l k} $$ which in matrix notation reads $$ \left(\begin{array}{c} \left\langle X_{i j}, X_{1}\right\rangle \\ \left\langle X_{i j}, X_{2}\right\rangle \end{array}\right)=\left(\begin{array}{c} \Gamma_{i j}^{1} g_{11}+\Gamma_{i j}^{2} g_{21} \\ \Gamma_{i j}^{1} g_{12}+\Gamma_{i j}^{2} g_{22} \end{array}\right)=\left(\begin{array}{ll} g_{11} & g_{21} \\ g_{12} & g_{22} \end{array}\right)\left(\begin{array}{c} \Gamma_{i j}^{1} \\ \Gamma_{i j}^{2} \end{array}\right) $$ So $$ \left(\begin{array}{c} \Gamma_{i j}^{1} \\ \Gamma_{i j}^{2} \end{array}\right)=\left(\begin{array}{ll} g_{11} & g_{21} \\ g_{12} & g_{22} \end{array}\right)^{-1}\left(\begin{array}{c} \left\langle X_{i j}, X_{1}\right\rangle \\ \left\langle X_{i j}, X_{2}\right\rangle \end{array}\right)=\left(\begin{array}{ll} g^{11} & g^{21} \\ g^{12} & g^{22} \end{array}\right)\left(\begin{array}{c} \left\langle X_{i j}, X_{1}\right\rangle \\ \left\langle X_{i j}, X_{2}\right\rangle \end{array}\right) $$ which yields $$ \Gamma_{i j}^{k}=\sum_{l=1}^{2}\left\langle X_{i j}, X_{l}\right\rangle g^{l k} $$ In particular, $\Gamma_{i j}^{k}=\Gamma_{j i}^{k}$. Next note that $$ \begin{aligned} & \left(g_{i j}\right)_{k}=\left\langle X_{i k}, X_{j}\right\rangle+\left\langle X_{i}, X_{j k}\right\rangle, \\ & \left(g_{j k}\right)_{i}=\left\langle X_{j i}, X_{k}\right\rangle+\left\langle X_{j}, X_{k i}\right\rangle, \\ & \left(g_{k i}\right)_{j}=\left\langle X_{k j}, X_{i}\right\rangle+\left\langle X_{k}, X_{i j}\right\rangle . \end{aligned} $$ Thus $$ \left\langle X_{i j}, X_{k}\right\rangle=\frac{1}{2}\left(\left(g_{k i}\right)_{j}+\left(g_{j k}\right)_{i}-\left(g_{i j}\right)_{k}\right) . $$ So we conclude that $$ \Gamma_{i j}^{k}=\sum_{l=1}^{2} \frac{1}{2}\left(\left(g_{l i}\right)_{j}+\left(g_{j l}\right)_{i}-\left(g_{i j}\right)_{l}\right) g^{l k} . 
$$ Note that the last equation shows that $\Gamma_{i j}^{k}$ are intrinsic quantities, i.e., they depend only on $g_{i j}$ (and derivatives of $g_{i j}$ ), and so are preserved under isometries. Exercise 3. Compute the Christoffel symbols of a surface of revolution. ### The Gauss and Codazzi-Mainardi Equations, Riemann Curvature Tensor, and a Second Proof of Gauss's Theorema Egregium Here we shall derive some relations between $l_{i j}$ and $g_{i j}$. Our point of departure is the simple observation that if $X: U \rightarrow \mathbf{R}^{3}$ is a $C^{3}$ regular patch, then, since partial derivatives commute, $$ X_{i j k}=X_{i k j} $$ Note that $$ \begin{aligned} X_{i j k} & =\left(\sum_{l=1}^{2} \Gamma_{i j}^{l} X_{l}+l_{i j} N\right)_{k} \\ & =\sum_{l=1}^{2}\left(\Gamma_{i j}^{l}\right)_{k} X_{l}+\sum_{l=1}^{2} \Gamma_{i j}^{l} X_{l k}+\left(l_{i j}\right)_{k} N+l_{i j} N_{k} \\ & =\sum_{l=1}^{2}\left(\Gamma_{i j}^{l}\right)_{k} X_{l}+\sum_{l=1}^{2} \Gamma_{i j}^{l}\left(\sum_{m=1}^{2} \Gamma_{l k}^{m} X_{m}+l_{l k} N\right)+\left(l_{i j}\right)_{k} N-l_{i j} \sum_{l=1}^{2} l_{k}^{l} X_{l} \\ & =\sum_{l=1}^{2}\left(\Gamma_{i j}^{l}\right)_{k} X_{l}+\sum_{l=1}^{2} \sum_{m=1}^{2} \Gamma_{i j}^{l} \Gamma_{l k}^{m} X_{m}+\sum_{l=1}^{2} \Gamma_{i j}^{l} l_{l k} N+\left(l_{i j}\right)_{k} N-\sum_{l=1}^{2} l_{i j} l_{k}^{l} X_{l} \\ & =\sum_{l=1}^{2}\left(\left(\Gamma_{i j}^{l}\right)_{k}+\sum_{p=1}^{2} \Gamma_{i j}^{p} \Gamma_{p k}^{l}-l_{i j} l_{k}^{l}\right) X_{l}+\left(\sum_{l=1}^{2} \Gamma_{i j}^{l} l_{l k}+\left(l_{i j}\right)_{k}\right) N . \end{aligned} $$ Switching $k$ and $j$ yields, $$ X_{i k j}=\sum_{l=1}^{2}\left(\left(\Gamma_{i k}^{l}\right)_{j}+\sum_{p=1}^{2} \Gamma_{i k}^{p} \Gamma_{p j}^{l}-l_{i k} l_{j}^{l}\right) X_{l}+\left(\sum_{l=1}^{2} \Gamma_{i k}^{l} l_{l j}+\left(l_{i k}\right)_{j}\right) N . 
$$ Setting the normal and tangential components of the last two equations equal to each other we obtain $$ \begin{aligned} \left(\Gamma_{i j}^{l}\right)_{k}+ & \sum_{p=1}^{2} \Gamma_{i j}^{p} \Gamma_{p k}^{l}-l_{i j} l_{k}^{l}=\left(\Gamma_{i k}^{l}\right)_{j}+\sum_{p=1}^{2} \Gamma_{i k}^{p} \Gamma_{p j}^{l}-l_{i k} l_{j}^{l}, \\ & \sum_{l=1}^{2} \Gamma_{i j}^{l} l_{l k}+\left(l_{i j}\right)_{k}=\sum_{l=1}^{2} \Gamma_{i k}^{l} l_{l j}+\left(l_{i k}\right)_{j} . \end{aligned} $$ These equations may be rewritten as $$ \begin{gathered} \left(\Gamma_{i k}^{l}\right)_{j}-\left(\Gamma_{i j}^{l}\right)_{k}+\sum_{p=1}^{2}\left(\Gamma_{i k}^{p} \Gamma_{p j}^{l}-\Gamma_{i j}^{p} \Gamma_{p k}^{l}\right)=l_{i k} l_{j}^{l}-l_{i j} l_{k}^{l}, \quad \text {(Gauss)} \\ \sum_{l=1}^{2}\left(\Gamma_{i k}^{l} l_{l j}-\Gamma_{i j}^{l} l_{l k}\right)=\left(l_{i j}\right)_{k}-\left(l_{i k}\right)_{j}, \quad \text {(Codazzi-Mainardi)} \end{gathered} $$ and are known as Gauss's equations and the Codazzi-Mainardi equations respectively. If we define the Riemann curvature tensor as $$ R_{i j k}^{l}:=\left(\Gamma_{i k}^{l}\right)_{j}-\left(\Gamma_{i j}^{l}\right)_{k}+\sum_{p=1}^{2}\left(\Gamma_{i k}^{p} \Gamma_{p j}^{l}-\Gamma_{i j}^{p} \Gamma_{p k}^{l}\right), $$ then Gauss's equation may be rewritten as $$ R_{i j k}^{l}=l_{i k} l_{j}^{l}-l_{i j} l_{k}^{l} $$ Now note that $$ \sum_{l=1}^{2} R_{i j k}^{l} g_{l m}=l_{i k} \sum_{l=1}^{2} l_{j}^{l} g_{l m}-l_{i j} \sum_{l=1}^{2} l_{k}^{l} g_{l m}=l_{i k} l_{j m}-l_{i j} l_{k m} . $$ In particular, if $i=k=1$ and $j=m=2$, then $$ \sum_{l=1}^{2} R_{121}^{l} g_{l 2}=l_{11} l_{22}-l_{12} l_{21}=\operatorname{det}\left(l_{i j}\right)=K \operatorname{det}\left(g_{i j}\right) . $$ So it follows that $$ K=\frac{R_{121}^{1} g_{12}+R_{121}^{2} g_{22}}{\operatorname{det}\left(g_{i j}\right)}, $$ which shows that $K$ is intrinsic and gives another proof of Gauss's Theorema Egregium. Exercise 4. 
Show that if $M=\mathbf{R}^{2}$, then $R_{i j k}^{l}=0$ for all $1 \leq i, l, j, k \leq 2$ both intrinsically and extrinsically. Exercise 5. Show that (i) $R_{i j k}^{l}=-R_{i k j}^{l}$, hence $R_{i j j}^{l}=0$, and (ii) $R_{i j k}^{l}+$ $R_{j k i}^{l}+R_{k i j}^{l} \equiv 0$. Exercise 6. Compute the Riemann curvature tensor for $\mathbf{S}^{2}$ both intrinsically and extrinsically. ### Fundamental Theorem of Surfaces In the previous section we showed that if $g_{i j}$ and $l_{i j}$ are the coefficients of the first and second fundamental form of a patch $X: U \rightarrow M$, then they must satisfy the Gauss and Codazzi-Mainardi equations. These conditions turn out to be not only necessary but also sufficient in the following sense. Theorem 7 (Fundamental Theorem of Surfaces). Let $U \subset \mathbf{R}^{2}$ be an open neighborhood of the origin $(0,0)$, and $g_{i j}: U \rightarrow \mathbf{R}, l_{i j}: U \rightarrow \mathbf{R}$ be differentiable functions for $i, j=1,2$. Suppose that $g_{i j}=g_{j i}, l_{i j}=l_{j i}, g_{11}>0$, $g_{22}>0$ and $\operatorname{det}\left(g_{i j}\right)>0$. Further suppose that $g_{i j}$ and $l_{i j}$ satisfy the Gauss and Codazzi-Mainardi equations. Then there exists an open set $V \subset U$, with $(0,0) \in V$ and a regular patch $X: V \rightarrow \mathbf{R}^{3}$ with $g_{i j}$ and $l_{i j}$ as its first and second fundamental forms respectively. Further, if $Y: V \rightarrow \mathbf{R}^{3}$ is another regular patch with first and second fundamental forms $g_{i j}$ and $l_{i j}$, then $Y$ differs from $X$ by a rigid motion. ## Lecture Notes 13 ### The Covariant Derivative, Lie Bracket, and Riemann Curvature Tensor of $\mathbf{R}^{n}$ Let $A \subset \mathbf{R}^{n}, p \in A$, and $W$ be a tangent vector of $A$ at $p$, i.e., suppose there exists a curve $\gamma:(-\epsilon, \epsilon) \rightarrow A$ with $\gamma(0)=p$ and $\gamma^{\prime}(0)=W$. 
Then if $f: A \rightarrow \mathbf{R}$ is a function we define the (directional) derivative of $f$ with respect to $W$ at $p$ as $$ W_{p} f:=(f \circ \gamma)^{\prime}(0)=d f_{p}(W) . $$ Similarly, if $V$ is a vectorfield along $A$, i.e., a mapping $V: A \rightarrow \mathbf{R}^{n}, p \mapsto V_{p}$, we define the covariant derivative of $V$ with respect to $W$ at $p$ as $$ \bar{\nabla}_{W_{p}} V:=(V \circ \gamma)^{\prime}(0)=d V_{p}(W) . $$ Note that if $f$ and $V$ are $C^{1}$, then by definition they may be extended to an open neighborhood of $A$. So $d f_{p}$ and $d V_{p}$, and consequently $W_{p} f$ and $\bar{\nabla}_{W_{p}} V$ are well defined. In particular, they do not depend on the choice of the curve $\gamma$ or the extensions of $f$ and $V$. Exercise 1. Let $E_{i}$ be the standard basis of $\mathbf{R}^{n}$, i.e., $E_{1}:=(1,0, \ldots, 0)$, $E_{2}:=(0,1,0, \ldots, 0), \ldots, E_{n}:=(0, \ldots, 0,1)$. Show that for any function $f: \mathbf{R}^{n} \rightarrow \mathbf{R}$ and any vectorfield $V: \mathbf{R}^{n} \rightarrow \mathbf{R}^{n}$ $$ \left(E_{i}\right)_{p} f=D_{i} f(p) \quad \text { and } \quad \bar{\nabla}_{\left(E_{i}\right)_{p}} V=D_{i} V(p) $$ (Hint: Let $u_{i}:(-\epsilon, \epsilon) \rightarrow \mathbf{R}^{n}$ be given by $u_{i}(t):=p+t E_{i}$, and observe that $\left.\left(E_{i}\right)_{p} f=\left(f \circ u_{i}\right)^{\prime}(0), \bar{\nabla}_{\left(E_{i}\right)_{p}} V=\left(V \circ u_{i}\right)^{\prime}(0)\right)$. ${ }^{1}$ Last revised: December 11, 2021 The operation $\bar{\nabla}$ is also known as the standard Levi-Civita connection of $\mathbf{R}^{n}$. If $W$ is a tangent vectorfield of $A$, i.e., a mapping $W: A \rightarrow \mathbf{R}^{n}$ such that $W_{p}$ is a tangent vector of $A$ for all $p \in A$, then we set $$ W f(p):=W_{p} f \quad \text { and } \quad\left(\bar{\nabla}_{W} V\right)_{p}:=\bar{\nabla}_{W_{p}} V . $$ Note that $W f: A \rightarrow \mathbf{R}$ is a function and $\bar{\nabla}_{W} V$ is a vectorfield. 
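As a quick computational aside (not part of the original notes), the coordinate description of $\bar{\nabla}$ in Exercise 1 can be checked symbolically, assuming the sympy library is available; the vectorfield $V$ below is an arbitrary choice made purely for illustration:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# A sample vectorfield V on R^2 (arbitrary illustrative choice).
V = sp.Matrix([x**2 * y, sp.sin(x) + y])

# Covariant derivative of V with respect to E1 = (1, 0) at p = (x, y):
# take the curve gamma(t) = p + t*E1 and differentiate V along it at t = 0.
p = sp.Matrix([x, y])
E1 = sp.Matrix([1, 0])
gamma = p + t * E1
V_along = V.subs({x: gamma[0], y: gamma[1]}, simultaneous=True)
nabla_E1_V = sp.diff(V_along, t).subs(t, 0)

# Exercise 1 asserts this equals the componentwise partial derivative D1(V).
D1V = sp.diff(V, x)
assert sp.simplify(nabla_E1_V - D1V) == sp.zeros(2, 1)
```

The same check goes through for $E_2$ by differentiating along $\gamma(t)=p+tE_2$ instead.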
Further, we define $$ (f W)_{p}:=f(p) W_{p} . $$ Thus $f W: A \rightarrow \mathbf{R}^{n}$ is also a vectorfield. Exercise 2. Show that if $V=\left(V^{1}, \ldots, V^{n}\right)$, i.e., $V^{i}$ are the component functions of $V$, then $$ \bar{\nabla}_{W} V=\left(W V^{1}, \ldots, W V^{n}\right) . $$ Exercise 3. Show that if $Z$ is a tangent vectorfield of $A$ and $f: A \rightarrow \mathbf{R}$ is a function, then $$ \bar{\nabla}_{W+Z} V=\bar{\nabla}_{W} V+\bar{\nabla}_{Z} V, \quad \text { and } \quad \bar{\nabla}_{f W} V=f \bar{\nabla}_{W} V . $$ Further if $Z: A \rightarrow \mathbf{R}^{n}$ is any vectorfield, then $$ \bar{\nabla}_{W}(V+Z)=\bar{\nabla}_{W} V+\bar{\nabla}_{W} Z, \quad \text { and } \quad \bar{\nabla}_{W}(f V)=(W f) V+f \bar{\nabla}_{W} V . $$ Exercise 4. Note that if $V$ and $W$ are a pair of vectorfields on $A$ then $\langle V, W\rangle: A \rightarrow \mathbf{R}$ defined by $\langle V, W\rangle_{p}:=\left\langle V_{p}, W_{p}\right\rangle$ is a function on $A$, and show that $$ Z\langle V, W\rangle=\left\langle\bar{\nabla}_{Z} V, W\right\rangle+\left\langle V, \bar{\nabla}_{Z} W\right\rangle $$ If $V, W: A \rightarrow \mathbf{R}^{n}$ are a pair of vectorfields, then their Lie bracket is the vectorfield on $A$ defined by $$ [V, W]_{p}:=\bar{\nabla}_{V_{p}} W-\bar{\nabla}_{W_{p}} V . $$ Exercise 5. Show that if $A \subset \mathbf{R}^{n}$ is open, $V, W: A \rightarrow \mathbf{R}^{n}$ are a pair of vectorfields and $f: A \rightarrow \mathbf{R}$ is a scalar, then $$ [V, W] f=V(W f)-W(V f) . $$ (Hint: First show that $V f=\langle V, \operatorname{grad} f\rangle$ and $W f=\langle W, \operatorname{grad} f\rangle$ where $$ \operatorname{grad} f:=\left(D_{1} f, \ldots, D_{n} f\right) . $$ Next define $$ \operatorname{Hess} f(V, W):=\left\langle V, \nabla_{W} \operatorname{grad} f\right\rangle, $$ and show that Hess $f(V, W)=\operatorname{Hess} f(W, V)$. 
In particular, it is enough to show that Hess $f\left(E_{i}, E_{j}\right)=D_{i j} f$, where $\left\{E_{1}, \ldots, E_{n}\right\}$ is the standard basis for $\mathbf{R}^{n}$. Then the Leibniz rule yields that $$ \begin{gathered} V(W f)-W(V f) \\ =V\langle W, \operatorname{grad} f\rangle-W\langle V, \operatorname{grad} f\rangle \\ =\left\langle\nabla_{V} W, \operatorname{grad} f\right\rangle+\left\langle W, \nabla_{V} \operatorname{grad} f\right\rangle-\left\langle\nabla_{W} V, \operatorname{grad} f\right\rangle-\left\langle V, \nabla_{W} \operatorname{grad} f\right\rangle \\ =\langle[V, W], \operatorname{grad} f\rangle+\operatorname{Hess} f(W, V)-\operatorname{Hess} f(V, W) \\ =[V, W] f, \end{gathered} $$ as desired.) If $V$ and $W$ are tangent vectorfields on an open set $A \subset \mathbf{R}^{n}$, and $Z: A \rightarrow$ $\mathbf{R}^{n}$ is any vectorfield, then $$ \bar{R}(V, W) Z:=\bar{\nabla}_{V} \bar{\nabla}_{W} Z-\bar{\nabla}_{W} \bar{\nabla}_{V} Z-\bar{\nabla}_{[V, W]} Z $$ defines a vectorfield on $A$. If $Y$ is another vectorfield on $A$, then we may also define an associated scalar quantity by $$ \bar{R}(V, W, Z, Y):=\langle\bar{R}(V, W) Z, Y\rangle $$ which is known as the Riemann curvature tensor of $\mathbf{R}^{n}$. Exercise 6. Show that $\bar{R} \equiv 0$. ### The Induced Covariant Derivative on Surfaces; Gauss's Formulas revisited Note that if $M \subset \mathbf{R}^{3}$ is a regular embedded surface and $V, W: M \rightarrow \mathbf{R}^{3}$ are vectorfields on $M$, then $\bar{\nabla}_{W} V$ may no longer be tangent to $M$. Rather, in general we have $$ \bar{\nabla}_{W} V=\left(\bar{\nabla}_{W} V\right)^{\top}+\left(\bar{\nabla}_{W} V\right)^{\perp} $$ where $\left(\bar{\nabla}_{W} V\right)^{\top}$ and $\left(\bar{\nabla}_{W} V\right)^{\perp}$ respectively denote the tangential and normal components of $\bar{\nabla}_{W} V$ with respect to $M$. 
More explicitly, if for each $p \in M$ we let $n(p)$ be a unit normal vector to $T_{p} M$, then $$ \left(\bar{\nabla}_{W} V\right)_{p}^{\perp}:=\left\langle\bar{\nabla}_{W_{p}} V, n(p)\right\rangle n(p) \quad \text { and } \quad\left(\bar{\nabla}_{W} V\right)^{\top}:=\bar{\nabla}_{W} V-\left(\bar{\nabla}_{W} V\right)^{\perp} . $$ Let $\mathcal{X}(M)$ denote the space of tangent vectorfields on $M$. Then we define the (induced) covariant derivative on $M$ as the mapping $\nabla: \mathcal{X}(M) \times \mathcal{X}(M) \rightarrow$ $\mathcal{X}(M)$ given by $$ \nabla_{W} V:=\left(\bar{\nabla}_{W} V\right)^{\top} $$ Exercise 7. Show that, with respect to tangent vectorfields on $M, \nabla$ satisfies all the properties which were listed for $\bar{\nabla}$ in Exercises ?? and ??. Next we derive an explicit expression for $\nabla$ in terms of local coordinates. Let $X: U \rightarrow M$ be a proper regular patch centered at a point $p \in M$, i.e., $X(0,0)=p$, and set $$ \bar{X}_{i}:=X_{i} \circ X^{-1} \text {. } $$ Then $\bar{X}_{i}$ are vectorfields on $X(U)$, and for each $q \in X(U),\left(\bar{X}_{i}\right)_{q}$ forms a basis for $T_{q} M$. Thus on $X(U)$ we have $$ V=\sum_{i} V^{i} \bar{X}_{i}, \quad \text { and } \quad W=\sum_{i} W^{i} \bar{X}_{i} $$ for some functions $V^{i}, W^{i}: X(U) \rightarrow \mathbf{R}$. Consequently, on $X(U)$, $$ \begin{aligned} \nabla_{W} V & =\nabla_{\left(\sum_{j} W^{j} \bar{X}_{j}\right)}\left(\sum_{i} V^{i} \bar{X}_{i}\right) \\ & =\sum_{j}\left(W^{j} \nabla_{\bar{X}_{j}}\left(\sum_{i} V^{i} \bar{X}_{i}\right)\right) \\ & =\sum_{j}\left(W^{j} \sum_{i}\left(\left(\bar{X}_{j} V^{i}\right) \bar{X}_{i}+V^{i} \nabla_{\bar{X}_{j}} \bar{X}_{i}\right)\right) \\ & =\sum_{j} \sum_{i}\left(W^{j}\left(\bar{X}_{j} V^{i}\right) \bar{X}_{i}+W^{j} V^{i} \nabla_{\bar{X}_{j}} \bar{X}_{i}\right) . \end{aligned} $$ Next note that if we define $u_{j}:(-\epsilon, \epsilon) \rightarrow \mathbf{R}^{2}$ by $u_{j}(t):=t E_{j}$, where $E_{1}:=$ $(1,0)$ and $E_{2}:=(0,1)$. 
Then $X \circ u_{i}:(-\epsilon, \epsilon) \rightarrow M$ are curves with $X \circ u_{i}(0)=$ $p$ and $\left(X \circ u_{i}\right)^{\prime}(0)=X_{i}(0,0)=\bar{X}_{i}(p)$. Thus by the definitions of $\nabla$ and $\bar{\nabla}$ we have $$ \begin{aligned} \nabla_{\left(\bar{X}_{j}\right)_{p}} \bar{X}_{i} & =\left(\bar{\nabla}_{\left(\bar{X}_{j}\right)_{p}} \bar{X}_{i}\right)^{\top} \\ & =\left(\left(\bar{X}_{i} \circ\left(X \circ u_{j}\right)\right)^{\prime}(0)\right)^{\top} \\ & =\left(\left(X_{i} \circ u_{j}\right)^{\prime}(0)\right)^{\top} \end{aligned} $$ Now note that, by the chain rule, $$ \left(X_{i} \circ u_{j}\right)^{\prime}(0)=D X_{i}\left(u_{j}(0)\right) D u_{j}(0)=X_{i j}(0,0) . $$ Exercise 8. Verify the last equality above. Thus, by Gauss's formula, $$ \begin{aligned} \nabla_{\left(\bar{X}_{j}\right)_{p}} \bar{X}_{i} & =\left(X_{i j}(0,0)\right)^{\top} \\ & =\left(\sum_{k} \Gamma_{i j}^{k}(0,0) X_{k}(0,0)+l_{i j}(0,0) N(0,0)\right)^{\top} \\ & =\sum_{k} \Gamma_{i j}^{k}\left(X^{-1}(p)\right) X_{k}\left(X^{-1}(p)\right) \\ & =\sum_{k} \Gamma_{i j}^{k}\left(X^{-1}(p)\right)\left(\bar{X}_{k}\right)_{p} . \end{aligned} $$ In particular if we set $\bar{X}_{i j}:=X_{i j} \circ X^{-1}$ and define $\bar{\Gamma}_{i j}^{k}: X(U) \rightarrow \mathbf{R}$ by $\bar{\Gamma}_{i j}^{k}:=\Gamma_{i j}^{k} \circ X^{-1}$, then we have $$ \nabla_{\bar{X}_{j}} \bar{X}_{i}=\left(\bar{X}_{i j}\right)^{\top}=\sum_{k} \bar{\Gamma}_{i j}^{k} \bar{X}_{k} $$ which in turn yields $$ \nabla_{W} V=\sum_{j} \sum_{i}\left(W^{j}\left(\bar{X}_{j} V^{i}\right) \bar{X}_{i}+W^{j} V^{i} \sum_{k} \bar{\Gamma}_{i j}^{k} \bar{X}_{k}\right) . $$ Now recall that $\Gamma_{i j}^{k}$ depends only on the coefficients of the first fundamental form $g_{i j}$. Thus it follows that $\nabla$ is intrinsic: Exercise 9. 
Show that if $f: M \rightarrow \widetilde{M}$ is an isometry, then $$ \widetilde{\nabla}_{d f(W)} d f(V)=d f\left(\nabla_{W} V\right) $$ where $\widetilde{\nabla}$ denotes the covariant derivative on $\widetilde{M}$ (Hint: It is enough to show that $\left.\left\langle\widetilde{\nabla}_{d f\left(\bar{X}_{i}\right)} d f\left(\bar{X}_{j}\right), d f\left(\bar{X}_{l}\right)\right\rangle=\left\langle d f\left(\nabla_{\bar{X}_{i}} \bar{X}_{j}\right), d f\left(\bar{X}_{l}\right)\right\rangle\right)$. Next note that if $n: X(U) \rightarrow \mathbf{S}^{2}$ is a local Gauss map then $$ \left\langle\bar{\nabla}_{W} V, n\right\rangle=-\left\langle V, \bar{\nabla}_{W} n\right\rangle=-\langle V, d n(W)\rangle=\langle V, S(W)\rangle, $$ where, recall that, $S$ is the shape operator of $M$. Thus $$ \left(\bar{\nabla}_{W_{p}} V\right)^{\perp}=\left\langle V, S\left(W_{p}\right)\right\rangle n(p) $$ which in turn yields $$ \bar{\nabla}_{W} V=\nabla_{W} V+\langle V, S(W)\rangle n $$ This is Gauss's formula and implies the expression that we had derived earlier in local coordinates. Exercise 10. Verify the last sentence. ## Lecture Notes 14 ### The Induced Lie Bracket on Surfaces; The Self-Adjointness of the Shape Operator Revisited If $V, W$ are tangent vectorfields on $M$, then we define $$ [V, W]_{M}:=\nabla_{V} W-\nabla_{W} V $$ which is again a tangent vectorfield on $M$. Note that since, as we had verified in an earlier exercise, $S$ is self-adjoint, Gauss's formula yields that $$ \begin{aligned} {[V, W] } & =\bar{\nabla}_{V} W-\bar{\nabla}_{W} V \\ & =\nabla_{V} W-\nabla_{W} V+(\langle W, S(V)\rangle-\langle V, S(W)\rangle) n \\ & =[V, W]_{M} . \end{aligned} $$ In particular if $V$ and $W$ are tangent vectorfields on $M$, then $[V, W]$ is also a tangent vectorfield. Let us also recall here, for the sake of completeness, the proof of the self-adjointness of $S$. 
To this end it suffices to show that if $E_{i}, i=1,2$, is a basis for $T_{p} M$, then $\left\langle E_{i}, S_{p}\left(E_{j}\right)\right\rangle=\left\langle S_{p}\left(E_{i}\right), E_{j}\right\rangle$. In particular we may let $E_{i}=X_{i}(0,0)$, where $X: U \rightarrow M$ is a regular patch of $M$ centered at $p$. Now note that $$ \left\langle X_{i}, S_{p}\left(X_{j}\right)\right\rangle=-\left\langle X_{i}, d n_{p}\left(X_{j}\right)\right\rangle=-\left\langle X_{i},(n \circ X)_{j}\right\rangle=\left\langle X_{i j},(n \circ X)\right\rangle . $$ Since the right hand side of the above expression is symmetric with respect to $i$ and $j$, the left hand side must be symmetric as well, which completes the proof that $S$ is self-adjoint. Note that while the above proof is short and elegant one might object to it on the ground that it uses local coordinates. On the other hand, if we can give an independent proof that $[V, W]_{M}=[V, W]$, then we would have an alternative proof that $S$ is self-adjoint. To this end note that $$ [V, W]^{\top}=\left(\bar{\nabla}_{V} W\right)^{\top}-\left(\bar{\nabla}_{W} V\right)^{\top}=\nabla_{V} W-\nabla_{W} V=[V, W]_{M} $$ Thus to prove that $[V, W]_{M}=[V, W]$ it is enough to show that $[V, W]^{\top}=$ $[V, W]$, i.e., $[V, W]$ is tangent to $M$. To see this note that if $f: M \rightarrow \mathbf{R}$ is any function, and $\bar{f}: U \rightarrow \mathbf{R}$ denotes an extension of $f$ to an open neighborhood of $M$, then $$ [V, W] \bar{f}=[V, W]^{\top} \bar{f}+[V, W]^{\perp} \bar{f}=[V, W]^{\top} f+[V, W]^{\perp} \bar{f} . $$ So if we can show that the left hand side of the above expression depends only on $f$ (not $\bar{f}$ ), then it would follow that the right hand side must also be independent of $\bar{f}$, which can happen only if $[V, W]^{\perp}$ vanishes. Hence it remains to show that $[V, W] \bar{f}=[V, W] f$. 
To see this recall that by a previous exercise $$ [V, W] \bar{f}=V(W \bar{f})-W(V \bar{f}) . $$ But since $V$ and $W$ are tangent to $M, V \bar{f}=V f$ and $W \bar{f}=W f$. Thus the right hand side of the above equality depends only on $f$, which completes the proof. Exercise 1. Verify the next to last statement. ### The Riemann Curvature Tensor of Surfaces; The Gauss and Codazzi-Mainardi Equations, and Theorema Egregium Revisited If $V, W, Z$ are tangent vectorfields on $M$, then $$ R(V, W) Z:=\nabla_{V} \nabla_{W} Z-\nabla_{W} \nabla_{V} Z-\nabla_{[V, W]} Z $$ gives a tangent vectorfield on $M$. Note that this operation is well defined, because, as we verified in the previous section, $[V, W]$ is tangent to $M$. If $Y$ is another tangent vectorfield on $M$, then we may also define an associated scalar quantity by $$ R(V, W, Z, Y):=\langle R(V, W) Z, Y\rangle $$ which is the Riemann curvature tensor of $M$, and, as we show below, coincides with the quantity of the same name which we had defined earlier in terms of local coordinates. To this end first recall that $$ \bar{R}(V, W) Z:=\bar{\nabla}_{V} \bar{\nabla}_{W} Z-\bar{\nabla}_{W} \bar{\nabla}_{V} Z-\bar{\nabla}_{[V, W]} Z=0 $$ as we had shown in an earlier exercise. Next note that, by Gauss's formula, $$ \begin{aligned} \bar{\nabla}_{V} \bar{\nabla}_{W} Z & =\bar{\nabla}_{V}\left(\nabla_{W} Z+\langle S(W), Z\rangle n\right) \\ & =\bar{\nabla}_{V}\left(\nabla_{W} Z\right)+\bar{\nabla}_{V}(\langle S(W), Z\rangle n) \\ & =\nabla_{V} \nabla_{W} Z+\left\langle S(V), \nabla_{W} Z\right\rangle n+V\langle S(W), Z\rangle n+\langle S(W), Z\rangle \nabla_{V} n . \end{aligned} $$ Also recall that, since $\langle n, n\rangle=1$, $$ \nabla_{V} n:=\left(\bar{\nabla}_{V} n\right)^{\top}=\bar{\nabla}_{V} n=d n(V)=-S(V) . 
$$ Thus $$ \begin{aligned} & \bar{\nabla}_{V} \bar{\nabla}_{W} Z=\nabla_{V} \nabla_{W} Z-\langle S(W), Z\rangle S(V) \\ &+\left(\left\langle S(V), \nabla_{W} Z\right\rangle+\left\langle\nabla_{V} S(W), Z\right\rangle+\left\langle S(W), \nabla_{V} Z\right\rangle\right) n . \end{aligned} $$ Similarly, $$ \begin{aligned} &-\bar{\nabla}_{W} \bar{\nabla}_{V} Z=-\nabla_{W} \nabla_{V} Z+\langle S(V), Z\rangle S(W) \\ &-\left(\left\langle S(W), \nabla_{V} Z\right\rangle+\left\langle\nabla_{W} S(V), Z\right\rangle+\left\langle S(V), \nabla_{W} Z\right\rangle\right) n . \end{aligned} $$ Also note that $$ -\bar{\nabla}_{[V, W]} Z=-\nabla_{[V, W]} Z-\langle S([V, W]), Z\rangle n . $$ Adding the last three equations yields $$ \begin{aligned} & \bar{R}(V, W) Z=R(V, W) Z-\langle S(W), Z\rangle S(V)+\langle S(V), Z\rangle S(W) \\ &+\left(\left\langle\nabla_{V} S(W), Z\right\rangle-\left\langle\nabla_{W} S(V), Z\right\rangle-\langle S([V, W]), Z\rangle\right) n . \end{aligned} $$ Since the left hand side of the above equation is zero, each of the tangential and normal components of the right hand side must vanish as well. These respectively yield: $$ R(V, W) Z=\langle S(W), Z\rangle S(V)-\langle S(V), Z\rangle S(W) $$ and $$ \nabla_{V} S(W)-\nabla_{W} S(V)=S([V, W]) $$ which are the Gauss and Codazzi-Mainardi equations respectively. In particular, in local coordinates they take on the forms which we had derived earlier. Exercise 2. Verify the last sentence above. Finally note that by Gauss's equation $$ \langle R(V, W) W, V\rangle=\langle S(V), V\rangle\langle S(W), W\rangle-\langle S(W), V\rangle\langle S(V), W\rangle $$ In particular, if $V$ and $W$ are orthonormal, then $$ \langle R(V, W) W, V\rangle=\operatorname{det}(S)=K $$ Thus we obtain yet another proof of the Theorema Egregium, which, in this latest reincarnation, does not use local coordinates. Exercise 3. 
Show that if $V$ and $W$ are general vectorfields (not necessarily orthonormal), then $$ K=\frac{R(V, W, W, V)}{\|V \times W\|^{2}} $$ ## Lecture Notes 15 ### The Geodesic Curvature Let $\alpha: I \rightarrow M$ be a unit speed curve lying on a surface $M \subset \mathbf{R}^{3}$. Then the absolute geodesic curvature of $\alpha$ is defined as $$ \left|\kappa_{g}\right|:=\left\|\left(\alpha^{\prime \prime}\right)^{\top}\right\|=\left\|\alpha^{\prime \prime}-\left\langle\alpha^{\prime \prime}, n(\alpha)\right\rangle n(\alpha)\right\|, $$ where $n$ is a local Gauss map of $M$ in a neighborhood of $\alpha(t)$. In particular note that if $M=\mathbf{R}^{2}$, then $\left|\kappa_{g}\right|=\kappa$, i.e., the absolute geodesic curvature of a curve on a surface is a generalization of the curvature of curves in the plane. Exercise 1. Show that the absolute geodesic curvature of great circles in a sphere and helices on a cylinder are everywhere zero. Similarly, the (signed) geodesic curvature generalizes the notion of the signed curvature of planar curves and may be defined as follows. We say that a surface $M \subset \mathbf{R}^{3}$ is orientable provided that there exists a (global) Gauss map $n: M \rightarrow \mathbf{S}^{2}$, i.e., a continuous mapping which satisfies $n(p) \perp T_{p} M$, for all $p \in M$. Note that if $n$ is a global Gauss map, then so is $-n$. In particular, any orientable surface admits precisely two choices for its global Gauss map. Once we choose a Gauss map $n$ for an orientable surface, then $M$ is said to be oriented. If $M$ is an oriented surface (with global Gauss map $n$ ), then, for every $p \in M$, we define a mapping $J: T_{p} M \rightarrow T_{p} M$ by $$ J V:=n \times V . $$ Exercise 2. Show that if $M=\mathbf{R}^{2}$, and $n=(0,0,1)$, then $J$ is counter clockwise rotation about the origin by $\pi / 2$. 
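Exercise 1 above can be verified symbolically by computing the tangential part of $\alpha^{\prime \prime}$ directly from the definition of $\left|\kappa_{g}\right|$; the following sketch (not part of the original notes, and assuming sympy is available) checks a great circle on the unit sphere and a unit speed helix on the unit cylinder:

```python
import sympy as sp

t = sp.symbols('t', real=True)
a, b = sp.Rational(3, 5), sp.Rational(4, 5)  # a^2 + b^2 = 1, so the helix below has unit speed

# Great circle on the unit sphere: the outward normal at a point p is n = p.
alpha = sp.Matrix([sp.cos(t), sp.sin(t), 0])
n = alpha
app = sp.diff(alpha, t, 2)
tang = app - app.dot(n) * n          # (alpha'')^T, the tangential component
assert sp.simplify(tang) == sp.zeros(3, 1)

# Helix on the cylinder x^2 + y^2 = 1: the outward normal is n = (cos, sin, 0).
beta = sp.Matrix([sp.cos(a * t), sp.sin(a * t), b * t])
n = sp.Matrix([sp.cos(a * t), sp.sin(a * t), 0])
bpp = sp.diff(beta, t, 2)
tang = bpp - bpp.dot(n) * n
assert sp.simplify(tang) == sp.zeros(3, 1)
```

In both cases $\alpha^{\prime \prime}$ turns out to be purely normal to the surface, so $\left|\kappa_{g}\right|=\left\|\left(\alpha^{\prime \prime}\right)^{\top}\right\|$ vanishes identically.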
Then the geodesic curvature of a unit speed curve $\alpha: I \rightarrow M$ is given by $$ \kappa_{g}:=\left\langle\alpha^{\prime \prime}, J \alpha^{\prime}\right\rangle $$ Note that, since $J \alpha^{\prime}$ is tangent to $M$, $$ \left\langle\alpha^{\prime \prime}, J \alpha^{\prime}\right\rangle=\left\langle\left(\alpha^{\prime \prime}\right)^{\top}, J \alpha^{\prime}\right\rangle $$ Further, since $\left\|\alpha^{\prime}\right\|=1, \alpha^{\prime \prime}$ is orthogonal to $\alpha^{\prime}$, which in turn yields that the projection of $\alpha^{\prime \prime}$ into the tangent plane is either parallel or antiparallel to $J \alpha^{\prime}$. Thus $\kappa_{g}>0$ when the projection of $\alpha^{\prime \prime}$ is parallel to $J \alpha^{\prime}$ and is negative otherwise. Note that if the curvature of $\alpha$ does not vanish (so that the principal normal $N$ is well defined), then $$ \kappa_{g}=\kappa\langle N, J T\rangle . $$ In particular geodesic curvature is invariant under reparametrizations of $\alpha$. Exercise 3. Let $\mathbf{S}^{2}$ be oriented by its outward unit normal, i.e., $n(p)=p$, and compute the geodesic curvature of the circles in $\mathbf{S}^{2}$ which lie in planes $z=h,-1<h<1$. Assume that all these circles are oriented consistently with respect to the rotation about the $z$-axis. Next we derive an expression for $\kappa_{g}$ which does not require that $\alpha$ have unit speed. To this end, let $s: I \rightarrow[0, L]$ be the arclength function of $\alpha$, and recall that $\bar{\alpha}:=\alpha \circ s^{-1}:[0, L] \rightarrow M$ has unit speed. Thus $$ \kappa_{g}=\bar{\kappa}_{g}(s)=\left\langle\bar{\alpha}^{\prime \prime}(s), J \bar{\alpha}^{\prime}(s)\right\rangle . $$ Now recall that $\left(s^{-1}\right)^{\prime}=1 /\left\|\alpha^{\prime}\right\|$. Thus, by the chain rule, 
$$ \bar{\alpha}^{\prime}(t)=\alpha^{\prime}\left(s^{-1}(t)\right) \cdot \frac{1}{\left\|\alpha^{\prime}\left(s^{-1}(t)\right)\right\|} . $$ Further, differentiating both sides of the above equation yields $$ \bar{\alpha}^{\prime \prime}=\alpha^{\prime \prime}\left(s^{-1}\right) \cdot \frac{1}{\left\|\alpha^{\prime}\left(s^{-1}\right)\right\|^{2}}+\alpha^{\prime}\left(s^{-1}\right) \cdot \frac{-\left\langle\alpha^{\prime \prime}\left(s^{-1}\right), \alpha^{\prime}\left(s^{-1}\right)\right\rangle}{\left\|\alpha^{\prime}\left(s^{-1}\right)\right\|^{4}} . $$ Substituting these values into the last expression for $\bar{\kappa}_{g}$ above yields $$ \kappa_{g}=\frac{\left\langle\alpha^{\prime \prime}, J \alpha^{\prime}\right\rangle}{\left\|\alpha^{\prime}\right\|^{3}} . $$ Exercise 4. Verify the last two equations. Next we show that the geodesic curvature is intrinsic, i.e., it is invariant under isometries of the surface. To this end let $\tilde{\alpha}^{\prime}: \alpha(I) \rightarrow \mathbf{R}^{3}$ be the vectorfield along $\alpha(I)$ given by $$ \tilde{\alpha}^{\prime}(\alpha(t))=\alpha^{\prime}(t) $$ Then one may immediately check that $$ \alpha^{\prime \prime}(t)=\bar{\nabla}_{\alpha^{\prime}(t)} \tilde{\alpha}^{\prime} $$ Thus $$ \left\langle\alpha^{\prime \prime}, J \alpha^{\prime}\right\rangle=\left\langle\left(\alpha^{\prime \prime}\right)^{\top}, J \alpha^{\prime}\right\rangle=\left\langle\nabla_{\alpha^{\prime}} \tilde{\alpha}^{\prime}, J \alpha^{\prime}\right\rangle $$ and it follows that $$ \kappa_{g}=\frac{\left\langle\nabla_{\alpha^{\prime}} \tilde{\alpha}^{\prime}, J \alpha^{\prime}\right\rangle}{\left\|\alpha^{\prime}\right\|^{3}} . $$ We say that a curve $\alpha: I \rightarrow M$ is a geodesic provided that its geodesic curvature $\kappa_{g} \equiv 0$. Exercise 5. Show that if $\alpha$ is a geodesic, then it must have constant speed. Exercise 6. 
Show that if $\alpha$ is parametrized by arclength, then $$ \left|\kappa_{g}\right|=\left\|\nabla_{\alpha^{\prime}} \tilde{\alpha}^{\prime}\right\| $$ Exercise 7. Show that $\alpha$ is a geodesic if and only if $\nabla_{\alpha^{\prime}} \tilde{\alpha}^{\prime} \equiv 0$. Now recall that $\nabla$ is intrinsic, which immediately implies that $\left|\kappa_{g}\right|$ is intrinsic by the last exercise. Thus to complete the proof that $\kappa_{g}$ is intrinsic it remains to show that $J$ is intrinsic. To see this let $X: U \rightarrow M$ be a local patch, then $$ J X_{i}=\sum_{j=1}^{2} b_{i j} X_{j} $$ In particular, $$ J X_{1}=b_{11} X_{1}+b_{12} X_{2} $$ Now note that $$ 0=\left\langle J X_{1}, X_{1}\right\rangle=b_{11} g_{11}+b_{12} g_{21} $$ Further, $$ g_{11}=\left\langle X_{1}, X_{1}\right\rangle=\left\langle J X_{1}, J X_{1}\right\rangle=b_{11}^{2} g_{11}+2 b_{11} b_{12} g_{12}+b_{12}^{2} g_{22} . $$ Solving for $b_{12}$ in the next to last equation, and substituting in the last equation yields $$ g_{11}=b_{11}^{2} g_{11}-2 b_{11}^{2} g_{11}+b_{11}^{2} \frac{g_{11}^{2}}{g_{21}^{2}} g_{22}=b_{11}^{2}\left(-g_{11}+\frac{g_{11}^{2}}{g_{21}^{2}} g_{22}\right) . $$ Thus $b_{11}$ may be computed in terms of $g_{i j}$ which in turn yields that $b_{12}$ may be computed in terms of $g_{i j}$ as well. So $J X_{1}$ may be expressed intrinsically. Similarly, $J X_{2}$ may be expressed intrinsically as well. So we conclude that $J$ is intrinsic. ### Geodesics in Local Coordinates Here we will derive a system of ordinary differential equations, in terms of any local coordinates, whose solutions yield geodesics. To this end let $X: U \rightarrow M$ be a patch, and $\alpha: I \rightarrow X(U)$ be a unit speed one-to-one curve. Then we may write $$ X(u(t))=\alpha(t) $$ by letting $u(t):=X^{-1}(\alpha(t))$. 
Next note that, if $u_{i}$ denotes the coordinates of $u$, i.e., $u(t)=\left(u_{1}(t), u_{2}(t)\right)$, then by the chain rule, $$ \alpha^{\prime}=\sum_{i=1}^{2} X_{i} u_{i}^{\prime} $$ which in turn yields $$ \alpha^{\prime \prime}=\sum_{i, j=1}^{2} X_{i j} u_{i}^{\prime} u_{j}^{\prime}+\sum_{i=1}^{2} X_{i} u_{i}^{\prime \prime}=\sum_{i, j=1}^{2}\left(\sum_{k=1}^{2} \Gamma_{i j}^{k} X_{k}+l_{i j} N\right) u_{i}^{\prime} u_{j}^{\prime}+\sum_{i=1}^{2} X_{i} u_{i}^{\prime \prime}, $$ by Gauss's formula. Consequently, $$ \left(\alpha^{\prime \prime}\right)^{\top}=\sum_{i, j, k=1}^{2} \Gamma_{i j}^{k} u_{i}^{\prime} u_{j}^{\prime} X_{k}+\sum_{k=1}^{2} X_{k} u_{k}^{\prime \prime}=\sum_{k=1}^{2}\left(u_{k}^{\prime \prime}+\sum_{i, j=1}^{2} \Gamma_{i j}^{k} u_{i}^{\prime} u_{j}^{\prime}\right) X_{k} . $$ So, since $\left|\kappa_{g}\right|=\left\|\left(\alpha^{\prime \prime}\right)^{\top}\right\|$, we conclude that $\alpha$ is a geodesic if and only if $$ u_{k}^{\prime \prime}+\sum_{i, j=1}^{2} \Gamma_{i j}^{k} u_{i}^{\prime} u_{j}^{\prime}=0 $$ for $k=1,2$. In other words, for $\alpha$ to be a geodesic the following two equations must be satisfied: $$ \begin{aligned} & u_{1}^{\prime \prime}+\Gamma_{11}^{1}\left(u_{1}^{\prime}\right)^{2}+2 \Gamma_{12}^{1} u_{1}^{\prime} u_{2}^{\prime}+\Gamma_{22}^{1}\left(u_{2}^{\prime}\right)^{2}=0 \\ & u_{2}^{\prime \prime}+\Gamma_{11}^{2}\left(u_{1}^{\prime}\right)^{2}+2 \Gamma_{12}^{2} u_{1}^{\prime} u_{2}^{\prime}+\Gamma_{22}^{2}\left(u_{2}^{\prime}\right)^{2}=0 \end{aligned} $$ Exercise 8. Write down the equations of the geodesics in a surface of revolution. In particular, verify that the great circles in a sphere are geodesics. ### Parallel Translation Here we will give another interpretation for the concept of geodesic curvature. Let $\alpha: I \rightarrow M$ be a simple curve and $V$ be a vector field on $M$. We say that $V$ is parallel along $\alpha$ provided that $$ \nabla_{\alpha^{\prime}(t)} V=\left((V \circ \alpha)^{\prime}\right)^{\top}=0 $$ for all $t \in I$. 
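The geodesic equations above can be checked mechanically for a surface of revolution, in the spirit of Exercise 8. The sketch below (not part of the original notes, and assuming sympy is available) specializes the profile to the unit sphere, computes the Christoffel symbols from the intrinsic formula, and verifies that the equator satisfies both geodesic equations:

```python
import sympy as sp

u, v, t = sp.symbols('u v t', real=True)

# Surface of revolution X(u, v) = (f(u) cos v, f(u) sin v, g(u)),
# specialized here to the unit sphere: f = sin u, g = cos u.
f, g = sp.sin(u), sp.cos(u)
X = sp.Matrix([f * sp.cos(v), f * sp.sin(v), g])

# First fundamental form g_ij = <X_i, X_j>.
Xu, Xv = sp.diff(X, u), sp.diff(X, v)
G = sp.simplify(sp.Matrix([[Xu.dot(Xu), Xu.dot(Xv)], [Xv.dot(Xu), Xv.dot(Xv)]]))
Ginv = G.inv()
coords = [u, v]

# Intrinsic formula: Gamma^k_ij = 1/2 sum_l g^{kl} (d_i g_jl + d_j g_il - d_l g_ij).
def Gamma(k, i, j):
    return sp.simplify(sum(
        sp.Rational(1, 2) * Ginv[k, l] *
        (sp.diff(G[j, l], coords[i]) + sp.diff(G[i, l], coords[j]) - sp.diff(G[i, j], coords[l]))
        for l in range(2)))

# The equator (u, v) = (pi/2, t) should satisfy u_k'' + sum_ij Gamma^k_ij u_i' u_j' = 0.
curve = [sp.pi / 2, t]
du = [sp.diff(c, t) for c in curve]
for k in range(2):
    lhs = sp.diff(curve[k], t, 2) + sum(
        Gamma(k, i, j).subs({u: curve[0], v: curve[1]}) * du[i] * du[j]
        for i in range(2) for j in range(2))
    assert sp.simplify(lhs) == 0
```

Replacing `f` and `g` by symbolic functions of `u` reproduces the general geodesic equations of a surface of revolution.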
Recall that $\alpha$ is a geodesic if and only if its velocity is parallel (i.e., $\nabla_{\alpha^{\prime}(t)} \tilde{\alpha}^{\prime} \equiv 0$ ). Exercise 9. Show that if $V$ is parallel along $\alpha$, then its length is constant. Exercise 10. Show that if $V$ and $W$ are a pair of parallel vector fields along $\alpha$, then the angle between them is constant. Proposition 11. If $\alpha$ is a unit speed curve on a surface, and $V$ is a parallel vector field along $\alpha$, which makes an angle $\phi$ with the tangent vector of $\alpha$, then $\kappa_{g}=\phi^{\prime}$. Proof. We may assume that $V$ has unit length. Then we may write: $$ V=\cos (\phi) T+\sin (\phi) J T . $$ Computing $\nabla_{T} V$, and setting it equal to zero yields the desired result. Corollary 12. The total geodesic curvature of a curve on a surface is equal to the total rotation of a parallel vector field along the curve, i.e. $$ \int_{a}^{b} \kappa_{g}=\phi(b)-\phi(a) $$ where $\phi$ is an angle function between $T$ and $V$. ## Lecture Notes 16 ## The Gauss-Bonnet theorem ### The Local Gauss-Bonnet theorem Let $M \subset \mathbf{R}^{3}$ be a surface which is oriented by a choice of Gauss map $n: M \rightarrow$ $\mathbf{S}^{2}$. Recall that $n$ then induces a rotation on each tangent plane $T_{p} M$ by $J V:=n \times V$ for all $V \in T_{p} M$, which we call counter clockwise rotation (in analogy with the operation $J V:=(0,0,1) \times V$ for vectors in $\mathbf{R}^{2}=\mathbf{R}^{2} \times\{0\} \subset$ $\mathbf{R}^{3}$ ). A regular region $\Omega \subset M$ is a compact set whose boundary $\partial \Omega$ consists of a finite number of piecewise $C^{2}$ simple closed curves. We orient $\partial \Omega$ by choosing continuous unit tangent vectors $T$ along each of its $C^{2}$ segments so that $J T$ points into $\Omega$. Then we say that $\Omega$ is positively oriented, and it is in this sense that the geodesic curvature of $\partial \Omega$ is to be measured. 
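Parallel translation can also be explored numerically. The sketch below (not part of the original notes, assuming numpy and scipy are available) transports a tangent vector around a latitude circle of the unit sphere: since $n(p)=p$ there, the parallelism condition $\nabla_{\alpha^{\prime}} V=0$ says $V^{\prime}$ is normal, which gives the ODE $V^{\prime}(t)=-\left\langle V(t), \alpha^{\prime}(t)\right\rangle \alpha(t)$. For the latitude at polar angle $\pi / 3$ the transported vector should come back rotated by the enclosed area $2 \pi(1-\cos (\pi / 3))=\pi$, i.e., reversed:

```python
import numpy as np
from scipy.integrate import solve_ivp

theta0 = np.pi / 3  # polar angle of the latitude circle (illustrative choice)

def alpha(t):   # the latitude circle on the unit sphere
    return np.array([np.sin(theta0) * np.cos(t), np.sin(theta0) * np.sin(t), np.cos(theta0)])

def dalpha(t):  # its velocity
    return np.array([-np.sin(theta0) * np.sin(t), np.sin(theta0) * np.cos(t), 0.0])

def rhs(t, V):  # parallel transport ODE: V' = -<V, alpha'> alpha
    return -np.dot(V, dalpha(t)) * alpha(t)

V0 = np.array([np.cos(theta0), 0.0, -np.sin(theta0)])  # unit tangent vector at alpha(0)
sol = solve_ivp(rhs, (0.0, 2 * np.pi), V0, rtol=1e-10, atol=1e-12)
VT = sol.y[:, -1]

# Length is preserved (Exercise 9), and the holonomy angle is pi, so VT ~ -V0.
assert abs(np.linalg.norm(VT) - 1.0) < 1e-6
assert abs(np.dot(V0, VT) + 1.0) < 1e-6
```

This matches Corollary 12: the total rotation of the parallel field equals the total geodesic curvature of the circle, up to the multiple of $2 \pi$ coming from going once around.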
The points where the $C^{2}$ segments of $\partial \Omega$ meet are called the corners of $\Omega$. The turning angle $\alpha \in[-\pi, \pi]$ of $\Omega$ at a corner is defined as $\pi-\alpha^{\prime}$ where $\alpha^{\prime} \in[0,2 \pi]$ is the interior angle of $\Omega$ at that corner. We say $\Omega$ is simply connected if it is homeomorphic to a disk, or equivalently, $\Omega$ can be covered by a coordinate chart $X: U \rightarrow M$ where $U$ is a disk, and $\partial \Omega$ has only one component. Theorem 1 (Local Gauss-Bonnet theorem). Let $\Omega \subset M$ be a positively oriented simply connected regular region with exterior angles $\alpha_{i}$. Then $$ \int_{\Omega} K+\int_{\partial \Omega} \kappa_{g}+\sum_{i=1}^{k} \alpha_{i}=2 \pi . $$ The following exercise shows that to prove the above theorem, we may assume that $\Omega$ is arbitrarily small. Exercise 2. Let $\Omega$ be a simply connected regular region in $M$ and $\Omega_{1}, \Omega_{2}$ be a pair of subregions obtained by drawing a curve across $\Omega$ whose end points lie on $\partial \Omega$. Show that if the local Gauss-Bonnet theorem holds on each of the regions $\Omega_{1}$ and $\Omega_{2}$, then it holds on $\Omega$. ### Proof of a special case of Gauss-Bonnet Theorem Here we give a quick proof of the local Gauss-Bonnet Theorem in the special case where $\Omega$ has no corners, and $K>0$ on $\Omega$. Since $\operatorname{det}(d n)=K \neq 0$ on $\Omega$, by the inverse function theorem, $n$ is locally one-to-one on $\Omega$. So we may subdivide $\Omega$ into a finite number of subregions so that $n$ is one-to-one on each subregion. By the last exercise, it suffices to show that the Gauss-Bonnet theorem holds in each subregion. So we replace $\Omega$ with one of these subregions. 
Then, by the area formula, $$ \int_{\Omega} K=\int_{\Omega} \operatorname{det}(d n)=\operatorname{area}(n(\Omega)) . $$

Let $V$ be a vector field along $\partial \Omega$ obtained by taking a tangent vector of $M$ at a point $p$ of $\partial \Omega$ and parallel translating it all around $\partial \Omega$ until it comes back to $p$. Furthermore, let $\gamma:[a, b] \rightarrow M$ be a unit speed parametrization for $\partial \Omega$ which is consistent with the orientation of $\Omega$, i.e., $J \gamma^{\prime}$ points into $\Omega$. There exists a continuous function $\theta:[a, b] \rightarrow \mathbf{R}$ such that $\theta(t)$ is an angle from $V(t)$ to $\gamma^{\prime}(t)$, i.e., $$ \gamma^{\prime}(t)=\cos (\theta(t)) V(t)+\sin (\theta(t)) J V(t), $$ where $V(t):=V(\gamma(t))$. Then $$ \int_{\partial \Omega} \kappa_{g}=\theta(b)-\theta(a) . $$

Next note that $V$ may also be regarded as a tangent vector field along the curve $n(\partial \Omega)$ in $\mathbf{S}^{2}$, because $V(t) \perp n(\gamma(t))$, and thus $V(t) \in T_{n(\gamma(t))} \mathbf{S}^{2}$. Furthermore, since $V$ is parallel along $\partial \Omega$, $V^{\prime} \| n$. So $V$ is also a parallel vector field along $n(\partial \Omega)$ in $\mathbf{S}^{2}$. Consequently, $$ \int_{n(\partial \Omega)} \kappa_{g}=\theta(b)-\theta(a)=\int_{\partial \Omega} \kappa_{g} , $$ since the angle between the tangent vectors of $\partial \Omega$ and $n(\partial \Omega)$ is a continuous function which returns to its initial value, so the angle functions from $V$ to the two tangent vectors have the same total change.

So the Gauss-Bonnet theorem holds for $\Omega$ provided that $$ \operatorname{area}(n(\Omega))+\int_{n(\partial \Omega)} \kappa_{g}=2 \pi . $$

It can be shown that, if we approximate $n(\partial \Omega)$ with geodesic polygonal curves, then $\int_{n(\partial \Omega)} \kappa_{g}$ is the limit of the sum of turning angles of the polygons. So it suffices to show that the area of a geodesic polygon in $\mathbf{S}^{2}$ is equal to $2 \pi$ minus the sum of its exterior angles. Earlier we established this result for geodesic triangles in $\mathbf{S}^{2}$, which yields the general result for polygons via a subdivision.
This completes the proof of the special case of the local Gauss-Bonnet theorem.

Exercise 3. Extend the above proof to the case where $K<0$.

### Proof of the Local Gauss-Bonnet Theorem

Let $\gamma:[a, b] \rightarrow \partial \Omega$ be a unit speed piecewise $C^{2}$ parametrization for $\partial \Omega$. We may extend $\gamma$ to a periodic function on $\mathbf{R}$ by setting $\gamma(t+b-a):=\gamma(t)$. Then there are a finite number of points $t_{1}, \ldots, t_{k} \in[a, b]$ such that $\gamma\left(t_{i}\right)$ are the corners of $\Omega$ and $\gamma$ is $C^{2}$ on each of the intervals $\left[t_{i}, t_{i+1}\right]$, where we set $t_{k+1}:=t_{1}+b-a$. In particular $\gamma^{\prime}$ exists on the interior of each interval, and the one-sided derivatives $\gamma_{ \pm}^{\prime}$ are well-defined at each $t_{i}$. We assume that $\gamma$ is consistent with the orientation of $\Omega$, i.e., $J \gamma^{\prime}$ points into $\Omega$.

Let $X: U \rightarrow M$ be a local coordinate system with $\Omega \subset X(U)$. We set $\tilde{\Omega}:=X^{-1}(\Omega)$ and $\tilde{\gamma}:=X^{-1}(\gamma)$. Let $V: U \rightarrow \mathbf{R}^{3}$ be a smooth unit tangent vector field, i.e., $V(p)$ is a unit vector in $T_{X(p)} M$ for each $p \in U$, and set $W:=J V$. Then $(V, W)$ is an orthonormal frame on $\Omega$. For $i=1, \ldots, k$, there are continuous functions $\theta_{i}:\left[t_{i}, t_{i+1}\right] \rightarrow \mathbf{R}$ such that $$ \gamma^{\prime}(t)=\cos \left(\theta_{i}(t)\right) V(t)+\sin \left(\theta_{i}(t)\right) W(t), $$ where $V(t):=V(\tilde{\gamma}(t))$, and $W(t):=W(\tilde{\gamma}(t))$. We are going to compute $\kappa_{g}$ in terms of $V$, $W$ as follows.
First note that $$ J \gamma^{\prime}=-\sin \left(\theta_{i}\right) V+\cos \left(\theta_{i}\right) W . $$ Then a straightforward computation shows that $$ \gamma^{\prime \prime}=\theta_{i}^{\prime} J \gamma^{\prime}+\cos \left(\theta_{i}\right) V^{\prime}+\sin \left(\theta_{i}\right) W^{\prime} . $$ Another simple computation yields $$ \kappa_{g}=\left\langle\gamma^{\prime \prime}, J \gamma^{\prime}\right\rangle=\theta_{i}^{\prime}+\left\langle V^{\prime}, W\right\rangle . $$ Thus $$ \begin{aligned} \int_{\partial \Omega} \kappa_{g} & =\sum_{i=1}^{k} \int_{t_{i}}^{t_{i+1}} \kappa_{g} \\ & =\sum_{i=1}^{k} \int_{t_{i}}^{t_{i+1}} \theta_{i}^{\prime}+\sum_{i=1}^{k} \int_{t_{i}}^{t_{i+1}}\left\langle V^{\prime}, W\right\rangle \\ & =\sum_{i=1}^{k} \Delta \theta_{i}+\int_{\partial \Omega}\left\langle V^{\prime}, W\right\rangle, \end{aligned} $$ where $\Delta \theta_{i}:=\theta_{i}\left(t_{i+1}\right)-\theta_{i}\left(t_{i}\right)$.

Now let us define the total rotation of $\partial \Omega$ with respect to $V$ as $$ \operatorname{rot}_{V}(\partial \Omega):=\sum_{i=1}^{k} \Delta \theta_{i}+\sum_{i=1}^{k} \alpha_{i} . $$ Then we have $$ \int_{\partial \Omega} \kappa_{g}+\sum_{i=1}^{k} \alpha_{i}=\operatorname{rot}_{V}(\partial \Omega)+\int_{\partial \Omega}\left\langle V^{\prime}, W\right\rangle . $$ So to prove the local Gauss-Bonnet theorem, it suffices to show that

1. $\operatorname{rot}_{V}(\partial \Omega)=2 \pi$,
2. $\int_{\partial \Omega}\left\langle V^{\prime}, W\right\rangle=-\int_{\Omega} K$.

#### First part of the proof

This part follows from Hopf's rotation index theorem for planar curves, through a series of reductions as we describe below.

Step 1. First we reduce the problem to the case where $\Omega$ has no corners. To see this let $\partial \Omega_{i}:=\gamma\left(\left[t_{i}, t_{i+1}\right]\right)$ denote the sides of $\gamma$, and $\theta_{i}:\left[t_{i}, t_{i+1}\right] \rightarrow \mathbf{R}$ be the corresponding turning angle functions.
We may deform each $\partial \Omega_{i}$ near its initial boundary point $\gamma\left(t_{i}\right)$, so that $\gamma_{+}^{\prime}\left(t_{i}\right)$ turns around $\gamma\left(t_{i}\right)$ until it coincides with $\gamma_{-}^{\prime}\left(t_{i}\right)$, and the exterior angle $\alpha_{i}$ vanishes. Specifically, if $\alpha_{i}>0$, we perturb $\partial \Omega_{i}$ so that $\gamma_{+}^{\prime}\left(t_{i}\right)$ turns in the counterclockwise direction, and if $\alpha_{i}<0$, we perturb $\partial \Omega_{i}$ so that $\gamma_{+}^{\prime}\left(t_{i}\right)$ turns in the clockwise direction. Then $\theta_{i}\left(t_{i}\right) \rightarrow \theta_{i}\left(t_{i}\right)-\alpha_{i}$. Consequently, $\Delta \theta_{i} \rightarrow \Delta \theta_{i}+\alpha_{i}$, and so the sum $\Delta \theta_{i}+\alpha_{i}$ remains constant. Thus we may smoothen out all corners of $\Omega$ without affecting $\operatorname{rot}_{V}(\partial \Omega)$. So we may assume that $\partial \Omega$ is a $C^{1}$ closed curve, as claimed.

Now, let $\gamma:[a, b] \rightarrow M$ be a $C^{1}$ closed curve which traces $\partial \Omega$ counterclockwise. Then there exists a continuous function $\theta:[a, b] \rightarrow \mathbf{R}$ which yields an angle between $V(t)$ and $\gamma^{\prime}(t)$. So $\operatorname{rot}_{V}(\partial \Omega)=\theta(b)-\theta(a)$. But since $V(a)=V(b)$ and $\gamma^{\prime}(a)=\gamma^{\prime}(b)$, we have $\theta(b)=\theta(a)+2 k \pi$. So $\operatorname{rot}_{V}(\partial \Omega)=2 k \pi$. It remains then to show that $k=1$.

Step 2. Now we show that $\Omega$ may be assumed to lie in $\mathbf{R}^{2}$. To see this note that $\operatorname{rot}_{V}(\partial \Omega)$ remains constant under any continuous deformation of $\partial \Omega$ and its tangent vectors through $C^{1}$ curves, or any continuous change of $V$, since $\operatorname{rot}_{V}(\partial \Omega)$ can assume only a discrete set of values, as discussed above.
In particular, deforming the metric of $M$ to a Euclidean metric is not going to affect $\operatorname{rot}_{V}(\partial \Omega)$. More explicitly, let $\tilde{\gamma}:=X^{-1}(\gamma)$, $\tilde{V}:=d X^{-1}(V)$, and $\tilde{\theta}(t)$ be the continuous angle function between $\tilde{\gamma}^{\prime}(t)$ and $\tilde{V}(t)$ with respect to the metric $g$ that $X$ induces on $U$, i.e., $g_{p}(v, w):=\left\langle d X_{p}(v), d X_{p}(w)\right\rangle$, for all $p \in U$, and $v, w \in \mathbf{R}^{2}$. Then $\tilde{\theta}(t)=\theta(t)$. So $$ \operatorname{rot}_{V}(\partial \Omega)=\operatorname{rot}_{\tilde{V}}(\partial \tilde{\Omega}) . $$

Now let $g_{E}$ be the Euclidean metric on $U$, and consider the one parameter family of metrics $$ g^{s}:=(1-s) g+s g_{E} . $$ Furthermore, let $\tilde{\theta}^{s}(t)$ be the angle function between $\tilde{\gamma}^{\prime}(t)$ and $\tilde{V}(t)$ computed with respect to $g^{s}$. Since $\tilde{\theta}^{s}$ depends continuously on $s$, $\operatorname{rot}_{\tilde{V}}(\partial \tilde{\Omega})$ remains unchanged under the metric deformation. So we may assume that $\Omega \subset \mathbf{R}^{2}$.

Step 3. Finally we show that $V$ may be assumed to be a constant vector field, which will complete the reduction to Hopf's theorem. To this end, we may assume that $U$ is a disk centered at the origin $o$ of $\mathbf{R}^{2}$. Let $\gamma_{\lambda}(t):=\lambda \gamma(t)$ be the rescaling of $\gamma$ by a factor $\lambda<1$. Then $\gamma_{\lambda}$ remains in $U$ and thus $V\left(\gamma_{\lambda}(t)\right)$ will be well defined. Since $\gamma_{\lambda}$ and its tangent vectors change continuously with $\lambda$, it follows that $\operatorname{rot}_{V}(\partial \Omega)=\operatorname{rot}_{V}\left(\partial \Omega_{\lambda}\right)$. Since $V$ is continuous, we may choose $\lambda$ so small that $V$ on $\Omega_{\lambda}$ is arbitrarily close to the value of $V$ at the origin, $V(o)$, as we may wish.
In particular, we may assume that $V \neq-V(o)$ on $\Omega_{\lambda}$. Then the one parameter family of vector fields $$ V^{s}(p):=\frac{(1-s) V(p)+s V(o)}{\|(1-s) V(p)+s V(o)\|} $$ will be well-defined on $\Omega_{\lambda}$ and will continuously change $V$ to the constant vector field $V(o)$ on $\Omega_{\lambda}$. This completes the proof, since $\operatorname{rot}_{V}(\partial \Omega)$ remains constant under continuous deformations of $V$.

Note 4. We may set $V:=X_{1} /\left\|X_{1}\right\|$, which will eliminate the need for Step 3 of the proof above.

Note 5. If we assume that $X$ is conformal, and set $V:=X_{1} /\left\|X_{1}\right\|$, then the entire proof of the first part above follows immediately from Hopf's rotation theorem for piecewise $C^{1}$ curves which we had discussed earlier.

#### Second part of the proof

Suppose that $\tilde{\gamma}(t)=(x(t), y(t))$. Then $V^{\prime}(t)=V_{1} x^{\prime}(t)+V_{2} y^{\prime}(t)$, where $V_{i}$ denotes the partial derivative of $V$ with respect to the $i^{\text{th}}$ coordinate. So, using Green's theorem, we can compute that $$ \begin{aligned} \int_{a}^{b}\left\langle V^{\prime}, W\right\rangle d t & =\int_{a}^{b}\left(\left\langle V_{1}, W\right\rangle x^{\prime}(t)+\left\langle V_{2}, W\right\rangle y^{\prime}(t)\right) d t \\ & =\int_{\partial \tilde{\Omega}}\left\langle V_{1}, W\right\rangle d x+\left\langle V_{2}, W\right\rangle d y \\ & =\int_{\tilde{\Omega}}\left(\left\langle V_{2}, W\right\rangle_{1}-\left\langle V_{1}, W\right\rangle_{2}\right) d x d y \\ & =\int_{\tilde{\Omega}}\left(\left\langle V_{2}, W_{1}\right\rangle-\left\langle V_{1}, W_{2}\right\rangle\right) d x d y . \end{aligned} $$ So it remains to check that $$ \left\langle V_{2}, W_{1}\right\rangle-\left\langle V_{1}, W_{2}\right\rangle=-K \sqrt{g}, $$ where recall that $g=\operatorname{det}\left(g_{i j}\right)$. To establish the above formula we need to rewrite the left hand side in terms of $X$.
To this end note that since $X_{1}$ and $X_{2}$ form a basis over $\Omega$, we have $$ V=a X_{1}+b X_{2}, \quad W=c X_{1}+d X_{2}, $$ for some functions $a, b, c, d$ on $\Omega$. Before performing the above substitutions note that $$ \left\langle V_{2}, W_{1}\right\rangle=\left\langle\left\langle V_{2}, N\right\rangle N+\left\langle V_{2}, W\right\rangle W, W_{1}\right\rangle=\left\langle V_{2}, N\right\rangle\left\langle W_{1}, N\right\rangle . $$ Similarly, $$ \left\langle V_{1}, W_{2}\right\rangle=\left\langle\left\langle V_{1}, N\right\rangle N+\left\langle V_{1}, W\right\rangle W, W_{2}\right\rangle=\left\langle V_{1}, N\right\rangle\left\langle W_{2}, N\right\rangle . $$ So $$ \begin{aligned} \left\langle V_{2}, W_{1}\right\rangle-\left\langle V_{1}, W_{2}\right\rangle & =\left\langle V_{2}, N\right\rangle\left\langle W_{1}, N\right\rangle-\left\langle V_{1}, N\right\rangle\left\langle W_{2}, N\right\rangle \\ & =\left\langle V, N_{2}\right\rangle\left\langle W, N_{1}\right\rangle-\left\langle V, N_{1}\right\rangle\left\langle W, N_{2}\right\rangle . \end{aligned} $$ Now the substitutions we had mentioned above yield that $$ \begin{aligned} \left\langle V_{2}, W_{1}\right\rangle-\left\langle V_{1}, W_{2}\right\rangle & =(a d-b c)\left(\left\langle X_{1}, N_{2}\right\rangle\left\langle X_{2}, N_{1}\right\rangle-\left\langle X_{1}, N_{1}\right\rangle\left\langle X_{2}, N_{2}\right\rangle\right) \\ & =(a d-b c)\left(\left\langle X_{12}, N\right\rangle\left\langle X_{21}, N\right\rangle-\left\langle X_{11}, N\right\rangle\left\langle X_{22}, N\right\rangle\right) \\ & =-(a d-b c) \operatorname{det}\left(\ell_{i j}\right) \\ & =-(a d-b c) K g, \end{aligned} $$ since $K=\operatorname{det}\left(\ell_{i j}\right) / g$. Finally note that $$ 1=\|V \times W\|=(a d-b c)\left\|X_{1} \times X_{2}\right\|=(a d-b c) \sqrt{g}, $$ which completes the proof.
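The pointwise identity $\left\langle V_{2}, W_{1}\right\rangle-\left\langle V_{1}, W_{2}\right\rangle=-K \sqrt{g}$ can be spot-checked numerically. The sketch below (ours, not part of the notes) uses the standard chart of the unit sphere, takes $V:=X_{1}/\|X_{1}\|$ as in Note 4, and approximates all derivatives by central differences; the result should match $-\sin u$, since $K=1$ and $\sqrt{g}=\sin u$.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(a):
    r = math.sqrt(dot(a, a))
    return tuple(x / r for x in a)

def X(u, v):
    """Standard parametrization of the unit sphere (K = 1, sqrt(g) = sin u)."""
    return (math.sin(u) * math.cos(v), math.sin(u) * math.sin(v), math.cos(u))

def VW(u, v, h=1e-6):
    """The frame V = X_1/|X_1|, W = N x V, with X_1, X_2 by central differences."""
    X1 = tuple((p - q) / (2 * h) for p, q in zip(X(u + h, v), X(u - h, v)))
    X2 = tuple((p - q) / (2 * h) for p, q in zip(X(u, v + h), X(u, v - h)))
    N = normalize(cross(X1, X2))
    V = normalize(X1)
    return V, cross(N, V)

def defect(u, v, h=1e-4):
    """<V_2, W_1> - <V_1, W_2>, which should equal -K*sqrt(g) = -sin(u)."""
    Vu_p, Wu_p = VW(u + h, v)
    Vu_m, Wu_m = VW(u - h, v)
    Vv_p, Wv_p = VW(u, v + h)
    Vv_m, Wv_m = VW(u, v - h)
    V1 = tuple((p - q) / (2 * h) for p, q in zip(Vu_p, Vu_m))
    W1 = tuple((p - q) / (2 * h) for p, q in zip(Wu_p, Wu_m))
    V2 = tuple((p - q) / (2 * h) for p, q in zip(Vv_p, Vv_m))
    W2 = tuple((p - q) / (2 * h) for p, q in zip(Wv_p, Wv_m))
    return dot(V2, W1) - dot(V1, W2)
```

Any other chart and surface could be substituted for `X`, as long as the point avoids the poles where $X_{1}, X_{2}$ degenerate.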
### The general Gauss-Bonnet theorem

Here we generalize the local Gauss-Bonnet theorem to the case where the region $\Omega$ may not be simply connected. A regular region is triangular provided that it is simply connected and has exactly three corners, or vertices. Each segment of the boundary between a pair of vertices will be called an edge of the region. By a triangulation of a regular region $\Omega$ we mean a partition of $\Omega$ into a finite collection of triangular regions $T_{i}$, called faces of the triangulation, such that whenever $T_{i} \cap T_{j} \neq \emptyset$ then either $T_{i} \cap T_{j}$ is a common vertex or a common edge of $T_{i}$ and $T_{j}$. The Euler characteristic of $\Omega$ is defined as $$ \chi(\Omega):=V-E+F, $$ where $V$, $E$, and $F$ are the number of (distinct) vertices, edges, and faces of a triangulation of $\Omega$.

Theorem 6 (General Gauss-Bonnet theorem). Let $\Omega \subset M$ be a regular region with exterior angles $\alpha_{i}$. Then $$ \int_{\Omega} K+\int_{\partial \Omega} \kappa_{g}+\sum_{i=1}^{k} \alpha_{i}=2 \pi \chi(\Omega) . $$

Note that the above theorem shows that the Euler characteristic is independent of the choice of triangulation. Also, the above theorem applies to the case where $M$ is a compact connected surface without boundary, and $\Omega$ is all of $M$, in which case we obtain: $$ \int_{M} K=2 \pi \chi(M) . $$

The proof of the above theorem follows from applying the Gauss-Bonnet theorem to the faces of a triangulation, and then summing over all the faces. Let $T_{i}$ denote the faces of a triangulation of $\Omega$. Then $$ \int_{T_{i}} K+\int_{\partial T_{i}} \kappa_{g}+\sum_{j=1}^{3} \alpha_{i j}=2 \pi . $$ Note that the geodesic curvatures are computed with respect to the counterclockwise orientation of the boundary of each face. Thus the geodesic curvature of each edge in the interior of $\Omega$ will have opposite signs when computed with respect to the faces adjacent to that edge.
Hence if we sum the above equality over all faces of the triangulation we obtain $$ 2 \pi F=\int_{\Omega} K+\int_{\partial \Omega} \kappa_{g}+\sum_{i=1}^{F} \sum_{j=1}^{3} \alpha_{i j} . $$ Hence it remains to show that $$ \sum_{i=1}^{F} \sum_{j=1}^{3} \alpha_{i j}=\sum_{i=1}^{V_{e x t}} \beta_{i}+2 \pi(E-V), $$ where $\beta_{i}$ are the turning angles of $\partial \Omega$ at the exterior vertices of the triangulation, whose number we denote by $V_{e x t}$; in particular $\beta_{i}=0$ at the exterior vertices which are not corners of $\Omega$, so $\sum_{i=1}^{V_{e x t}} \beta_{i}=\sum_{i=1}^{k} \alpha_{i}$. To see this note that $$ \begin{aligned} \sum_{i=1}^{F} \sum_{j=1}^{3} \alpha_{i j} & =\sum_{i=1}^{F} \sum_{j=1}^{3}\left(\pi-\alpha_{i j}^{\prime}\right) \\ & =3 \pi F-\sum_{i=1}^{F} \sum_{j=1}^{3} \alpha_{i j}^{\prime} \\ & =3 \pi F-2 \pi V_{i n t}-\sum_{i=1}^{V_{e x t}}\left(\pi-\beta_{i}\right) \\ & =\left(3 F-2 V_{i n t}-V_{e x t}\right) \pi+\sum_{i=1}^{V_{e x t}} \beta_{i}, \end{aligned} $$ where $\alpha_{i j}^{\prime}$ are the interior angles of the triangular regions, and $V_{i n t}$ is the number of vertices in the interior of $\Omega$. Next note that $3 F=2 E_{\text {int }}+E_{\text {ext }}$, where $E_{\text {int }}$ and $E_{\text {ext }}$ denote respectively the number of interior and exterior edges of the triangulation. So $$ \begin{aligned} 3 F-2 V_{i n t}-V_{\text {ext }} & =2 E_{i n t}+E_{\text {ext }}-2 V_{i n t}-V_{\text {ext }} \\ & =2 E_{i n t}+2 E_{\text {ext }}-2 V_{i n t}-2 V_{\text {ext }} \\ & =2(E-V), \end{aligned} $$ which completes the argument. Here we have used the fact that $E_{\text {ext }}=V_{\text {ext }}$, so $E_{\text {ext }}-V_{\text {ext }}=0=2 E_{\text {ext }}-2 V_{\text {ext }}$.

### Applications of the Gauss-Bonnet theorem

All the following exercises follow fairly quickly from the Gauss-Bonnet theorem.

Exercise 7. Show that the sum of the angles in a planar triangle is $\pi$.

Exercise 8. Show that the total geodesic curvature of a simple closed planar curve is $2 \pi$.

Exercise 9.
Show that the Gaussian curvature of a surface which is homeomorphic to the torus must always be equal to zero at some point.

Exercise 10. Show that a simple closed curve with total geodesic curvature zero on a sphere bisects the area of the sphere.

Exercise 11. Show that there exists at most one closed geodesic on a cylinder with negative curvature.

Exercise 12. Show that the area of a geodesic polygon with $k$ vertices on a sphere of radius 1 is equal to the sum of its angles minus $(k-2) \pi$.

Exercise 13. Let $p$ be a point of a surface $M$, $T$ be a geodesic triangle which contains $p$, and $\alpha, \beta, \gamma$ be the angles of $T$. Show that $$ K(p)=\lim _{T \rightarrow p} \frac{\alpha+\beta+\gamma-\pi}{\operatorname{Area}(T)} . $$ In particular, note that the above proves Gauss's Theorema Egregium.

Exercise 14. Show that the sum of the angles of a geodesic triangle on a surface of positive curvature is more than $\pi$, and on a surface of negative curvature is less than $\pi$.

Exercise 15. Show that on a simply connected surface of negative curvature two geodesics emanating from the same point will never meet.

Exercise 16. Let $M$ be a surface homeomorphic to a sphere in $\mathbf{R}^{3}$, and let $\Gamma \subset M$ be a closed geodesic. Show that each of the two regions bounded by $\Gamma$ have equal areas under the Gauss map.

Exercise 17. Compute the area of the pseudo-sphere, i.e. the surface of revolution obtained by rotating a tractrix.
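For Exercise 17, here is a numerical sketch (ours; the parametrization of the tractrix below is one common choice, not fixed by the notes). Rotating the branch $c(u)=(\sin u,\ \cos u+\log\tan(u/2))$, $u\in(0,\pi/2]$, about the $z$-axis and integrating $2\pi x\, ds$ gives area $2\pi$ for one trumpet, so the full pseudo-sphere (both trumpets) has area $4\pi$ — the same as the unit sphere, even though here $K \equiv -1$ and the total curvature is $-4\pi$.

```python
import math

def pseudosphere_area(eps=1e-4, n=20000):
    """Area of one trumpet of the pseudo-sphere: rotate the tractrix branch
    c(u) = (sin u, cos u + log tan(u/2)), u in (eps, pi/2], about the z-axis,
    and integrate 2*pi*x*|c'(u)| du by the midpoint rule."""
    a, b = eps, math.pi / 2
    h = (b - a) / n
    area = 0.0
    for i in range(n):
        u = a + (i + 0.5) * h
        x = math.sin(u)                # radius of revolution
        # |c'(u)|: c'(u) = (cos u, cos(u)^2 / sin u)
        speed = math.hypot(math.cos(u), math.cos(u) ** 2 / math.sin(u))
        area += 2 * math.pi * x * speed * h
    return area
```

The cutoff `eps` avoids the cusp end of the tractrix; the integrand simplifies to $2\pi\cos u$, so the exact value over $(\varepsilon, \pi/2]$ is $2\pi(1-\sin\varepsilon)$, which tends to $2\pi$.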
\begin{definition}[Definition:Standard Affine Structure on Vector Space] Let $E$ be a vector space. Let $\EE$ be the underlying set of $E$. Let $+$ denote the addition operation $E \times E \to E$, viewed as a mapping $\EE \times E \to \EE$. Let $-$ denote the subtraction operation $E \times E \to E$, viewed as a mapping $\EE \times \EE \to E$. Then the set $\EE$, together with the vector space $E$ and the operations $+,-$, is called the '''standard affine structure''' on the vector space $E$. \end{definition}
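The definition can be mirrored by a small sketch (ours, not ProofWiki's) over $E = \mathbf{R}^2$: the same coordinate pairs play two roles — points of the underlying set $\mathcal E$ and vectors of $E$ — and the type distinction enforces which of the affine operations are available.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vector:          # an element of the vector space E
    x: float
    y: float
    def __add__(self, other):          # vector addition in E
        return Vector(self.x + other.x, self.y + other.y)

@dataclass(frozen=True)
class Point:           # an element of the underlying set EE
    x: float
    y: float
    def __add__(self, v: Vector):      # +: EE x E -> EE
        return Point(self.x + v.x, self.y + v.y)
    def __sub__(self, other):          # -: EE x EE -> E
        return Vector(self.x - other.x, self.y - other.y)

p, q = Point(1.0, 2.0), Point(4.0, 6.0)
assert q - p == Vector(3.0, 4.0)       # difference of points is a vector
assert p + (q - p) == q                # translating p by q - p returns q
```

Note that `Point + Point` is deliberately undefined, matching the fact that the affine structure forgets the origin of $E$.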
Quantifying the survival uncertainty of Wolbachia-infected mosquitoes in a spatial model

Martin Strugarek 1,2, Nicolas Vauchelet 3, and Jorge P. Zubelli 4

1. AgroParisTech, 16 rue Claude Bernard, 75231 Paris Cedex 05, France
2. Sorbonne Université, Université Paris-Diderot SPC, CNRS, INRIA, Laboratoire Jacques-Louis Lions, équipe Mamba, F-75005 Paris, France
3. LAGA - UMR 7539 Institut Galilée, Université Paris 13, 99, avenue Jean-Baptiste Clément, 93430 Villetaneuse, France
4. IMPA, Estrada Dona Castorina, 110 Jardim Botânico, 22460-320, Rio de Janeiro, RJ, Brazil

* Corresponding author: M. Strugarek.

Mathematical Biosciences & Engineering, August 2018, 15(4): 961-991. doi: 10.3934/mbe.2018043. Received July 24, 2017. Accepted January 03, 2018. Published March 2018.

Artificial releases of Wolbachia-infected Aedes mosquitoes have been under study in the past years for fighting vector-borne diseases such as dengue, chikungunya and zika. Several strains of this bacterium cause cytoplasmic incompatibility (CI) and can also affect their host's fecundity or lifespan, while highly reducing vector competence for the main arboviruses. We consider and answer the following questions: 1) What should be the initial condition (i.e. size of the initial mosquito population) to have invasion with one mosquito release source? We note that it is hard to have an invasion in such a case. 2) How many release points does one need to have a sufficiently high probability of invasion? 3) What happens if one accounts for uncertainty in the release protocol (e.g. unequal spacing among release points)?
We build a framework based on existing reaction-diffusion models for the uncertainty quantification in this context, obtain both theoretical and numerical lower bounds for the probability of release success, and give new quantitative results on the one-dimensional case.

Keywords: Reaction-diffusion equation, Wolbachia, uncertainty quantification, population replacement, mosquito release protocol.

Mathematics Subject Classification: Primary: 35K57, 35B40, 92D25; Secondary: 60H30.

Citation: Martin Strugarek, Nicolas Vauchelet, Jorge P. Zubelli. Quantifying the survival uncertainty of Wolbachia-infected mosquitoes in a spatial model. Mathematical Biosciences & Engineering, 2018, 15 (4): 961-991. doi: 10.3934/mbe.2018043

Figure 1. Profile of $f$ defined in (2) (left) and of its anti-derivative $F$ (right) with parameters given by (5).

Figure 2. Time dynamics with three different initial releases belonging to the set $RP_{50}^2(N)$ of (10), with $N/(N+N_0) = 0.75$. Integration is performed on the domain $[-L, L]$ with $L = 50 \textrm{ km}$. The release box is plotted in dashed red on the first picture of each configuration. Left: release box $[-2 L/3, 2 L/3]^2$. Center: release box $[-L/2, L/2]^2$. Right: release box $[-L/12.5, L/12.5]^2$. From top to bottom: increasing time $t \in \{0, 1, 25, 50, 75\}$, in days. The color indicates the value of $p$ (with the scale on the right).

Figure 3. Comparison of minimal invasion radii $R_{\alpha}$ (obtained by energy), in dashed line, and $L_{\alpha}$ (obtained by critical bubbles), in solid line, varying with the maximal infection frequency level $\alpha$. The scale is such that $\sigma=1$.

Figure 4. Two $G_{\sigma}$ profiles and their sum (in thick line). The level $G_{\sigma} (0)$ is the dashed line. On the left, $h=\sqrt{2\log(2)\sigma}$. On the right, $h>\sqrt{2\log(2)\sigma}$.

Figure 5.
Discrete & Continuous Dynamical Systems - B, 2019, 24 (5) : 2073-2091. doi: 10.3934/dcdsb.2019085 Perla El Kettani, Danielle Hilhorst, Kai Lee. A stochastic mass conserved reaction-diffusion equation with nonlinear diffusion. Discrete & Continuous Dynamical Systems - A, 2018, 38 (11) : 5615-5648. doi: 10.3934/dcds.2018246 Henri Berestycki, Nancy Rodríguez. A non-local bistable reaction-diffusion equation with a gap. Discrete & Continuous Dynamical Systems - A, 2017, 37 (2) : 685-723. doi: 10.3934/dcds.2017029 Elena Trofimchuk, Sergei Trofimchuk. Admissible wavefront speeds for a single species reaction-diffusion equation with delay. Discrete & Continuous Dynamical Systems - A, 2008, 20 (2) : 407-423. doi: 10.3934/dcds.2008.20.407 Tomás Caraballo, José A. Langa, James C. Robinson. Stability and random attractors for a reaction-diffusion equation with multiplicative noise. Discrete & Continuous Dynamical Systems - A, 2000, 6 (4) : 875-892. doi: 10.3934/dcds.2000.6.875 Elena Beretta, Cecilia Cavaterra. Identifying a space dependent coefficient in a reaction-diffusion equation. Inverse Problems & Imaging, 2011, 5 (2) : 285-296. doi: 10.3934/ipi.2011.5.285 Michio Urano, Kimie Nakashima, Yoshio Yamada. Transition layers and spikes for a reaction-diffusion equation with bistable nonlinearity. Conference Publications, 2005, 2005 (Special) : 868-877. doi: 10.3934/proc.2005.2005.868 Takanori Ide, Kazuhiro Kurata, Kazunaga Tanaka. Multiple stable patterns for some reaction-diffusion equation in disrupted environments. Discrete & Continuous Dynamical Systems - A, 2006, 14 (1) : 93-116. doi: 10.3934/dcds.2006.14.93 Samira Boussaïd, Danielle Hilhorst, Thanh Nam Nguyen. Convergence to steady state for the solutions of a nonlocal reaction-diffusion equation. Evolution Equations & Control Theory, 2015, 4 (1) : 39-59. doi: 10.3934/eect.2015.4.39 Maho Endo, Yuki Kaneko, Yoshio Yamada. Free boundary problem for a reaction-diffusion equation with positive bistable nonlinearity. 
Discrete & Continuous Dynamical Systems - A, 2019, 0 (0) : 0-0. doi: 10.3934/dcds.2020033 Tarik Mohammed Touaoula, Mohammed Nor Frioui, Nikolay Bessonov, Vitaly Volpert. Dynamics of solutions of a reaction-diffusion equation with delayed inhibition. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020193 Henri Berestycki, Luca Rossi. Reaction-diffusion equations for population dynamics with forced speed I - The case of the whole space. Discrete & Continuous Dynamical Systems - A, 2008, 21 (1) : 41-67. doi: 10.3934/dcds.2008.21.41 Henri Berestycki, Luca Rossi. Reaction-diffusion equations for population dynamics with forced speed II - cylindrical-type domains. Discrete & Continuous Dynamical Systems - A, 2009, 25 (1) : 19-61. doi: 10.3934/dcds.2009.25.19 Martin Strugarek Nicolas Vauchelet Jorge P. Zubelli
CommonCrawl
\begin{document} \title{Bayesian Regression for a Dirichlet Distributed Response using Stan}

\paragraph{Abstract} For an observed response that is composed of a set -- or vector -- of positive values that sum up to 1, the Dirichlet distribution~\cite{bolshev2018} is a helpful mathematical construction for the quantification of the data-generating mechanism underlying this process. In applications, these response-sets are usually denoted as \textsl{proportions}, or \textsl{compositions of proportions}, and by means of covariates, one wishes to model how changes in the covariate values shift the distribution of the response compositions. This article gives a brief introduction to this class of regression models, and, based on a recently developed formulation \cite{maier2014}, illustrates the implementation in the Bayesian inference framework \textsf{Stan}.

\section{Introduction}

Let us denote by $\mathbf{y}$ a vector of proportion values that is an observation of a random variable~$\mathbf{Y}$. As an application example, we can access blood sample composition data via the \textsf{R} add-on package \textsf{DirichletReg} \cite{maier2015}. The data set \texttt{BloodSamples} gives 36 compositions of Albumin, Pre-Albumin, Globulin A, and Globulin B in relation to two types of disease (14 patients suffer from disease A, 16 from disease B, and 6 are unclassified).
The first 6 observations show the following values:
\begin{verbatim}
library("DirichletReg")
Bld <- BloodSamples
head(Bld)
   Albumin Pre.Albumin Globulin.A Globulin.B Disease New
A1   0.348       0.197      0.201      0.254       A  No
A2   0.386       0.239      0.141      0.242       A  No
A3   0.471       0.240      0.089      0.200       A  No
A4   0.427       0.245      0.111      0.217       A  No
A5   0.346       0.230      0.204      0.219       A  No
A6   0.485       0.231      0.101      0.183       A  No
\end{verbatim}
\begin{figure}
\caption{Blood sample data: Grey points (lines indicate connections of proportions as coming from a shared observation unit) show the original data, black points the respective sample means.}
\label{fig:blood:samples:descriptive}
\end{figure}
Figure~\ref{fig:blood:samples:descriptive} on page~\pageref{fig:blood:samples:descriptive} visualizes the data. The task for a Dirichlet regression model is here to work out the differences in the composition of blood compartments by differences in the levels of a binary disease variable (\texttt{Disease}; unclassified observations are dropped). As $\mathbf{Y}$ is distributed in $C\geq 2$ dimensions, i.e.
$$
\mathbf{Y}=\left\{\left(Y_1,\ldots, Y_C\right)^\top;\,Y_1+\ldots+Y_C=1\right\},
$$
one observation $\mathbf{y}_i=\left(y_{1,i},\ldots,y_{C,i}\right)^\top$ for observation unit $i$ is a vector of values $y_{c,i}$, $c=1, \ldots, C$, with $\sum_{c=1}^{C}y_{c,i}=1$. For each $Y_{c}$, one usually assumes a strictly positive component below $1$, i.e. $Y_{c}\in\left(0, 1\right)\,\forall c\in\left\{1, \ldots, C\right\}$.

\paragraph{Zero observations} In applications, zero observations -- i.e. $y_{c,i}=0$ -- are often present, either because the corresponding proportion $y_{c,i}$ is completely absent, or because $y_{c,i}$ is only observable with certain precision and consequently at risk of falling below a detection limit. Several strategies have been introduced for how to deal with, and how to distinguish among, zero observations (see \cite{martinfernandezetal2003} for an overview).
For the remainder of this article, we assume that any zero observation comes from the class of \textsl{rounded zeroes} \cite{martinfernandezetal2003}:
\begin{quote}
"One typical example of rounded zero is the zero in a component of a particular mineral which indicates that no quantifiable proportion of the mineral has been recorded by the measurement process. Because this kind of zeros is usually understood as 'a trace too small to measure', it seems reasonable to replace them by a suitable small value [...]." \cite{martinfernandezetal2003}
\end{quote}
In case of zero observations, the transformation proposed by \cite{smithsonverkuilen2006} -- applied by \cite{maier2014} and implemented in the \textsf{R} package \textsf{DirichletReg} \cite{maier2015} in function \texttt{DR\_data} as argument \texttt{trafo} -- is applied:
\begin{align}
y_{c,i}=
\begin{cases}
y_{c,i},&\text{ if }y_{c,i}>0,\\
\dfrac{y_{c,i}\cdot\left(n-1\right)+\frac{1}{C}}{n},&\text{ else,}
\end{cases}
\end{align}
where $n$ is the number of observation units.

\paragraph{Dirichlet distribution} The Dirichlet distribution~\cite{bolshev2018} is defined as:
\begin{align}
f\left(\mathbf{y}\vert\boldsymbol{\alpha}\right) = \dfrac{1}{B\left(\boldsymbol{\alpha}\right)}\prod\limits_{c=1}^{C}y_{c}^{\alpha_{c}-1}
\end{align}
with shape parameter vector $\boldsymbol{\alpha}=\left(\alpha_1,\ldots,\alpha_C\right)^\top$, where the \textsl{multinomial beta function}, $B\left(\boldsymbol{\alpha}\right)=\left(\prod_{c=1}^{C}\Gamma\left(\alpha_{c}\right)\right)/\Gamma\left(\sum_{c=1}^{C}\alpha_{c}\right)$, serves as a normalizing constant, and $\Gamma\left(z\right)=\int_0^{\infty}t^{z-1}\exp\left(-t\right)\,\mathrm{d}t$ denotes the \textsl{Gamma function}. In the following, $\mathbf{Y}\sim D\left(\boldsymbol{\alpha}\right)$ will denote that a random variable $\mathbf{Y}=\left(Y_1,\ldots,Y_C\right)^\top$ is Dirichlet distributed.
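As a quick sanity check of this density and its normalizing constant, the definition can be evaluated directly in \textsf{R}. The function \texttt{ddirich} below is a hand-rolled illustration (not part of \textsf{DirichletReg}), working on the log scale for numerical stability:

```r
# Dirichlet density following the definition above; the multinomial
# beta function is computed on the log scale via lgamma().
ddirich <- function(y, alpha) {
  logB <- sum(lgamma(alpha)) - lgamma(sum(alpha))
  exp(sum((alpha - 1) * log(y)) - logB)
}

ddirich(c(0.2, 0.3, 0.5), alpha = c(1, 1, 1))
# for alpha = (1,1,1) the density is flat on the simplex with value
# 1/B(alpha) = Gamma(3) = 2, irrespective of y
```

For a flat shape vector the density reduces to the constant $1/B\left(\boldsymbol{\alpha}\right)$, which makes the normalizing constant easy to verify by hand.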
From this parametrisation, it is evident that the elements of $\boldsymbol{\alpha}$ must control location as well as scale of the distribution: the sum $\alpha_0:=\sum_{c=1}^{C}\alpha_c$ of all $C$ components of parameter vector $\boldsymbol{\alpha}$ is interpreted as precision, i.e. "the higher this value, the more density is near the expected value" \cite{maier2014}, while the component-wise expected values are given by the ratio of parameters to precision, i.e.
\begin{align}
E\left(Y_c\right)=\dfrac{\alpha_c}{\alpha_0}.
\end{align}
Two direct consequences of this definition:
\begin{itemize}
\item Expected values are proportions summing up to 1, and by this
\item the vector of $C$ expected values has only $C-1$ degrees of freedom: if we hold $C-1$ expected values fixed at certain values, the last expected value is given by the difference to 1.
\end{itemize}

\paragraph{Regression modeling for Dirichlet distribution} \cite{maier2014} introduces two parameter formulations for Dirichlet regression models, of which the so-called \textsl{alternative formulation} will be used here:
\begin{align}
\mathbf{Y}_{i}\vert\mathbf{x}_i\sim D\left(\boldsymbol{\mu}_i,\theta_i\right),
\end{align}
with expectation vector $\boldsymbol{\mu}_i=\left(\mu_{1,i},\ldots,\mu_{C,i}\right)^\top$, expectation vector element $\mu_{c,i}=E\left(Y_{c,i}\vert\mathbf{x}_i\right)$, component index $c=1,\ldots,C$, and precision $\theta_i=\alpha_{0,i}$. Using component-wise coefficient vectors~$\boldsymbol{\beta}_c$ for components $c=1,\ldots,C$, we can set up linear predictors $\eta_{c,i}$ defining $\mu_{c,i}$ as:
$$
\eta_{c,i}=\mathbf{x}_i^\top\boldsymbol{\beta}_c,
$$
where for one reference component $\tilde{c}$, all elements of $\boldsymbol{\beta}_{\tilde{c}}$ are set equal to $0$, i.e. $\boldsymbol{\beta}_{\tilde{c}}=\mathbf{0}$, to guarantee identifiability as a consequence of the degrees of freedom being reduced to $C-1$ (usually, $\tilde{c}$ is selected as $1$ or $C$; here as $C$).
Plugging these linear predictors into a ratio of model components -- after applying the exponential function to guarantee positivity -- the component-wise expected values conditional on $\mathbf{x}_{i}$ are defined as:
$$
\mu_{c,i}=\dfrac{\exp\left(\eta_{c,i}\right)}{\sum\limits_{d=1}^{C}\exp\left(\eta_{d,i}\right)}.
$$
Applying again the exponential function, and introducing coefficient vector $\boldsymbol{\gamma}$, we can express the global precision conditional on $\mathbf{z}_{i}$ as:
$$
\theta_{i}=\exp\left(\mathbf{z}_i^\top\boldsymbol{\gamma}\right).
$$
We get to densities:
\begin{align}
f\left(\mathbf{y}_{i}\vert\left(\mu_{1,i},\mu_{2,i},\ldots,\mu_{C,i}\right)^\top,\theta_{i}\right)=\nonumber\\=f\left(\mathbf{y}_{i}\vert\boldsymbol{\mu}_{i},\theta_{i}\right),
\end{align}
and likelihood:
\begin{align}
L\left(\left\{\mathbf{y}_i;\,i=1,\ldots,n\right\}\right)=\prod\limits_{i=1}^{n} f\left(\mathbf{y}_{i}\vert\boldsymbol{\mu}_{i},\theta_{i}\right).
\end{align}

\paragraph{Why this manuscript?} As seen above, one component is redundant since it is a linear combination of the others, but in practice we desire to know the relationship between the explanatory variables and the outcomes in all components. Often the components are equally important in the view of the researcher, and it is not sensible to relegate one to reference status. The use of a Bayesian simulation framework allows us -- by the use of simple post-estimation calculations -- to quantify uncertainty on both the coefficient scales and the proportion scale. We are able to look at all quantities as an ensemble, obtain statistical inference statements for all model components, and by this are able to move between the 'x in c' and 'c across x' perspectives.
This is a great advantage in comparison to the ML framework \cite{maier2015}, where predictions are also easily achieved and readily implemented, but where the expression of uncertainties for all quantities and/or on the basis of proportions quickly becomes tedious, as handling variances and covariances is rather difficult for nonlinear combinations. \cite{vandermerwe2018} recently proposed a Bayesian solution to this practical limitation based on a penalized likelihood formulation respecting the restrictions on the degrees of freedom as described above. Following the simulation from the joint posterior, \cite{vandermerwe2018} applies post-simulation calculations "to ensure that fitted expected values sum to one for each observation" \cite{vandermerwe2018}:
$$
\mu_{c,i,S}^{\text{adj}}=\dfrac{\mu_{c,i,S}^{\text{sim}}}{\sum\limits_{d=1}^{C}\mu_{d,i,S}^{\text{sim}}}.
$$
This is a somewhat counter-intuitive step, as the sampling of coefficients is performed such that it should already respect the degrees-of-freedom restriction by suitably penalizing the likelihood. Further, it removes sampling variation that was initially accepted during the Bayesian sampling algorithm as being plausible with respect to the observed data and the statistical model. By this, one loses the direct linkage between the Bayesian model expressed by the joint posterior and the adjusted samples from the joint posterior $\mu_{c,i,S}^{\text{adj}}$. In order to overcome this issue, but still remain in the Bayesian framework that \cite{vandermerwe2018} introduced with good reason:
\begin{quote}
"Specifically, we use the Bayesian simulation framework, which holds many advantages. It allows us to directly quantify uncertainty in both the coefficients and the means. Also, when moving to a predictive framework, construction of predictive densities is relatively straightforward."
\cite{vandermerwe2018}
\end{quote}
I will introduce a \textsf{Stan} implementation using one response component as reference -- as done in the ML estimation framework by \cite{maier2014} -- and demonstrate how post-estimation calculations overcome the limitations induced by the reference-component definition in the ML estimation framework. This post-estimation technique is simple: calculate the expected value $\mu_{C,i}$ of reference component $C$ -- conditional on the covariate values of an observation unit $i$ -- as the difference between 1 (the boundary condition to which all expected values must sum up) and the sum of the other expected values $\mu_{1,i},\ldots,\mu_{C-1,i}$:
$$
\mu_{C,i,s}=1-\sum\limits_{d=1}^{C-1}\mu_{d,i,s},
$$
-- so far this is no different from what we need to do to get to an estimate for $\mu_C$ in the ML framework -- repeatedly for each sample $s=1,\ldots,S$ from the posterior. By this, we are equipped with not only one estimate, but a full distribution of samples for $\mu_C$ -- in the same fashion as for the other components -- and consequently, uncertainty measures can be expressed for all response components -- with, of course, still the reduced degrees of freedom of $C-1$. \cite{vandermerwe2018} based his software implementation on Gibbs sampling \cite{gelfandsmith1990} via the \textsf{OpenBUGS} program \cite{lunnetal2009}, but as I personally feel much more comfortable and experienced using the alternative Bayesian inference framework \textsf{Stan}, I will introduce the new approach using this framework. However, any researcher should technically be able to perform her/his calculations in a similar way in the \textsf{OpenBUGS} framework. The remainder of this article is organized as follows: Section 2 introduces the Bayesian model as well as the \textsf{Stan} estimation framework for the Dirichlet regression model, and Section 3 shows an application example.
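Before moving on, the post-estimation step just described can be sketched in a few lines of \textsf{R}; the matrix of posterior draws below is made up for illustration only, in practice each row would hold the $C-1$ non-reference expected values of one posterior sample $s$:

```r
# Hypothetical posterior draws for the C-1 = 3 non-reference component
# means (rows = posterior samples s, columns = components 1..C-1).
mu_first <- matrix(c(0.40, 0.25, 0.15,
                     0.42, 0.22, 0.14),
                   nrow = 2, byrow = TRUE)

# Reference component via the sum-to-one constraint, sample by sample;
# this yields a full posterior distribution for mu_C as well.
mu_ref <- 1 - rowSums(mu_first)
mu_ref
# 0.20 0.22
```

Applied to all $S$ draws, this gives credible intervals for the reference component at no extra cost.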
\section{Methods and software implementation}

We move to a Bayesian framework and introduce normal priors with mean 0 and variance $5^2$ on all model parameters:
\begin{align}
\mathbf{y}_{i}\vert\mathbf{x}_i,\mathbf{z}_i&\sim D\left(\boldsymbol{\mu}_{i},\theta_{i}\right),\\
\mu_{c,i}&=\dfrac{\exp\left(\mathbf{x}_i^\top\boldsymbol{\beta}_c\right)}{\sum\limits_{d=1}^{C}\exp\left(\mathbf{x}_i^\top\boldsymbol{\beta}_d\right)},\nonumber\\
\theta_{i}&=\exp\left(\mathbf{z}_i^\top\boldsymbol{\gamma}\right),\nonumber\\
\beta_{j,c}&\sim N\left(0,5^2\right),\quad j=1,\ldots,p_{\boldsymbol{\beta}},\quad \forall c\neq\tilde{c},\nonumber\\
\beta_{j,\tilde{c}}&=0,\quad j=1,\ldots,p_{\boldsymbol{\beta}},\nonumber\\
\gamma_{j}&\sim N\left(0,5^2\right),\quad j=1,\ldots,p_{\boldsymbol{\gamma}}.\nonumber
\end{align}
One can consider these normal priors weakly informative. They certainly do not act uniformly on the coefficients, but the informativeness of a prior does not depend on its absolute spread per se; rather, it depends on how flat the prior is in the region of high likelihood, and vice versa. For unit-scale covariates, one can assume the region of high likelihood for the coefficients to lie in an interval of only a few units around 0, so in almost any such application these priors still let the data speak for themselves.

\subsection{Software implementation}

The Bayesian estimation of additive regression models with Dirichlet distributed responses is implemented in \textsf{Stan}~\cite{carpenteretal2017} and practically applied using \textsf{RStan}~\cite{rstan2016}. \textsf{RStan} functions as an \textsf{R}~\cite{rcoreteam2017} interface to \textsf{Stan} using common \textsf{R} syntax. \textsf{Stan} is statistical software written in \textsf{C++} which operationalizes Bayesian inference by drawing samples from a regression model's joint posterior in the form of Markov chains.
\textsf{Stan}, a probabilistic programming language, is my personal generic Bayesian software framework of choice, as it is the current gold standard~\cite{monnahanetal2017}, and I, the author, personally feel more comfortable and experienced with it in comparison to \textsf{JAGS}~\cite{plummer2003}, \textsf{Bugs}~\cite{gilksetal1994} and the like.
\vskip 2mm
In contrast to the alternatives for Bayesian analyses, \textsf{Stan} uses Hamiltonian Monte Carlo~\cite{duaneetal1987} and the No-U-Turn Sampler (NUTS)~\cite{hoffmangelman2014}, which require fewer samples for chain convergence, though at the cost of increased computation time per sample. In each of the further applications, a single model fit incorporates four chains with 2000 iterations each, where the first 1000 iterations of each chain are discarded as warmup, leaving in total 4000 post-warmup samples. These are defaults as, for example, also chosen by \textsf{brms}~\cite{buerkner2017}.
\vskip 2mm
A \textsf{Stan} implementation for the Dirichlet regression model including covariate effects for the variables transported by \texttt{X} follows below; \textsf{R}'s base function \texttt{model.matrix(...)} is a convenient tool for its preparation:
\begin{verbatim}
stan_code <- '
data {
  int<lower=1> N;          // total number of observations
  int<lower=2> ncolY;      // number of categories
  int<lower=2> ncolX;      // number of predictor levels
  matrix[N,ncolX] X;       // predictor design matrix
  matrix[N,ncolY] Y;       // response variable
  real sd_prior;           // prior standard deviation
}
parameters {
  matrix[ncolY-1,ncolX] beta_raw;  // coefficients (raw)
  real theta;
}
transformed parameters {
  real exptheta = exp(theta);
  matrix[ncolY,ncolX] beta;        // coefficients
  for (l in 1:ncolX) {
    beta[ncolY,l] = 0.0;
  }
  for (k in 1:(ncolY-1)) {
    for (l in 1:ncolX) {
      beta[k,l] = beta_raw[k,l];
    }
  }
}
model {
  // prior:
  theta ~ normal(0,sd_prior);
  for (k in 1:(ncolY-1)) {
    for (l in 1:ncolX) {
      beta_raw[k,l] ~ normal(0,sd_prior);
    }
  }
  // likelihood
  for (n in 1:N) {
    vector[ncolY] logits;
    for (m in 1:ncolY) {
      logits[m] = X[n,] * transpose(beta[m,]);
    }
    transpose(Y[n,]) ~ dirichlet(softmax(logits) * exptheta);
  }
}
'
\end{verbatim}
Supplement S1 gives an implementation of the alternative parametrization~\cite{maier2014} of Dirichlet regression including precision that varies with covariates.

\section{Results for the Blood Samples Application}

First, let's see how these data are analyzed using \textsf{DirichletReg}:
\vskip 2mm
\begin{verbatim}
Bld <- na.omit(Bld)
Bld$Smp <- DR_data(Bld[, 1:4])
\end{verbatim}
We get a warning that some values needed to be corrected such that all proportions for one observation unit sum up to 1:
\begin{verbatim}
Warning in DR_data(Bld[, 1:4]) :
  not all rows sum up to 1 => normalization forced
\end{verbatim}
The ML estimate is calculated via:
\begin{verbatim}
m <- DirichReg(Smp ~ Disease | 1, Bld, model = "alternative", base = 4)
\end{verbatim}
We get the following coefficient estimates:
\begin{verbatim}
(b <- unlist(coef(m)))
    beta.Alb..(Interc.)         beta.Alb..Dis.B
             0.63010700             -0.25191609
beta.Pre.Alb..(Interc.)     beta.Pre.Alb..Dis.B
             0.06274025             -0.30952737
  beta.Glob.A.(Interc.)       beta.Glob.A.Dis.B
            -0.48628655             -0.18189666
  gamma.gamma.(Interc.)
             4.22272495
\end{verbatim}
We can calculate expected values by:
\begin{verbatim}
(B <- matrix(nrow = 4, ncol = 2, c(b[1:6], rep(0, 2)), byrow = T))
            [,1]       [,2]
[1,]  0.63010700 -0.2519161
[2,]  0.06274025 -0.3095274
[3,] -0.48628655 -0.1818967
[4,]  0.00000000  0.0000000
(eB <- t(exp(apply(B, MAR = 1, FUN = cumsum))))
          [,1]      [,2]
[1,] 1.8778115 1.4596416
[2,] 1.0647502 0.7813070
[3,] 0.6149056 0.5126391
[4,] 1.0000000 1.0000000
(mu <- cbind(eB[, 1]/colSums(eB)[1], eB[, 2]/colSums(eB)[2]))
          [,1]      [,2]
[1,] 0.4120296 0.3888657
[2,] 0.2336276 0.2081494
[3,] 0.1349227 0.1365731
[4,] 0.2194201 0.2664118
\end{verbatim}
\vskip 2mm
The Bayesian estimation using \textsf{Stan} begins with compiling the previously introduced \textsf{Stan} code:
\begin{verbatim}
library("rstan")
prg <- stan_model(model_code = stan_code)
\end{verbatim}
We then need to translate the covariates into a design matrix \texttt{X}:
\begin{verbatim}
X <- as.matrix(model.matrix(lm(Albumin ~ Disease, data = Bld)))
X <- matrix(nrow = nrow(X), ncol = ncol(X), data = as.numeric(X))
\end{verbatim}
Define the response object \texttt{Y}:
\begin{verbatim}
Y <- Bld$Smp
\end{verbatim}
Provide everything as a list:
\begin{verbatim}
D <- list(N = nrow(Y), ncolY = ncol(Y), ncolX = ncol(X),
          X = X, Y = Y, sd_prior = 1)
\end{verbatim}
And finally estimate the parameters using function \texttt{sampling} from the \textsf{RStan} package:
\begin{verbatim}
fit1 <- sampling(prg, data = D, chains = 4, iter = 2000, cores = 4,
                 control = list(adapt_delta = 0.95, max_treedepth = 20),
                 refresh = 100)
\end{verbatim}
Using \texttt{extract}, we can access the posterior samples:
\begin{verbatim}
B <- extract(fit1)$beta
\end{verbatim}
A small helping-function will assist us with calculating the expected values; in addition, a colour vector is used in the plots below (its definition is missing in the original listing and is assumed here: black for data summaries, red for model estimates, matching Figure~\ref{fig:blood:samples:results}):
\begin{verbatim}
simplex <- function(x){
  exp(x)/sum(exp(x))
}
my_colors <- c("black", "red")
\end{verbatim}
We can plot everything using the following code:
\begin{verbatim}
plot(1:4, Bld[1, 1:4], ylim = c(0, 0.6), type = "n", xaxt = "n",
  las = 1, xlab = "", ylab
= "Proportion", main = "Disease A", xlim = c(0.6, 4.4))
abline(h = seq(0, 0.6, by = 0.1), col = "grey", lty = 3)
axis(1, at = 1:4, labels = names(Bld)[1:4], las = 2)
aux <- t(apply(B[, , 1], MAR = 1, FUN = simplex))
apply(subset(Bld, Disease == "A")[, 1:4], MAR = 1, FUN = points,
  pch = 16, col = "grey")
lines(apply(subset(Bld, Disease == "A")[, 1:4], MAR = 2, FUN = mean),
  type = "b", pch = 16, cex = 1.2, lwd = 2)
lines(apply(aux, MAR = 2, FUN = quantile, prob = 0.975),
  type = "b", pch = 4, lty = 2, col = my_colors[2])
lines(apply(aux, MAR = 2, FUN = quantile, prob = 0.025),
  type = "b", pch = 4, lty = 2, col = my_colors[2])
lines(apply(aux, MAR = 2, FUN = mean), lwd = 2, col = my_colors[2],
  type = "b", pch = 16)

plot(1:4, Bld[1, 1:4], ylim = c(0, 0.6), type = "n", xaxt = "n",
  las = 1, xlab = "", ylab = "Proportion", main = "Disease B",
  xlim = c(0.6, 4.4))
abline(h = seq(0, 0.6, by = 0.1), col = "grey", lty = 3)
axis(1, at = 1:4, labels = names(Bld)[1:4], las = 2)
aux <- t(apply(B[, , 1] + B[, , 2], MAR = 1, FUN = simplex))
apply(subset(Bld, Disease == "B")[, 1:4], MAR = 1, FUN = points,
  pch = 16, col = "grey")
lines(apply(subset(Bld, Disease == "B")[, 1:4], MAR = 2, FUN = mean),
  type = "b", pch = 16, cex = 1.2, lwd = 2)
lines(apply(aux, MAR = 2, FUN = quantile, prob = 0.975),
  type = "b", pch = 4, lty = 2, col = my_colors[2])
lines(apply(aux, MAR = 2, FUN = quantile, prob = 0.025),
  type = "b", pch = 4, lty = 2, col = my_colors[2])
lines(apply(aux, MAR = 2, FUN = mean), lwd = 2, col = my_colors[2],
  type = "b", pch = 16)
\end{verbatim}
Figure~\ref{fig:blood:samples:results} visualizes the outcome of these \textsf{R} plotting commands. Note that, due to the reduced degrees of freedom, results should be interpreted only as an ensemble: the outcome of one proportion influences all the other proportions' outcomes, and vice versa. Table~\ref{result:table} shows the results on the parameter level and compares ML to Bayes for several different prior choices.
As can be seen, different prior choices have only a practically irrelevant influence on the parameters' posterior distributions.
\begin{figure}
\caption{Results for the blood sample data: Grey points show the original data, black points show the respective sample mean, red points give the estimated expected values conditional on disease based on the Bayesian approach, and black crosses show 95\% credible (point-wise) intervals based on the Bayesian estimation approach.}
\label{fig:blood:samples:results}
\end{figure}
\vskip 2mm
\begin{table}
\centering
\begin{minipage}[l]{8cm}
Maximum Likelihood:$\vphantom{N(0,1^2)}$ \\
\begin{tabular}{rrrr}
\hline
 & 2.5\% & Est. & 97.5\% \\
\hline
Alb.:(Interc.) & 0.46 & 0.63 & 0.80 \\
Alb.:Dis.B & -0.47 & -0.25 & -0.03 \\
Pre.Alb.:(Interc.) & -0.12 & 0.06 & 0.25 \\
Pre.Alb.:Dis.B & -0.56 & -0.31 & -0.06 \\
Glob.A:(Interc.) & -0.70 & -0.49 & -0.27 \\
Glob.A:Dis.B & -0.47 & -0.18 & 0.11 \\
Glob.B:(Interc.) & 0.00 & 0.00 & 0.00 \\
Glob.B:Dis.B & 0.00 & 0.00 & 0.00 \\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}[l]{8cm}
Bayes with $N(0,1^2)$ prior: \\
\begin{tabular}{rrrr}
\hline
 & 2.5\% & Est. & 97.5\% \\
\hline
Alb.:(Interc.) & 0.45 & 0.62 & 0.80 \\
Alb.:Dis.B & -0.47 & -0.24 & -0.00 \\
Pre.Alb.:(Interc.) & -0.14 & 0.06 & 0.26 \\
Pre.Alb.:Dis.B & -0.57 & -0.30 & -0.03 \\
Glob.A:(Interc.) & -0.71 & -0.49 & -0.26 \\
Glob.A:Dis.B & -0.48 & -0.18 & 0.13 \\
Glob.B:(Interc.) & 0.00 & 0.00 & 0.00 \\
Glob.B:Dis.B & 0.00 & 0.00 & 0.00 \\
\hline
\end{tabular}
\end{minipage}
\vskip 2mm
\begin{minipage}[l]{8cm}
Bayes with $N(0,5^2)$ prior: \\
\begin{tabular}{rrrr}
\hline
 & 2.5\% & Est. & 97.5\% \\
\hline
Alb.:(Interc.) & 0.46 & 0.63 & 0.80 \\
Alb.:Dis.B & -0.48 & -0.25 & -0.02 \\
Pre.Alb.:(Interc.) & -0.14 & 0.06 & 0.25 \\
Pre.Alb.:Dis.B & -0.57 & -0.30 & -0.05 \\
Glob.A:(Interc.) & -0.72 & -0.49 & -0.27 \\
Glob.A:Dis.B & -0.48 & -0.18 & 0.12 \\
Glob.B:(Interc.)
& 0.00 & 0.00 & 0.00 \\
Glob.B:Dis.B & 0.00 & 0.00 & 0.00 \\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}[l]{8cm}
Bayes with $N(0,50^2)$ prior: \\
\begin{tabular}{rrrr}
\hline
 & 2.5\% & Est. & 97.5\% \\
\hline
Alb.:(Interc.) & 0.46 & 0.63 & 0.80 \\
Alb.:Dis.B & -0.48 & -0.25 & -0.02 \\
Pre.Alb.:(Interc.) & -0.13 & 0.06 & 0.26 \\
Pre.Alb.:Dis.B & -0.58 & -0.31 & -0.05 \\
Glob.A:(Interc.) & -0.71 & -0.49 & -0.27 \\
Glob.A:Dis.B & -0.48 & -0.18 & 0.11 \\
Glob.B:(Interc.) & 0.00 & 0.00 & 0.00 \\
Glob.B:Dis.B & 0.00 & 0.00 & 0.00 \\
\hline
\end{tabular}
\end{minipage}
\caption{Results for Maximum Likelihood and Bayesian estimation in the blood sample data on the parameter level: Comparison between ML and Bayes with several different prior choices.}
\label{result:table}
\end{table}

\section{Discussion}

We have introduced an implementation of a Bayesian estimation framework for Dirichlet regression that combines the advantage of identifiability -- by selection of a reference category -- with post-estimation flexibility -- by use of a Bayesian simulation algorithm. Ready-to-use \textsf{Stan} code was introduced, and an application example demonstrates the modeling capabilities of this solution.
\beginsupplement
\section{\textsf{Stan} implementation of the alternative parametrization including varying precision}
The following code uses the design matrix object \texttt{X} both for the expectation and the precision:
\begin{verbatim}
stan_code_var_theta <- '
data {
  int<lower=1> N;          // total number of observations
  int<lower=2> ncolY;      // number of categories
  int<lower=2> ncolX;      // number of predictor levels
  matrix[N,ncolX] X;       // predictor design matrix
  matrix[N,ncolY] Y;       // response variable
  real sd_prior_beta;      // prior standard deviation
  real sd_prior_theta;     // prior standard deviation
}
parameters {
  matrix[ncolY-1,ncolX] beta_raw;  // coefficients (raw)
  matrix[1,ncolX] theta;           // precision coefficients
}
transformed parameters {
  matrix[ncolY,ncolX] beta;        // coefficients
  for (l in 1:ncolX) {
    beta[ncolY,l] = 0.0;
  }
  for (k in 1:(ncolY-1)) {
    for (l in 1:ncolX) {
      beta[k,l] = beta_raw[k,l];
    }
  }
}
model {
  // priors
  for (l in 1:ncolX) {
    for (k in 1:(ncolY-1)) {
      beta_raw[k,l] ~ normal(0,sd_prior_beta);
    }
    theta[1,l] ~ normal(0,sd_prior_theta);
  }
  // likelihood
  for (n in 1:N) {
    vector[ncolY] logits;
    real exptheta;
    for (m in 1:ncolY) {
      logits[m] = X[n,] * transpose(beta[m,]);
    }
    exptheta = exp(X[n,] * transpose(theta[1,]));
    logits = softmax(logits);
    transpose(Y[n,]) ~ dirichlet(logits * exptheta);
  }
}
'
\end{verbatim}
\end{document}
\begin{document} \title{Multivariable quantum signal processing (M-QSP): prophecies of the two-headed oracle} \author{Zane M.\ Rossi}\affiliation{ Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA} \orcid{0000-0002-7718-654X} \author{Isaac L.\ Chuang} \affiliation{ Department of Physics, Department of Electrical Engineering and Computer Science, and Co-Design Center for Quantum Advantage, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA} \begin{abstract} \noindent Recent work shows that quantum signal processing (QSP) and its multi-qubit lifted version, quantum singular value transformation (QSVT), unify and improve the presentation of most quantum algorithms. QSP/QSVT characterize the ability, by alternating ansätze, to obliviously transform the singular values of subsystems of unitary matrices by polynomial functions; these algorithms are numerically stable and analytically well-understood. That said, QSP/QSVT require consistent access to a \emph{single} oracle, saying nothing about computing \emph{joint properties} of two or more oracles; these can be far cheaper to determine given an ability to pit oracles against one another coherently. This work introduces a corresponding theory of QSP over multiple variables: M-QSP. Surprisingly, despite the non-existence of the fundamental theorem of algebra for multivariable polynomials, there exist necessary and sufficient conditions under which a desired \emph{stable} multivariable polynomial transformation is possible. Moreover, the classical subroutines used by QSP protocols survive in the multivariable setting for non-obvious reasons, and remain numerically stable and efficient. Up to a well-defined conjecture, we give proof that the family of achievable multivariable transforms is as loosely constrained as could be expected. 
The unique ability of M-QSP to \emph{obliviously} approximate \emph{joint functions} of multiple variables coherently leads to novel speedups incommensurate with those of other quantum algorithms, and provides a bridge from quantum algorithms to algebraic geometry. \end{abstract} \maketitle \section{Introduction} Recent advances in the theory of quantum algorithms have led to a powerful method, quantum singular value transformation (QSVT), for applying polynomial transformations to the singular values of sub-blocks of unitary processes \cite{gslw_19}. These algorithms demonstrate the ability of simple circuit ansätze to expressively and efficiently control quantum subsystem dynamics. In addition to improving the performance of many known quantum algorithms \cite{rall_21, coherent_ham_sim_21}, QSVT has great explanatory utility: unifying the presentation of most major known quantum algorithms \cite{mrtc_21}. This includes Hamiltonian simulation \cite{lloyd_hamiltonian_qsvt_21, coherent_ham_sim_21}, search \cite{gslw_19}, phase estimation \cite{rall_21}, quantum walks \cite{gslw_19}, fidelity estimation \cite{gilyen_fidelity_22}, sophisticated techniques for measurement \cite{petz_recovery_20}, and channel discrimination \cite{rossi_qht_21}. QSVT has found purchase in surprisingly disparate subfields, from undergirding a general theory of quantum-inspired classical algorithms for low-rank machine learning \cite{chia_20}, to quantum cryptographic protocols with zero-knowledge properties \cite{lombardi_21}. At its core, QSVT is a sophisticated, lifted, multi-qubit extension to quantum signal processing (QSP) \cite{lyc_16_equiangular_gates, lc_17_simultation, lc_19_qubitization}, which itself completely characterizes the achievable polynomial transformations of a scalar value embedded in a single-qubit rotation. 
Again, although the transformations possible with QSP are with respect to a simple circuit ansatz, they are surprisingly general, and it is the simplicity of this ansatz in conjunction with its expressibility that leads to QSP's and QSVT's numerical stability \cite{haah_2019, chao_machine_prec_20, dong_efficient_phases_21} and usefulness. This single-qubit algorithm can be lifted to arbitrarily large systems of qubits through the identification of natural qubit-like subspaces within high-dimensional unitaries, leading to the famed exponential performance improvements of quantum algorithms for, e.g., matrix inversion \cite{hhl_09}, factoring \cite{shor_99}, and simulation \cite{feynman_18}. A natural extension to QSP considers the scenario when a computing party is given access to not just one oracle encoding a scalar signal, but two such oracles whose relation is unknown in general, as in Fig.~\ref{fig:m_qsp_sibyl_diagram}. One can view this setting as a game, or else a coherent interrogation between a quantum querent and a novel \emph{two-headed} unitary oracle. As in standard QSP, the querent hopes to decide on hidden properties of the oracle(s). The motivation for this extension stems from a general interest in inference/communication: if one is interested only in \emph{joint} properties of two signals, rather than \emph{absolute} properties, do there exist corresponding realizable efficiencies in methods to decide on these properties? Moreover, can these transformations be done entirely \emph{coherently}, as is one major benefit of QSP/QSVT? Any setting wherein one hopes to subject a quantum system to multiple interleaved unitary operations, and equivalently where one hopes to talk of multivariable polynomial transformations (of general interest in classical and quantum computer science both \cite{ding_m_pke_09, aaronson_q_money_12, anshu_21}), promotes investigation into multivariable analogues of QSP/QSVT. 
\begin{figure} \caption{A meeting between the intrepid querent and the two-headed quantum oracle. The querent can hold a quantum state and may submit it to the unitary action of \emph{either} head, $A$ or $B$, whose actions are consistent. The querent may submit again and again to either oracle (possibly a different one each time) with this same state, and may intersperse their own unitary gates $U_n$ along the way before finally measuring. The querent seeks, of course, to interpret the oracle's fragile mystery.} \label{fig:m_qsp_sibyl_diagram} \end{figure} These considerations, in conjunction with inspiration by the success of standard QSP, suggest the need for a theory of multivariable polynomial transforms embedded in unitary matrices. The theory of multivariable polynomials, even outside the scope of quantum information, is substantively more complex for fundamental reasons in algebraic geometry including the loss of the fundamental theorem of algebra, and thus provides a highly non-trivial elaboration on the theory of QSP-like algorithms. The fruit of this effort is worthwhile, though, and provides additional methods for showing separations between the performance of quantum algorithms for inference and their classical counterparts, as well as an opportunity to leverage the deeply-studied mathematical subfield of algebraic geometry in many dimensions for purposes of practical computing interest. This work makes progress toward a comprehensive theory of achievable multivariable polynomial transformations for QSP-like ansätze, constrained mostly to the two-variable setting. 
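The interrogation game of Fig.~\ref{fig:m_qsp_sibyl_diagram} can be sketched numerically. The Python snippet below is an illustrative toy, not the formal ansatz of Def.~\ref{def:m_qsp}: as an assumption of our own, each head is modelled as a single-qubit $x$-rotation by its own unknown angle, and the querent intersperses $z$-rotations of their choosing.

```python
import numpy as np

# Toy model of the two-headed oracle (an illustrative assumption, not the
# formal M-QSP ansatz): head A and head B act as x-rotations by unknown,
# generally unrelated angles theta_a and theta_b.
def rx(theta):
    return np.array([[np.cos(theta), 1j * np.sin(theta)],
                     [1j * np.sin(theta), np.cos(theta)]])

def rz(phi):
    return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

def interrogate(theta_a, theta_b, heads, phases):
    """Interleave queries to the chosen heads (0 = A, 1 = B) with z-phases."""
    U = rz(phases[0])
    for head, phi in zip(heads, phases[1:]):
        U = U @ (rx(theta_a) if head == 0 else rx(theta_b)) @ rz(phi)
    return U

# The querent's final measurement statistics depend jointly on both
# hidden angles, while the whole protocol remains coherent (unitary).
U = interrogate(0.3, 1.1, heads=[0, 1, 0], phases=[0.1, 0.2, 0.3, 0.4])
p_00 = abs(U[0, 0])**2  # probability that |0> survives the interrogation
assert np.allclose(U.conj().T @ U, np.eye(2))
```

The point of the sketch is only that interleaving the two heads produces measurement statistics that are joint functions of both hidden parameters, which is the setting M-QSP formalizes.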
In generic terms, as opposed to standard QSP which is the study of achievable functions from the circle to SU(2), M-QSP considers functions from the multitorus to SU(2) \begin{equation} \underset{\text{Standard QSP}}{\mathbb{T} \rightarrow \text{SU(2)}} \quad\quad\quad \underset{\text{M-QSP}}{\mathbb{T}^n \rightarrow \text{SU(2)}}, \end{equation} where $\mathbb{T}$ is the set of complex numbers of modulus one. We will primarily study the case $n = 2$. The business of QSP \cite{haah_2019, lyc_16_equiangular_gates, lc_17_simultation, lc_19_qubitization, gslw_19} and M-QSP comes in two fundamental directions: (1) describing how the parameters $\Phi$ defining a circuit ansatz are taken to polynomial transforms (the so-called $\Phi \mapsto P, Q$ direction), and (2) providing simple conditions under which a suitable \emph{partial specification} of polynomial transforms $\tilde{P}, \tilde{Q}$ (still useful for solving a desired algorithmic problem) can be `completed' and their corresponding $\Phi$ calculated (the so-called $\tilde{P}, \tilde{Q} \mapsto \Phi$ direction). These directions are made explicit in Sec.~\ref{sec:qsp_constructions}. Study of such functions relies on powerful extensions of results from single-variable functional analysis and approximation theory to multivariable settings. To keep this elaboration organized, we introduce theorems of standard QSP and M-QSP in parallel, taking care to state assumptions, methods, and underlying theorems that distinguish these two paths. The loss of foundational theorems in the multivariable setting means M-QSP theorems are more strongly qualified. This structure is presented diagrammatically in Figure~\ref{fig:conjecture_flow_diagram}, and discussed in Sec.~\ref{sec:paper_outline}. \begin{figure} \caption{A summary of the major theorems and related conjectures of this work. 
Within standard QSP (top half), the theorems are given without qualification; Thm.~\ref{thm:partial_qsp_properties} is shown to depend entirely on the single-variable Fejér-Riesz lemma. Crossing the horizontal dotted line to M-QSP (bottom half), we give parallel theorems, save Thm.~\ref{thm:major_m_qsp_properties} now depends on Conjecture~\ref{conj:m_qsp_factorability}, and Thm.~\ref{thm:partial_m_qsp_properties} now depends on the intricate multivariable Fejér-Riesz theorem (FRT), itself reliant on Conjecture~\ref{conj:m_qsp_factorability} to yield valid M-QSP phases for a desired multivariable polynomial transformation. The relation between the FRT and Conjecture~\ref{conj:m_qsp_factorability} (the FRT = QSP conjecture) is discussed in Remark~\ref{rem:conjecture_frt_relation}.} \label{fig:conjecture_flow_diagram} \end{figure} \subsection{Prior work} As this work proposes a substantively distinct algorithm drawing on methods/assumptions common in the study of QSP, we give a roadmap to prior major work. This work can be organized along three broad directions: (1) fundamental work on the analytic form of QSP/QSVT protocols, (2) detailed work on the stability of numerically implementing optimizations over, and the classical subroutines of, QSP/QSVT, and (3) concrete applications of QSP/QSVT as a `meta-algorithm' or `algorithmic framework' to previously unconsidered problems. (1) The theory of QSP has its origin in the study of composite pulse techniques in NMR \cite{lyc_16_equiangular_gates}, though its first instantiation by name appeared in service of improved methods for Hamiltonian simulation \cite{lc_17_simultation, lc_19_qubitization}. These papers fleshed out theorems on the structure and numerical stability of QSP.
The curious ability to locate invariant qubit-like subspaces in larger Hilbert spaces and perform QSP simultaneously within them, \emph{obliviously to the eigenbases or singular vector bases of these subsystems as well as the eigenvalues or singular values}, led to the far expanded QSVT \cite{gslw_19}, whose uses, robustness, and applications \cite{mrtc_21, gslw_19} have been recently explored. Finally, rephrasing QSVT in terms of Hamiltonian simulation \cite{lloyd_hamiltonian_qsvt_21} has both simplified the presentation and in some ways brought this algorithmic story full circle. Ongoing work continues to simplify the presentation of these algorithms. (2) Notably, initially proposed constructive methods for determining the defining parameters $\Phi$ of a QSP ansatz were known to be numerically unstable. Surprisingly, extensive results have since shown that there exist novel, stable, divide-and-conquer classical methods for determining these parameters \cite{chao_machine_prec_20, dong_efficient_phases_21, haah_2019}. In addition to standard approximative constraints for the embedded polynomial transformations, new work has also investigated more sophisticated constraints and their effect on QSP performance \cite{sarkar_density_22}. Moreover, beyond algorithms for polynomial approximation and QSP phase read-off, recent forays into symmetrized (restricted) QSP ansätze \cite{sym_qsp_21} have proven that the relevant loss landscapes are, under some reasonable restrictions, convex. (3) Finally, QSP/QSVT have recently been applied to a wide variety of subfields, both in and out of quantum information. 
These include Hamiltonian simulation \cite{coherent_ham_sim_21}, phase estimation \cite{rall_21}, quantum zero-knowledge proofs \cite{lombardi_21}, classical quantum-inspired machine learning algorithms \cite{chia_20}, semi-definite programming \cite{q_sdp_solvers_20}, quantum adiabatic methods \cite{lin_eig_filter_20}, the approximation of correlation functions \cite{rall_correlation_20}, the approximation of fidelity \cite{gilyen_fidelity_22}, recovery maps \cite{petz_recovery_20}, and fast inversion of linear systems \cite{tong_inversion_21}. Efforts continue to bring further computational problems into the fold of QSP/QSVT. This work, by merit of considering a distinct ansatz that precludes many of the proof methods of standard QSP, is somewhat incommensurate with the prior work given. However, there is good reason to believe that much of the wonderful studies into the numerical stability of standard QSP may be (with suitable modification) applicable in the multivariable setting. Moreover, we hope that a variety of new, previously unconsidered problems from the classical world may now fall under the purview of M-QSP, drawing on its use of theorems originally designed for understanding autoregressive filter design and image analysis \cite{mv_frt_04}. In a sense this work advocates going back to the \emph{bare metal} of QSP algorithms: reconsidering its basic ansatz, elaborating on tweaks to its setting, and demarcating a family of \emph{QSP-like} ansätze toward a better understanding of quantum algorithms. \subsection{Paper outline and informal theorem statements} \label{sec:paper_outline} This work introduces multivariable QSP (M-QSP) in the terms of QSP (and in parallel with it), and thus assumes some familiarity with the constitutive theorems of the latter. 
Concretely, Sec.~\ref{sec:qsp_review} introduces two fundamental theorems of QSP that are argued to be representative of the statements one would like to be able to make to pragmatically understand its use. Next, Sec.~\ref{sec:alg_geo_review} briefly covers concepts in algebraic geometry that will support the major theorems of M-QSP in Sec.~\ref{sec:m_qsp_review}. The theorems of Sec.~\ref{sec:m_qsp_review}, given casual statements below, are modelled quite closely after those of standard QSP, but require far more involved techniques to resolve, as shown in Appendix~\ref{appx:m_qsp_proofs}. Here, we give the informal statements of the major results of this work. Depiction of their constitutive assumptions and relations is given in Figure~\ref{fig:conjecture_flow_diagram}. \begin{theorem*} \emph{Unitary form of M-QSP.} Informal statement of Theorem~\ref{thm:major_m_qsp_properties}. All M-QSP protocols, suitably defined by an alternating ansatz (Def.~\ref{def:m_qsp}), have a simple unitary form with parity, norm, and determinant constraints, up to the resolution of Conjecture~\ref{conj:m_qsp_factorability} (here named the FRT = QSP conjecture). \end{theorem*} \begin{theorem*} \emph{Positive extensions for M-QSP.} Informal statement of Theorem~\ref{thm:partial_m_qsp_properties}. Given a desired multivariable polynomial transform that satisfies parity and norm constraints simpler than those in Theorem~\ref{thm:major_m_qsp_properties}, the ability to find an M-QSP protocol that embeds this transform depends solely on whether a well-defined matrix of Fourier components of a related transform has low rank, under the assumption the related transform is \emph{stable} (Def.~\ref{def:stable_polynomials}). \end{theorem*} \begin{corollary*} \emph{Uniqueness of stable positive extensions for M-QSP.} Informal statement of Corollary~\ref{cor:m_qsp_phase_readoff}. 
Given a unitary matrix of a valid M-QSP protocol, the real numbers parameterizing the M-QSP protocol can be determined by an efficient classical algorithm. \end{corollary*} In addition to presenting the basic theorems of M-QSP up to certain assumptions, we conclude this work by providing a few worked examples in Sec.~\ref{sec:worked_examples}, explicitly providing decision problems (here in a noiseless setting) for which M-QSP provides an intuitive and quantitatively easy-to-show improvement over other algorithms. We present problems for which the approach of M-QSP is natural, and for which no other obvious quantum algorithmic methods are known. In essence, we show the ability of M-QSP protocols to decide on \emph{joint properties} of pairs of oracles, where these properties cannot be determined as efficiently (or even deterministically) by serial or parallel standard QSP protocols using each oracle individually; in other words, there exist scenarios where it is far better to compute functions coherently `under the hood' of a unitary evolution, followed by a precise measurement, as opposed to classically following an estimative process. We show that there exist non-trivial examples of algorithmic advantage in query complexity for M-QSP protocols which permit no efficient reductions to a single-variable setting (Problem~\ref{prob:multi_channel_discrimination}), and provide accompanying geometric intuitions for why this is the case. Finally, and more abstractly, taking inspiration from standard QSP in its natural connection to the famed Chebyshev polynomials, we use M-QSP to define one infinite family of multivariable Chebyshev-like polynomials and discuss their significance. In turn, we point toward the usefulness of M-QSP in studying the theory of orthogonal polynomials, which has great relevance to the theory of positive extensions and signal processing (both classical and quantum).
For the interested reader, discussion on the outlook for M-QSP, its caveats, its open problems, and its position in the pantheon of quantum algorithms, is presented in Sec.~\ref{sec:discussion}. \section{Construction and analysis of M-QSP} \label{sec:qsp_constructions} This section has two goals: (1) a quick re-introduction of the major theorems (and moral takeaways) of QSP drawn from \cite{gslw_19} (with some alternative proofs), followed by (2) a series of appropriate definitions, lemmas, and finally analogous theorems for M-QSP. Proofs, applications, caveats, and worked examples, are left to Secs.~\ref{sec:discussion}, \ref{sec:worked_examples}, and Appendix~\ref{appx:m_qsp_proofs}. \subsection{Review of standard (single-variable) QSP} \label{sec:qsp_review} In standard QSP a computing party is given oracle access to a unitary operation $A(x) = \exp{(i\arccos{x}\,\sigma_x)}$ for some unknown $x \in [-1,1]$; this oracle performs a consistent, unknown rotation about a known axis on the Bloch sphere. By repeatedly applying this oracle, interspersed with known rotations about a separate orthogonal (in this case $\sigma_z$) axis, the unknown signal can be cleverly correlated with its previous applications to generate complex polynomial functions of the parameter $x$. This transformation can be done \emph{oblivious} to $x$. The circuit for QSP, as well as its defining parameters, are depicted in Figure~\ref{fig:m_qsp_specification}. Characterizing the possible polynomial transforms is the business of QSP, and leads to surprising applications and performance improvements for many known quantum algorithms as discussed previously. 
We quote two important theorems; the first (Theorem~\ref{thm:major_qsp_properties}) characterizes the explicit unitary form of the QSP ansatz, while the second (Theorem~\ref{thm:partial_qsp_properties}) introduces a simple necessary and sufficient condition (and an implicit constructive method) under which a desired polynomial transform can be embedded. We refer readers to the original proofs of these theorems, and provide an alternative proof for Theorem~\ref{thm:partial_qsp_properties} in Appendix~\ref{appx:m_qsp_proofs}. \begin{theorem} \label{thm:major_qsp_properties} Unitary form of quantum signal processing (QSP), i.e., $\Phi \mapsto P, Q$. Theorem 3 in \cite{gslw_19}. Let $n \in \mathbb{N}$. There exists $\Phi = \{\phi_0, \phi_1, \cdots, \phi_n\} \in \mathbb{R}^{n + 1}$ such that for all $x \in [-1, 1]$: \begin{equation} \label{eq:qsp_def} U_{\Phi}(x) = e^{i\phi_0\sigma_z}\prod_{k = 1}^{n} \left(A(x)\, e^{i\phi_k \sigma_z}\right) = \begin{pmatrix} P(x) & i\sqrt{1 - x^2}Q(x)\\ i\sqrt{1 - x^2}Q^*(x) & P^*(x) \end{pmatrix}, \end{equation} where $A(x) = \exp{(i\arccos{x}\,\sigma_x)}$ if and only if $P, Q \in \mathbb{C}[x]$ such that \begin{enumerate}[label=(\arabic*)] \item $\text{deg}(P) \leq n$ and $\text{deg}(Q) \leq n - 1$. \item $P$ has parity-$n \pmod 2$ and $Q$ has parity-$(n-1)\pmod 2$. \item For all $x \in [-1, 1]$ the relation $|P(x)|^2 + (1 - x^2)|Q(x)|^2 = 1$ holds. \end{enumerate} Note here and elsewhere $\sigma_x, \sigma_z$ are the usual single-qubit Pauli matrices with non-zero entries $\pm 1$. \end{theorem} \begin{theorem} \label{thm:partial_qsp_properties} Reconstruction of QSP protocols from partial embeddings, i.e., $\tilde{P}, \tilde{Q} \mapsto \Phi$. Theorem 5 in \cite{gslw_19}. Let $n \in \mathbb{N}$ fixed. Let $\tilde{P}, \tilde{Q} \in \mathbb{R}[x]$. 
There exists some $P, Q \in \mathbb{C}[x]$ satisfying conditions (1-3) of Theorem~\ref{thm:major_qsp_properties} such that $\tilde{P} = \Re(P)$ and $\tilde{Q} = \Re(Q)$ if and only if $\tilde{P}, \tilde{Q}$ satisfy conditions (1-2) of Theorem~\ref{thm:major_qsp_properties} and for all $x \in [-1, 1]$ \begin{equation} |\tilde{P}(x)|^2 + (1 - x^2)|\tilde{Q}(x)|^2 \leq 1. \end{equation} The same holds if we replace real parts by imaginary parts, and additionally $\tilde{P} \equiv 0$ or $\tilde{Q} \equiv 0$ can be chosen for simplicity. An alternative proof to that of \cite{gslw_19} is given in Appendix \ref{appx:m_qsp_proofs}, relying on the single-variable Fejér-Riesz lemma, working in the Laurent picture. A similar method is used in the multivariable case. \end{theorem} The two theorems above were selected out of \emph{many} given in the major references \cite{gslw_19, lyc_16_equiangular_gates, lc_17_simultation, lc_19_qubitization}. These together give a minimal toolkit for using and thinking about QSP. Theorem~\ref{thm:major_qsp_properties} clearly states what QSP unitaries must look like (i.e., going from real phases to embedded polynomials), while Theorem~\ref{thm:partial_qsp_properties} addresses the reverse problem (going from desired embedded polynomials to real phases). Moreover, Theorem~\ref{thm:partial_qsp_properties} is a problem in matrix completions: given a \emph{partially specified} QSP unitary whose elements are simply constrained, can one find missing components that satisfy a more complex constraint? Consequently, for reasons to be discussed later, this completed matrix then immediately yields a set of QSP phases $\Phi$. The utility of matrix completions is clear; note that in Theorem~\ref{thm:partial_qsp_properties} one can choose $\tilde{Q} = 0$ identically, and observe that the $\lvert + \rangle \mapsto \lvert +\rangle$ transition probability is simply $|\tilde{P}(x)|^2$. The polynomial transformation is directly accessible for sampling.
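The unitary form of Theorem~\ref{thm:major_qsp_properties} and the sampling remark above can be checked numerically. The Python sketch below (with names of our own choosing) builds $U_{\Phi}(x)$ directly from Eq.~(\ref{eq:qsp_def}); with all phases set to zero the embedded polynomial $P(x)$ reduces to the degree-$n$ Chebyshev polynomial $T_n(x)$.

```python
import numpy as np

def qsp_unitary(phases, x):
    """U_Phi(x) = e^{i phi_0 Z} prod_{k=1}^n A(x) e^{i phi_k Z} for x in [-1,1]."""
    s = np.sqrt(1.0 - x * x)
    A = np.array([[x, 1j * s], [1j * s, x]])      # A(x) = exp(i arccos(x) sigma_x)
    rz = lambda phi: np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
    U = rz(phases[0])
    for phi in phases[1:]:
        U = U @ A @ rz(phi)
    return U

n, x = 3, 0.4
U = qsp_unitary(np.zeros(n + 1), x)               # trivial phases: U = A(x)^n
assert np.isclose(U[0, 0], np.cos(n * np.arccos(x)))   # P(x) = T_n(x)
assert np.allclose(U.conj().T @ U, np.eye(2))     # |P|^2 + (1 - x^2)|Q|^2 = 1

# The |+> -> |+> transition probability, directly accessible by sampling:
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
p_plus = abs(plus.conj() @ U @ plus) ** 2
```

This is only a verification aid for the single-variable theorem; the conditions (1)-(3) on degree, parity, and norm can each be spot-checked the same way on a grid of $x$ values.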
There are of course many ways to partially specify a QSP protocol (discussed in prior work), but we claim that Theorem~\ref{thm:partial_qsp_properties} essentially captures their foundation. Consequently, the alternative proof in the appendix provides a concrete connection from the theory of matrix completions to methods vital to understanding the multivariable setting. As a final note, we want to emphasize again that QSP performs its transformations \emph{obliviously}, that is, independent of $x$. By modifying the functional forms approximated by the embedded polynomials, important properties of $x$ can often be computed more cheaply than if one were first to estimate $x$ and apply a classical computation to this classical result; moreover, because the circuit is coherent, the result of this transformation can be used for further quantum computation. In the two variable case, this oblivious transformation becomes even more useful; i.e., one may only want to compute a joint property of two variables, deciding on properties of their correlation, rather than each variable itself. Maintaining coherence and obliviousness of these functional transforms, and thus avoiding classical post-processing at all costs, is essential to the great algorithmic savings possible with QSP/QSVT. \subsection{QSP and algebraic geometry} \label{sec:alg_geo_review} We give a casual map for the intrepid but non-specialist reader to results in algebraic geometry that permit the proofs of Theorems~\ref{thm:major_m_qsp_properties} and \ref{thm:partial_m_qsp_properties} in Appendix~\ref{appx:m_qsp_proofs}. It is the hope that merely introducing some common terms and ideas may lead others to the application of similar techniques to low-hanging fruit in the theory of QSP-like algorithms. The utility of QSP rests in the ability of the computing party to choose a simply-constrained polynomial transform and efficiently determine QSP phases achieving this transform. 
It turns out that this ability to simply specify a desired polynomial (see proof of Theorem~\ref{thm:partial_qsp_properties}) relies on a special fact about positive trigonometric polynomials. More specifically, these results reside in a family of what are known as \emph{positivstellensätze} (positive-locus-theorems), or for non-negative polynomials \emph{nichtnegativstellensätze} (nonnegative-locus-theorems). In turn these belong to an even larger family of \emph{nullstellensätze} (zero-locus-theorems). These families of theorems seek to establish a relationship between geometry and algebra going back to Diophantus (and more recently Hilbert, Weil, Riemann, Grothendieck and Gröbner, to name but a few). Here are a few examples of the types of statements common to these subfields \cite{marshall_sos_08}. \begin{enumerate}[label=(\arabic*)] \item All positive single-variable polynomials over the reals can be written as the sum of at most two squares. \item All multivariate polynomials that take only nonnegative values over the reals can be written as sums of squares of real rational functions. (Hilbert's 17\textsuperscript{th} problem.) \item All nonnegative trigonometric polynomials can be written as exactly one square. \end{enumerate} These theorems seek to establish succinct descriptions of algebraic functions (often polynomials, for our purposes) which satisfy constraints on algebraic and semialgebraic sets (often products of intervals, for our purposes) \cite{dumitrescu_monograph_07}. In general these statements get weaker as the number of variables increases (see Hilbert's 17\textsuperscript{th} problem above) \cite{dritschel_outer_05, geronimo_ortho_poly_07, mv_frt_04}, and stronger as the family of considered polynomials is restricted (see the Fejér-Riesz lemma, Lemma~\ref{lemma:frt}). Moreover, such results can also often be extended to polynomials over operators as well \cite{op_val_frt_09}.
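As a small worked instance of statement (3) above (a miniature of the Fejér-Riesz lemma, with the polynomial chosen by us purely for illustration), the nonnegative trigonometric polynomial $t(\theta) = 1 + \cos\theta$ is exactly one square:
\begin{equation*}
1 + \cos\theta \;=\; \frac{(1 + e^{i\theta})(1 + e^{-i\theta})}{2} \;=\; \left| \frac{1 + e^{i\theta}}{\sqrt{2}} \right|^2,
\end{equation*}
so that $t(\theta) = |q(e^{i\theta})|^2$ for the single polynomial $q(z) = (1 + z)/\sqrt{2}$.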
\begin{remark} Here we provide a rapid series of quick, casual definitions of common terms in this work. Algebraic sets are subsets of (for us) $\mathbb{R}^n$ or $\mathbb{C}^n$ defined by zeros of finite sets of polynomial expressions. Sometimes it is required or preferred that one discusses only irreducible algebraic sets, referred to as algebraic varieties (and where algebraic sets can be thought of as finite unions of algebraic varieties). Semi-algebraic sets are defined the same way, save as the locus of both roots of finitely many polynomial equations and the solutions to finitely many polynomial inequalities. They are also preserved under finite union and intersection. In principle these sets and varieties can be defined over general algebraically closed fields, i.e., fields $F$ where every non-constant polynomial in $F[x]$ (univariate polynomials with coefficients in $F$) has a root in $F$. While the definitions presented here can be made extremely general (e.g., the irreducible algebraic sets above can be defined purely in terms of prime ideals of polynomial rings and closed subsets in the Zariski topology), we try to keep close to $\mathbb{R}^n$ and $\mathbb{C}^n$. A foundational observation of algebraic geometry is the formalization of the often overlooked fact that a univariate polynomial (an algebraic object) is uniquely defined by its root-set in the corresponding field (a geometric object). For more general fields or in multivariable settings, similar statements connect ideals of polynomial rings\footnote{We don't define these here because they won't be needed for the proofs given, but they are vital, simple building blocks of the statements of algebraic geometry and abstract algebra generally. For some classic textbooks see \cite{hartshorne_textbook} and the more introductory \cite{fulton_textbook}.} and algebraic sets (e.g., Hilbert's Nullstellensatz). 
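For concreteness (with the examples chosen by us, not drawn from the references), the unit circle
\begin{equation*}
V = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 - 1 = 0\}
\end{equation*}
is an algebraic set (indeed an algebraic variety, being irreducible), while the closed unit disk
\begin{equation*}
S = \{(x, y) \in \mathbb{R}^2 : 1 - x^2 - y^2 \geq 0\}
\end{equation*}
is a semialgebraic set that is not algebraic.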
By understanding algebraic objects through geometric means (and vice versa) a variety of otherwise difficult problems can become amenable to simpler, already established methods of analysis. \end{remark} This work will study a subset of Laurent polynomials, trigonometric polynomials, for which relevant positivstellensätze are known to exist for two variables, albeit under complicated restrictions \cite{mv_frt_04, dumitrescu_monograph_07}. The primary work of Theorem~\ref{thm:partial_m_qsp_properties} will thus be to guarantee that the setting of M-QSP satisfies the requirements for the application of these positivstellensätze, and to probe the induced properties of this application. We aim also to interpret the meaning of these rather arcane methods in the context of quantum algorithms. This work mostly relies on the generalization of a standard result for non-negative single-variable trigonometric polynomials known and attributed variously \cite{polya_szego_analysis} as the Fejér-Riesz lemma (or theorem), given in Lemma~\ref{lemma:frt}. This lemma states that such polynomials can be reëxpressed as squares uniquely up to simple constraints. We claim, and show in the alternative proof of Theorem~\ref{thm:partial_qsp_properties}, that this lemma is effectively the only non-trivial mathematical fact made use of in QSP. The purpose of this lemma in both QSP and M-QSP is to enable a partial specification of a unitary matrix to be completed under constraints. This completed (i.e., filled-out) unitary is easily implementable as a series of simple quantum gates. That is, if these partial specifications can be satisfied, then the other major theorem of QSP (Theorem~\ref{thm:major_qsp_properties}) guarantees that the corresponding QSP phases can be determined efficiently (where further work has shown numerical stability as well). In the multivariable setting, a generalized Fejér-Riesz theorem (now truly a theorem, with proof in excess of thirty pages \cite{mv_frt_04}) exists!
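The completion step can be made concrete in the single-variable case: given a target real trigonometric polynomial $\tilde{P}$ with $|\tilde{P}| < 1$ on the circle, the deficit $1 - \tilde{P}^2$ is a nonnegative trigonometric polynomial, and its Fejér-Riesz factor $Q$ fills out the row normalization $|\tilde{P}|^2 + |Q|^2 = 1$ that a unitary embedding demands. A hedged numerical sketch (the target $\tilde{P}(\theta) = \cos(\theta)/2$ and the helper name are our own illustrative choices):

```python
import numpy as np

def spectral_factor(c):
    """Given Fourier coefficients c_{-n..n} of a strictly positive trigonometric
    polynomial f, return polynomial coefficients q with |q(e^{i t})|^2 = f(t),
    by collecting the roots of z^n f(z) inside the unit disk (Fejer-Riesz)."""
    roots = np.roots(c[::-1])
    q = np.poly(roots[np.abs(roots) < 1])
    scale = np.real(np.sum(c)) / np.abs(np.polyval(q, 1.0)) ** 2  # match f at t = 0
    return np.sqrt(scale) * q

# Target P~(t) = cos(t)/2; deficit f = 1 - P~^2 = 7/8 - cos(2t)/8, with
# Fourier coefficients (z^{-2}, ..., z^2) listed below.
f_coeffs = np.array([-1 / 16, 0.0, 7 / 8, 0.0, -1 / 16])
q = spectral_factor(f_coeffs)

t = np.linspace(0.0, 2 * np.pi, 17)
P_target = 0.5 * np.cos(t)
Q_completion = np.polyval(q, np.exp(1j * t))
# Completion property: |P~|^2 + |Q|^2 = 1 on the whole circle.
```

This is exactly the shape of the argument in QSP: the partially specified row $(\tilde{P}, \cdot)$ is completed to a normalized row by one spectral factorization.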
We make use of this theorem to undergird a new theory of M-QSP, and invite other researchers to make use of the rich related literature to attack similar problems in the theory of alternating quantum circuit ansätze. \begin{remark} Note that the Fejér-Riesz theorem is disjoint from the classical algorithms used to find the QSP phases of a QSP protocol given its corresponding unitary, as well as classical optimization algorithms used to find good polynomial approximations to continuous functions (e.g., the Remez-type exchange and Parks-McClellan algorithms \cite{grenez, remez, parks_mcclellan_73}). Both of these classical algorithmic families require far fewer modifications when moving to the multivariable case than the Fejér-Riesz theorem. \end{remark} \subsection{Extension to M-QSP} \label{sec:m_qsp_review} We introduce a series of definitions toward analogues of Theorems~\ref{thm:major_qsp_properties} and \ref{thm:partial_qsp_properties} under the assumption of oracle access to two signal operators whose relation is unknown in general. This involves two steps: (1) a simple transformation of variables to clarify the statement of the major theorems, and (2) an application of a series of results from the theory of multivariable nullstellensätze. We start with the definition of multivariable quantum signal processing (M-QSP) protocols, solidifying the intuition given by Fig.~\ref{fig:m_qsp_sibyl_diagram}. \begin{definition} \label{def:m_qsp} Multivariable quantum signal processing (M-QSP). 
Given oracle access to two unitary operators $A(x_A) = \exp{(i\arccos{x_A}\,\sigma_x)}$ and $B(x_B) = \exp{(i\arccos{x_B}\,\sigma_x)}$, an M-QSP protocol of length $n$ is defined by the length-$n$ binary string $s \in \{0, 1\}^n$ and a set of real phases $\Phi = \{\phi_0, \phi_1, \cdots, \phi_n\} \in \mathbb{R}^{n + 1}$ according to the following map to a quantum circuit: \begin{equation} \label{eq:m_qsp_form} U_{(s, \Phi)}(x_A, x_B) = e^{i\phi_0 \sigma_z}\prod_{k = 1}^{n} A^{s_k}(x_A)B^{1 - s_k}(x_B)e^{i\phi_k \sigma_z}. \end{equation} This is again a product of rotations about orthogonal axes on the Bloch sphere. This map is depicted in Figure~\ref{fig:m_qsp_specification}. \end{definition} \begin{figure} \caption{Circuit definitions for standard QSP and M-QSP, indicating the explicit map from the set of real phases $\Phi$ (the QSP phases) and the bit-string $s$ to single-qubit quantum circuits comprising $X$ and $Z$ rotations. This realizes the generic protocol set forth in Fig.~\ref{fig:m_qsp_sibyl_diagram}. M-QSP circuits come from an exponentially large family in which the querent is allowed to apply either the unknown $A(a) = e^{i\theta_a \sigma_x}$ or $B(b) = e^{i\theta_b \sigma_x}$ between $Z$ rotations parameterized by $\Phi$. Here $a = e^{i\theta_a}$ and $b = e^{i\theta_b}$. For standard QSP $s = 11\cdots1$ or $s = 00\cdots0$ depending on the variable of choice. Here $A(a)$ versus $A(x_A)$ in Def.~\ref{def:m_qsp} refer to the Laurent picture (Def.~\ref{def:laurent_picture}) and $x$ picture respectively; likewise for $B$.} \label{fig:m_qsp_specification} \end{figure} While Def.~\ref{def:m_qsp} shares obvious similarities with the circuit given in Theorem~\ref{thm:major_qsp_properties} (its major novel component being the bit-string $s$ defining the order of $A, B$ iterates), its form is awkward. For the rest of the work we will use a mathematically equivalent picture (Def.~\ref{def:laurent_picture}) to more smoothly apply methods from algebraic geometry.
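Definition~\ref{def:m_qsp} is short to transcribe directly. The sketch below (function names are our own) builds $U_{(s, \Phi)}$ by the map of Eq.~\ref{eq:m_qsp_form}; as a sanity check, taking $s = 11\cdots1$ with all phases zero reduces the protocol to standard QSP, whose top-left entry is the Chebyshev polynomial $T_n(x_A) = \cos(n \arccos{x_A})$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def signal(x):
    """Signal operator exp(i arccos(x) sigma_x)."""
    t = np.arccos(x)
    return np.cos(t) * I2 + 1j * np.sin(t) * SX

def zphase(phi):
    """Processing operator exp(i phi sigma_z)."""
    return np.cos(phi) * I2 + 1j * np.sin(phi) * SZ

def m_qsp(s, phis, x_a, x_b):
    """U_{(s, Phi)}(x_A, x_B): an initial Z phase, then n iterates, the k-th
    being A(x_A) if s_k = 1 or B(x_B) if s_k = 0, each followed by a Z phase."""
    U = zphase(phis[0])
    for s_k, phi_k in zip(s, phis[1:]):
        U = U @ signal(x_a if s_k else x_b) @ zphase(phi_k)
    return U

# A generic protocol is unitary; the all-ones string with zero phases
# collapses to a single-variable Chebyshev response in x_A.
U = m_qsp([1, 0, 1], [0.2, -0.7, 0.4, 0.1], 0.3, -0.6)
V = m_qsp([1, 1, 1], [0.0, 0.0, 0.0, 0.0], 0.3, -0.6)
```

The exponential size of the protocol family is visible here: the bit-string argument ranges over $\{0,1\}^n$ while the phase list ranges over $\mathbb{R}^{n+1}$.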
Similar maps have been considered in \cite{haah_2019, mrtc_21} to improve numerical stability of algorithms for finding standard QSP phases. \begin{definition} \label{def:laurent_picture} The Laurent picture. It will be helpful to consider M-QSP under the map taking $(x_A, x_B) \in [-1,1]^2$ to $(a, b) \in \mathbb{T}^{2} = \{(a, b) \in \mathbb{C}^2 \;\text{s.t.}\; |a| = |b| = 1\}$ following from replacing \begin{equation} x_A \mapsto \frac{1}{2}(a + a^{-1}), \quad x_B \mapsto \frac{1}{2}(b + b^{-1}). \end{equation} Polynomial transforms in the $x$ picture are equivalent to pseudo-polynomial transforms in the Laurent picture. Often, however, we will drop the pseudo- prefix and simply refer to both as polynomial transforms with the requisite caveats. \end{definition} We will often be working with Laurent polynomials that are real on $\mathbb{T}^2$, so-called \emph{Hermitian trigonometric polynomials} which have the generic form (in two variables) \begin{equation} \label{eq:hermitian_trig_poly} g(a, b) = \sum_{j = -m}^{m}\sum_{k = -n}^{n} g_{j,k} a^j b^k, \quad g_{j,k} = g_{-j, -k}^{*}, \end{equation} for nonnegative integers $m, n$, where the $g_{j,k}$ are complex. We define the degree of a polynomial like that in Eq.~\ref{eq:hermitian_trig_poly} by the ordered tuple $(m, n)$, and say that a polynomial of degree $(p, q)$ satisfies $(p, q) \preccurlyeq (m, n)$ if both $p \leq m$ and $q \leq n$. We say such a polynomial as in Eq.~\ref{eq:hermitian_trig_poly} has inversion parity $(d_A, d_B) \pmod{2}$ if under the map $a \mapsto a^{-1}$ the polynomial transforms as $g \mapsto (-1)^{d_A}g$, and analogously for $b \mapsto b^{-1}$. We say, toward the following lemma, that a multivariable polynomial in the $x$ picture has negation parity $(d_A, d_B) \pmod{2}$ if under the map $x_A \mapsto -x_A$ the polynomial transforms as $g(x_A, x_B) \mapsto (-1)^{d_A}g(x_A, x_B)$, and analogously for $x_B \mapsto -x_B$. Note that one can also consider negation parity in the Laurent picture.
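The coefficient symmetry in Eq.~\ref{eq:hermitian_trig_poly} is precisely what forces real values on $\mathbb{T}^2$, since conjugation on the torus sends $(a, b) \mapsto (a^{-1}, b^{-1})$. A quick numerical sketch with randomly drawn, symmetrized coefficients (our own construction):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 2, 3

# Draw arbitrary complex coefficients g_{j,k} (rows j = -m..m, cols k = -n..n),
# then impose the symmetry g_{j,k} = conj(g_{-j,-k}) by symmetrizing.
g = rng.normal(size=(2 * m + 1, 2 * n + 1)) + 1j * rng.normal(size=(2 * m + 1, 2 * n + 1))
g = (g + np.conj(g[::-1, ::-1])) / 2

def eval_g(theta_a, theta_b):
    a, b = np.exp(1j * theta_a), np.exp(1j * theta_b)
    js, ks = np.arange(-m, m + 1), np.arange(-n, n + 1)
    return np.sum(g * np.outer(a ** js, b ** ks))

vals = [eval_g(ta, tb) for ta, tb in rng.uniform(0, 2 * np.pi, size=(5, 2))]
# Hermitian trigonometric polynomials take real values on the torus.
```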
Before moving entirely to the Laurent picture we state a more familiar $x$ picture form of the future Theorem~\ref{thm:major_m_qsp_properties} in part to demonstrate the necessity of moving to the Laurent picture. \begin{lemma} \label{lemma:m_qsp_form_x} Unitary form of multivariable quantum signal processing (M-QSP) in the $x$ picture. Let $n \in \mathbb{N}$. There exists $\Phi = \{\phi_0, \phi_1, \cdots, \phi_n\} \in \mathbb{R}^{n + 1}$ such that for all $(x_A, x_B) \in [-1,1]^2$ the length-$n$ circuit presented in Definition~\ref{def:m_qsp} with $m = |s|$ (the Hamming weight of $s$) has the form: \begin{align} U_{(s, \Phi)}(x_A, x_B) &=\nonumber\\ &\begin{pmatrix} P + Q\sqrt{1 - x_A^2}\sqrt{1 - x_B^2} & R\sqrt{1 - x_A^2} + S\sqrt{1 - x_B^2}\\ -R^*\sqrt{1 - x_A^2} -S^*\sqrt{1 - x_B^2} & P^* + Q^*\sqrt{1 - x_A^2}\sqrt{1 - x_B^2} \end{pmatrix}, \end{align} if and only if $P, Q, R, S \in \mathbb{C}[x_A, x_B]$ and \begin{enumerate}[label=(\arabic*)] \item $\text{deg}(P) \preccurlyeq (m, n - m)$ and $\text{deg}(Q) \preccurlyeq (m - 1, n - m - 1)$ and $\text{deg}(R) \preccurlyeq (m - 1, n - m)$ and $\text{deg}(S) \preccurlyeq (m, n - m - 1)$. \item $P$ has negation parity-$(m, n - m) \pmod{2}$ and $Q$ has negation parity-$(m - 1, n - m - 1) \pmod{2}$ and $R$ has negation parity-$(m - 1, n - m) \pmod{2}$ and $S$ has negation parity-$(m, n - m - 1) \pmod{2}$. \item For all $(x_A, x_B) \in [-1, 1]^2$ the relation \begin{align} |P|^2 + (1 - x_A^2)(1 - x_B^2)|Q|^2 &+ (1 - x_A^2)|R|^2 + (1 - x_B^2)|S|^2 \nonumber\\ &+ \sqrt{1 - x_A^2}\sqrt{1 - x_B^2}(PQ^* + P^*Q + RS^* + R^*S) = 1,\label{eq:det_relation_x} \end{align} holds, where $P, Q, R, S$ are in $\mathbb{C}[x_A, x_B]$. \item The statement of Conjecture~\ref{conj:m_qsp_factorability} holds as given. \end{enumerate} Proof follows from mapping Theorem~\ref{thm:major_m_qsp_properties} to the $x$ picture. We give this theorem in this form (the $x$ picture) mostly to match the form of the standard QSP theorems.
Unfortunately proving things in this picture is not clean, mostly because the domain is a square in $x$ space due to branch cuts in the square root function, rather than a natural toroidal domain as in the Laurent picture. \end{lemma} While Lemma~\ref{lemma:m_qsp_form_x} sits in neat analogy with Theorem~\ref{thm:major_qsp_properties}, it is largely useless for the techniques of positive extensions beyond some opaque versions of Schmüdgen's positivstellensatz \cite{marshall_sos_08}. Consequently from here on we work in the simplified Laurent picture (in other words, choosing to work on the natural torus carved out by the two rotation oracles). \begin{theorem} \label{thm:major_m_qsp_properties} Unitary form of multivariable quantum signal processing (M-QSP). Let $n \in \mathbb{N}$. There exists $\Phi = \{\phi_0, \phi_1, \cdots, \phi_n\} \in \mathbb{R}^{n + 1}$ such that for all $(a, b) \in \mathbb{T}^2$: \begin{equation} \label{eq:m_qsp_def} U_{(s, \Phi)}(a, b) = e^{i\phi_0 \sigma_z}\prod_{k = 1}^{n} A^{s_k}(a)B^{1 - s_k}(b)e^{i\phi_k \sigma_z} = \begin{pmatrix} P & Q\\ -Q^* & P^* \end{pmatrix}, \end{equation} where $A = \mathbb{I}(a + a^{-1})/2 + \sigma_x(a - a^{-1})/2$ and $B = \mathbb{I}(b + b^{-1})/2 + \sigma_x(b - b^{-1})/2$ if and only if $P, Q \in \mathbb{C}[a, b]$ (Laurent polynomials) and \begin{enumerate}[label=(\arabic*)] \item $\text{deg}(P) \preccurlyeq (m, n - m)$ and $\text{deg}(Q) \preccurlyeq (m, n - m)$ for $m = |s|$ the Hamming weight of $s$. \item $P$ has parity-$n \pmod{2}$ under $(a, b) \mapsto (a^{-1}, b^{-1})$ and $Q$ has parity-$(n - 1) \pmod{2}$ under $(a, b) \mapsto (a^{-1}, b^{-1})$. \item $P$ has parity $m \pmod{2}$ under $a \mapsto -a$ and parity $n - m \pmod{2}$ under $b \mapsto -b$, and $Q$ has parity $m - 1 \pmod{2}$ under $a \mapsto -a$ and parity $n - m - 1 \pmod{2}$ under $b \mapsto -b$. \item For all $(a, b) \in \mathbb{T}^2$ the relation $|P|^2 + |Q|^2 = 1$ holds.
\item The statement of Conjecture~\ref{conj:m_qsp_factorability} holds as given. \end{enumerate} This result is posed similarly to that of Theorem~\ref{thm:major_qsp_properties}, and its proof is similar up to the use of Conjecture~\ref{conj:m_qsp_factorability}, as shown in Appendix~\ref{appx:m_qsp_proofs}. \end{theorem} \begin{definition} \label{def:stable_polynomials} Stable polynomials in one and many variables. A polynomial $p(z_1, z_2, \cdots, z_n)$ is said to be stable if it has no zeros in the multi-disk \begin{equation} \mathbb{D}^n = \{(z_1, z_2, \cdots, z_n) \in \mathbb{C}^n \text{ s.t. } |z_1| \leq 1, |z_2| \leq 1, \cdots, |z_n| \leq 1\}. \end{equation} There exist other definitions where one considers polynomials with no zeros in a product of upper half-planes, but these are equivalent up to conformal maps. This family of polynomials is quite restricted, though its members are ubiquitous in classical signal processing and control theory. In the single- and two-variable cases it is further known that all such polynomials are determinantal, i.e., that for every $p(z_1, z_2)$ of degree $(n_1, n_2)$ with no zeros in the bidisk and $p(0, 0) = 1$, one can write $p(z_1, z_2) = \det{(I - KZ)}$, where $Z$ is an $(n_1 + n_2)\times(n_1 + n_2)$ diagonal matrix with $z_1$ or $z_2$ in each diagonal position, and $K$ is a contraction. Here a contraction is a linear map with $\lVert K \rVert_{\infty} < 1$ \cite{geronimo_stable_06, grinshpan_stable_16}. \end{definition} \begin{theorem} \label{thm:partial_m_qsp_properties} Let $n \in \mathbb{N}$ be fixed. Let $\tilde{P}, \tilde{Q} \in \mathbb{C}[a, b]$ (Laurent) such that, for all $(a, b) \in \mathbb{T}^2$, $\tilde{P}(a, b)$ and $\tilde{Q}(a, b) \in \mathbb{R}$.
There exist some \emph{stable} $P, Q \in \mathbb{C}[a, b]$ (Laurent) satisfying conditions (1-4) of Theorem~\ref{thm:major_m_qsp_properties} such that $\tilde{P} = \Re(P)$ and $\tilde{Q} = \Re(Q)$ if and only if $\tilde{P}, \tilde{Q}$ satisfy conditions (1-3) of Theorem~\ref{thm:major_m_qsp_properties} and for all $(a, b) \in \mathbb{T}^2$ \begin{equation} f(a, b) = 1 - |\tilde{P}|^2 - |\tilde{Q}|^2 > 0, \end{equation} where $f(a, b)$ is a Laurent polynomial (note the strict inequality), \emph{and} the additional property holds that the doubly-indexed Toeplitz matrix $c_{u - v}$ (defined in Appendix~\ref{appx:m_qsp_proofs}) populated entirely by differences of the (finitely many) Fourier components of $1/f$ satisfies \begin{equation} \label{eq:low_rank_condition} \left[(c_{u - v})_{u, v \in \Lambda\setminus\{0, 0\}}\right]^{-1}_{\subalign{&\{1, 2, \cdots, m\}\times\{0\} \\ &\{0\}\times\{1, 2, \cdots, n - m\}}} = 0 \end{equation} where $\Lambda = \{0, \cdots, m\}\times\{0, \cdots, n - m\}$ for $m = |s|$ as before. In this case $f(a, b)$ can be written as the square of a \emph{stable} Laurent polynomial. The same result holds if we consider purely imaginary $\tilde{P}, \tilde{Q}$ on $\mathbb{T}^2$, and we can choose either $\tilde{P} \equiv 0$ or $\tilde{Q} \equiv 0$ if desired for simplicity. If the conditions hold, the satisfying $P, Q$ can be computed efficiently in $n$. For an explicit construction of the Toeplitz matrix $c_{u - v}$ refer to the proof of this theorem in Appendix~\ref{appx:m_qsp_proofs}, which relies on intricate results in \cite{mv_frt_04}. Note also that this result is independent of Conjecture~\ref{conj:m_qsp_factorability}. \end{theorem} \begin{conjecture} \label{conj:m_qsp_factorability} The FRT = QSP conjecture.
In both standard QSP and M-QSP, one is interested in the properties of the single-qubit unitary embedding certain symmetric (trigonometric \& Laurent) polynomial transformations, i.e., \begin{equation} \label{eq:unitary_form_conjecture} U = \begin{pmatrix} P & Q\\ -Q^* & P^* \end{pmatrix}, \end{equation} where $P, Q$ are one- or two-variable polynomials in each setting respectively. This matrix has determinant $1$; this imposes relations between the coefficients of $P$ and $Q$. Specifically, it means that certain sums of products of coefficients constituting the polynomial defining the determinant, $|P|^2 + |Q|^2 = 1$, must be zero. In standard QSP, we make use of the fact that this implies $P_{d_A} = e^{i\phi_A}Q_{d_A}$ where $P_k$ is the coefficient of $a^k$ in $P$ and so on, and where $d_A$ is the largest positive degree of $a$ present. It is this simple constraint that permits QSP phase read-off (the QSP phase in fact relates closely to $\phi_A$). In the multivariable case the required property no longer holds manifestly, and must be proven; crucially, and this is why we term this the FRT = QSP conjecture, in the single-variable setting the use of the Fejér-Riesz theorem (FRT) in Theorem~\ref{thm:partial_qsp_properties} allows QSP phase read-off precisely because all unitaries of the form given in Eq.~\ref{eq:unitary_form_conjecture} satisfy $P_{d_A} = e^{i\phi_A}Q_{d_A}$, and thus all FRT-generated completions lead to achievable QSP unitaries. In the multivariable setting this same FRT = QSP equivalence corresponds to a strong property of unitary matrices with multivariable polynomials as elements. This property is the conjecture of interest, and there are reasons discussed in Remark~\ref{rem:conjecture_frt_relation} for its reasonableness. At an intuitive level, it is the statement that unitary matrix completions with minimal symmetries and constraints on their elements (basically parity and norm) are always products of low-degree unitary iterates. 
This is, in again another sense, an extension of the FRT for unitary matrix completions. We depict some of these relationships further in Fig.~\ref{fig:conjecture_flow_diagram}. Having developed a little motivation, we give a precise statement for FRT = QSP. Given a unitary matrix satisfying the conditions (1-4) of Theorem~\ref{thm:major_m_qsp_properties} (these are the required symmetries and constraints), the single-variable Laurent polynomial coefficients of $P, Q$ satisfy one or both of the following relations for $\phi_A, \phi_B \in \mathbb{R}$: \begin{align} \sum_{k = -d_B}^{d_B} P_{d_A, k} b^{k} &= e^{i\phi_A}\left[\sum_{k = -d_B}^{d_B} Q_{d_A, k} b^{k}\right]\\[0.3em] \sum_{k = -d_A}^{d_A} P_{k, d_B} a^{k} &= e^{i\phi_B}\left[\sum_{k = -d_A}^{d_A} Q_{k, d_B} a^{k}\right]. \end{align} Here $d_A, d_B$ are the maximal positive degree of $a, b$ respectively appearing in $P, Q$. In other words, either of the two single-variable Laurent polynomial coefficients of maximal degree terms in the other variable must differ only by an overall phase. As shorthand we will denote this set of conditions by the following: \begin{align} P_{d_A}(b) &= e^{i\phi_A}Q_{d_A}(b)\\ P_{d_B}(a) &= e^{i\phi_B}Q_{d_B}(a). \end{align} Here $P_{d_A}(b)$ denotes the single variable Laurent polynomial (in $b$) coefficient of the $a^{d_A}$ term of $P$, and analogously for the other terms. \end{conjecture} Simple inspection of a small subset of the equations relating the two polynomials $P, Q$ induced by condition (4) of Theorem~\ref{thm:major_m_qsp_properties} indicates that these polynomials could in general differ by an overall phase \emph{and a conjugation of some subset of their root multisets}, as opposed to the requirement of Conjecture~\ref{conj:m_qsp_factorability}. 
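These relations can be probed numerically for small protocols: evaluate $P, Q$ on a grid, extract the leading single-variable Fourier coefficient in one variable, and test whether the two extracted polynomials differ by a constant unimodular phase. A hedged sketch for the length-$2$ protocol with $s = 10$ (the phases and helper names are our own test case); here the $d_B$ relation can be checked to hold with $\phi_B = 2\phi_2$, while the $d_A$ relation generically fails, illustrating the "one or both" in the conjecture:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)

def rx(t):  # Laurent-picture iterate at a = e^{i t}: cos(t) I + i sin(t) X
    return np.cos(t) * I2 + 1j * np.sin(t) * SX

def rz(phi):
    return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

phis = [0.37, 0.91, -0.54]  # arbitrary fixed phases (our own choice)

def PQ(ta, tb):  # s = 10: the A iterate, then the B iterate
    U = rz(phis[0]) @ rx(ta) @ rz(phis[1]) @ rx(tb) @ rz(phis[2])
    return U[0, 0], U[0, 1]

def top_b_coeff(entry, ta, N=8):
    """Fourier coefficient of b^{d_B} (here d_B = 1) of P or Q at fixed a = e^{i ta}."""
    tbs = 2 * np.pi * np.arange(N) / N
    vals = np.array([PQ(ta, tb)[entry] for tb in tbs])
    return np.mean(vals * np.exp(-1j * tbs))

ratios = np.array([top_b_coeff(0, ta) / top_b_coeff(1, ta) for ta in [0.3, 1.1, 2.0]])
# For this ordering the ratio is the constant phase e^{2 i phi_2}.
```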
Consequently this conjecture is strictly stronger than what is induced by the highest-order terms of condition (4), though there are reasons to suspect its truth as discussed in Sec.~\ref{sec:discussion} and Remark~\ref{rem:conjecture_frt_relation}. \begin{remark} \label{rem:conjecture_frt_relation} As discussed above, Conjecture~\ref{conj:m_qsp_factorability} (FRT = QSP) is necessary and sufficient for constructive theorems of M-QSP that rely only on the multivariable Fejér-Riesz theorem. Put colloquially, it is the statement that unitary matrix completions furnished by the FRT always allow themselves to be broken into products of low-degree unitary iterates (in this case the $X, Z$ rotations of QSP). This is a strong condition, but we discuss intuition for why it might hold, as well as the usefulness of the FRT for understanding the M-QSP ansatz (Def.~\ref{def:m_qsp}) even when this conjecture does not hold. We also show that M-QSP suggests useful elaborations on the FRT. Separate in-progress numerical work shows that the conjecture holds for small-length M-QSP protocols; it is the lack of a simple inductive or otherwise bootstrappable argument that is of concern. That said, the path towards a solution may be obvious when rephrased in terms of other subfields of algebraic geometry; methods towards showing the conjecture involve looking at each of the $\mathcal{O}(d_A d_B)$ homogeneous polynomial equations relating the coefficients of $P, Q$ according to condition (4) in Theorem~\ref{thm:major_m_qsp_properties}. The study of simultaneous systems of multivariable polynomial equations is rich, though full of its own conjectures.
Moreover, by simple counting arguments, we see that the coefficients of the leading degree terms of the pairs $P_{d_A}(b), Q_{d_A}(b)$ and $P_{d_B}(a), Q_{d_B}(a)$ in M-QSP are far more constrained than in the single-variable case; namely each of the $\mathcal{O}(d_B)$ or $\mathcal{O}(d_A)$ scalar coefficients respectively of these single-variable polynomials itself enters into $\mathcal{O}(d_A d_B)$ other relations for lower-order homogeneous polynomial equations constituting the determinant. Consequently it is not unreasonable that the roots of at least one of these pairs of polynomials might be forced to differ by no non-trivial conjugations, lest the unitarity of the overall transform be contradicted. In turn, such constructions would provide an entirely new setting for realizing even \emph{non-stable} matrix completions, for which FRT-like theorems have had nothing to say \cite{mv_frt_04}. \end{remark} The FRT = QSP conjecture is not all bad news; note that any \emph{stable} polynomial transform possible to embed in an M-QSP protocol will, by the uniqueness promise \cite{mv_frt_04} of the multivariable Fejér-Riesz theorem up to this stability, lead to a valid M-QSP unitary whose corresponding phases can be efficiently determined by Corollary~\ref{cor:m_qsp_phase_readoff}. Moreover, as is the case in standard QSP, often one wishes not to determine a polynomial transformation directly and then determine its phases, but to optimize over the space of possible QSP transformations toward a well-functioning protocol; nothing in the construction of M-QSP prevents this, and results in multivariable approximation theory, as given in Sec.~\ref{sec:discussion}, suggest no major hits to efficiency either.
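Stability in the sense of Def.~\ref{def:stable_polynomials} can at least be screened numerically: sample $|p|$ over a grid of the closed bidisk and look for a vanishing minimum. The sketch below (a heuristic screen, not a proof, with polynomials of our own choosing) also checks the determinantal representation $p = \det(I - KZ)$ for a degree-$(1,1)$ stable example with contraction $K$:

```python
import numpy as np

def min_abs_on_bidisk(p, grid=16):
    """Heuristic stability screen: minimum of |p| over a sampled closed bidisk."""
    pts = [r * np.exp(1j * t)
           for r in np.linspace(0, 1, grid)
           for t in np.linspace(0, 2 * np.pi, grid, endpoint=False)]
    return min(abs(p(z1, z2)) for z1 in pts for z2 in pts)

stable = lambda z1, z2: 1 - 0.3 * z1 - 0.3 * z2   # |0.3 z1 + 0.3 z2| <= 0.6 < 1
unstable = lambda z1, z2: 1 - z1 * z2             # vanishes at z1 = z2 = 1

# Determinantal form for the stable example: K = 0.3 * ones(2, 2) is a
# contraction (spectral norm 0.6), and det(I - K Z) reproduces p exactly.
K = 0.3 * np.ones((2, 2))
z1, z2 = 0.4 * np.exp(0.7j), 0.9 * np.exp(-1.2j)
det_form = np.linalg.det(np.eye(2) - K @ np.diag([z1, z2]))
```

The grid screen can only certify instability (by exhibiting a near-zero); certifying stability requires the genuine algebraic machinery cited above.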
\begin{corollary} \label{cor:m_qsp_phase_readoff} Given a unitary which satisfies conditions (1-4) of Theorem~\ref{thm:major_m_qsp_properties} and is promised to have the form of Definition~\ref{def:m_qsp}, the bitstring $s$ and M-QSP phases $\Phi$ can be efficiently determined by a classical algorithm given $P, Q$. Note that the statement of this corollary trivially circumvents Conjecture~\ref{conj:m_qsp_factorability}. Proof is given in Appendix~\ref{appx:m_qsp_proofs}, within the proof of Theorem~\ref{thm:major_m_qsp_properties}. \end{corollary} This completes a minimal treatment of M-QSP. Evidently its defining theorems are not as succinct as their single-variable analogues. This is for two reasons: (1) the addition of the non-intuitive requirements of the multivariable Fejér-Riesz theorem in Theorem~\ref{thm:partial_m_qsp_properties}, and (2) the reliance on Conjecture~\ref{conj:m_qsp_factorability} in Theorem~\ref{thm:major_m_qsp_properties}. That said, there is positive news for these theorems. Firstly, the initial hurdle of whether the exponentially many possible orderings of $A, B$ iterates (determined by the bitstring $s$ of Definition~\ref{def:m_qsp}) prevent efficient phase read-off in principle is shown to be a non-issue by Corollary~\ref{cor:m_qsp_phase_readoff}. Secondly, Conjecture~\ref{conj:m_qsp_factorability} is simple to state and has a reasonable chance of being verified or disproven by counterexample through techniques in the theory of systems of multivariable polynomial equations. Finally, the uniqueness guarantee of the multivariable Fejér-Riesz theorem shows that if any sufficient condition for the embeddability of a \emph{stable} multivariable polynomial transform in an M-QSP protocol is found, then this theorem can immediately be used to compute completions of unitary matrices, leading to efficient M-QSP phase read-off. Many paths are possible to close the loop on a full characterization of M-QSP.
Nevertheless, the necessary and sufficient conditions given in Eq.~\ref{eq:low_rank_condition} are opaque, and the types of functions with compact Fourier support that satisfy them are hard to visualize. Sec.~\ref{sec:discussion} is devoted to these caveats. However, while the space of achievable multivariable polynomials of a given degree with M-QSP is not all possible polynomials of said degree up to bound and parity constraints, we also give simple arguments for why this should not even have been expected, and how slightly increasing the degree may resolve these issues. Moreover, the numerical performance of algorithms optimizing over M-QSP protocols approximating interesting multivariable functions should be relatively good, given known results in multivariable approximation theory. \section{Worked examples} \label{sec:worked_examples} That the M-QSP ansatz can generate complicated multivariable transformations of the eigenvalues of two commuting operators may be intuitively clear, but it is worthwhile to provide a couple of worked examples and visualizations to this effect. In this section we briefly examine two choices of parameterization for M-QSP protocols in which the embedded polynomials have closed form, and discuss their utility. \begin{enumerate}[label=(\arabic*)] \item Trivial M-QSP. Recalling that the bit string $s$ indicates the order in which iterates are applied, take $s$ to be the alternating string of length $2n$, i.e., $s = [01]^n$ (where this parenthetical shorthand indicates repetition of the length-two bit-string). For the QSP phases $\Phi$ consider the all-zeros list $\Phi = \{0, 0, \cdots, 0\}$ of length $2n + 1$. The astute observer may note that in this setting the ordering of $s$ does not matter. \item XYZ M-QSP. Take $s$ indicating the order in which iterates are applied to again be the alternating string of length $2n$.
Take the $k$-th element of $\Phi$ to be $(-1)^{k} \pi/4$, i.e., $\Phi = \{\pi/4, -\pi/4, \cdots, -\pi/4, \pi/4\}$ of length $2n + 1$. \end{enumerate} We claim that these two ansätze give two useful classes of M-QSP protocols, and that the latter can be used to answer certain promise problems in noiseless settings which would have had no obvious deterministic quantum solution without the mechanisms of M-QSP. In fact, much like in the single-variable setting, these protocols embed polynomial transformations closely related to the Chebyshev polynomials, which are ubiquitous in the theory of efficient functional approximation. While proposals for multivariable generalizations to Chebyshev polynomials are diverse \cite{lidl_72, dunkl_orthogonal_14, beerends_chebyshev_91}, and the map between $\Phi$ and the expansions of $P, Q$ in a Chebyshev basis is highly non-trivial even for standard QSP, it is useful that such simply described countably infinite families of multivariable polynomials can be achieved. It remains for future study to determine if M-QSP suggests an alternative construction for Chebyshev polynomials over many variables. One can observe that the transformations appearing in the top-left ($P$) and top-right ($Q$) corners of unitaries described in Lemmas~\ref{lemma:trivial_qsp} and \ref{lemma:xyz_qsp} are not only \emph{not} polynomials in the variables $x_a = (a + a^{-1})/2$ and $x_b = (b + b^{-1})/2$, but do not satisfy the well-known orthogonality relations that define Chebyshev polynomials; nevertheless, these are still real (Laurent) polynomials of bounded magnitude on the torus, and achieve the same maximal derivative (i.e., proportional to their degree) properties expected of Chebyshev polynomials. \begin{lemma} \label{lemma:trivial_qsp} Trivial M-QSP. 
For so-called trivial M-QSP protocols defined in the previous paragraph, the polynomials $P, Q$ defining the resulting unitary have the following form for fixed $n$: \begin{align} P(a, b) &= T_{n}\left[\frac{1}{2}\left(a + \frac{1}{a}\right)\right] T_{n}\left[\frac{1}{2}\left(b + \frac{1}{b}\right)\right]\nonumber\\[0.5em] &+ \frac{1}{4}\left(a - \frac{1}{a}\right)\left(b - \frac{1}{b}\right) U_{n-1}\left[\frac{1}{2}\left(a + \frac{1}{a}\right)\right] U_{n-1}\left[\frac{1}{2}\left(b + \frac{1}{b}\right)\right],\\[0.5em] Q(a, b) &= \frac{1}{2} \left(b - \frac{1}{b}\right) T_{n}\left[\frac{1}{2}\left(a + \frac{1}{a}\right)\right] U_{n-1}\left[\frac{1}{2}\left(b + \frac{1}{b}\right)\right]\nonumber\\[0.5em] &+ \frac{1}{2} \left(a - \frac{1}{a}\right) T_{n}\left[\frac{1}{2}\left(b + \frac{1}{b}\right)\right] U_{n-1}\left[\frac{1}{2}\left(a + \frac{1}{a}\right)\right], \end{align} where $T_{k}(x)$ and $U_{k}(x)$ are the $k$-th Chebyshev polynomials of the first and second kind evaluated on $x$. It is a small fun exercise to show that these polynomials must be Laurent polynomials in $(ab)$, exhibiting no cross terms of $a, b$ with differing exponents. Note that $|P|^2 + |Q|^2 = 1$ must hold, and thus each of these polynomials is bounded in magnitude by $1$ on $\mathbb{T}^2$. \end{lemma} \begin{lemma} \label{lemma:xyz_qsp} XYZ M-QSP. 
For the XYZ M-QSP protocol defined in the previous paragraph, the polynomials $P, Q$ defining the resulting unitary have the following form for fixed $n$: \begin{align} P(a, b) &= T_{n}\left[\frac{1}{4}\left(a + \frac{1}{a}\right)\left(b + \frac{1}{b}\right)\right] + \frac{i}{4}\left(a - \frac{1}{a}\right)\left(b - \frac{1}{b}\right) U_{n - 1}\left[\frac{1}{4}\left(a + \frac{1}{a}\right)\left(b + \frac{1}{b}\right)\right],\\[0.5em] Q(a, b) &= \frac{1}{4} \left[ \left(a - \frac{1}{a}\right)\left(b + \frac{1}{b}\right) - i\left(a + \frac{1}{a}\right)\left(b - \frac{1}{b}\right) \right] U_{n-1}\left[\frac{1}{4}\left(a + \frac{1}{a}\right)\left(b + \frac{1}{b}\right)\right], \end{align} where $T_{k}(x)$ and $U_{k}(x)$ are the $k$-th Chebyshev polynomials of the first and second kind evaluated on $x$. Note that in most instances these polynomials are simpler in the $\theta$ picture, as $(a \pm a^{-1})$ and $(b \pm b^{-1})$ have nicer expressions as cosine and sine of $\theta_a, \theta_b$ for $a = e^{i\theta_a}, b = e^{i\theta_b}$. The proof of this fact is by simple inspection according to the known recurrence relation for the Chebyshev polynomials. Note that $|P|^2 + |Q|^2 = 1$ must hold, and thus each of these polynomials is bounded in magnitude by $1$ on $\mathbb{T}^2$. Note that we call this XYZ M-QSP because the chosen phases can be seen to conjugate one subset of the iterates to change them from $X$ to $Y$ rotations on the Bloch sphere, \begin{equation} e^{i\theta_a \sigma_x} \mapsto e^{-i\sigma_z\pi/4}e^{i\theta_a \sigma_x}e^{i\sigma_z\pi/4} = e^{i\theta_a \sigma_y}, \end{equation} and thus the ansatz looks like alternating $X$ and $Y$ rotations by $\theta_a, \theta_b$. \end{lemma} We can make a few cursory observations about the polynomial transformations embedded by these ansätze. 
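Both closed forms are easy to check numerically in the $\theta$ picture with $a = e^{i\theta_a}$, $b = e^{i\theta_b}$, where $(a + a^{-1})/2 = \cos\theta_a$ and $(a - a^{-1})/2 = i\sin\theta_a$. The sketch below (helper names are our own) multiplies out the $2n$ zero-phase iterates of the trivial protocol and compares against Lemma~\ref{lemma:trivial_qsp}, then spot-checks the normalization $|P|^2 + |Q|^2 = 1$ for the XYZ closed form of Lemma~\ref{lemma:xyz_qsp}:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)

def cheb_T(k, x):  # first-kind Chebyshev via the recurrence
    t0, t1 = 1.0, x
    for _ in range(k):
        t0, t1 = t1, 2 * x * t1 - t0
    return t0

def cheb_U(k, x):  # second-kind Chebyshev via the recurrence
    u0, u1 = 1.0, 2 * x
    for _ in range(k):
        u0, u1 = u1, 2 * x * u1 - u0
    return u0

n, ta, tb = 3, 0.7, 0.4
xa, xb = np.cos(ta), np.cos(tb)

# Trivial protocol: 2n alternating iterates, all phases zero -> (B A)^n.
BA = (np.cos(tb) * I2 + 1j * np.sin(tb) * SX) @ (np.cos(ta) * I2 + 1j * np.sin(ta) * SX)
U_triv = np.linalg.matrix_power(BA, n)
P_circ, Q_circ = U_triv[0, 0], U_triv[0, 1]

# Lemma (trivial M-QSP) closed form, using (a - 1/a)/2 = i sin(ta), etc.
P_lemma = (cheb_T(n, xa) * cheb_T(n, xb)
           - np.sin(ta) * np.sin(tb) * cheb_U(n - 1, xa) * cheb_U(n - 1, xb))
Q_lemma = 1j * (np.sin(tb) * cheb_T(n, xa) * cheb_U(n - 1, xb)
                + np.sin(ta) * cheb_T(n, xb) * cheb_U(n - 1, xa))

# Lemma (XYZ M-QSP) closed form: check the normalization on the torus.
x = np.cos(ta) * np.cos(tb)
P_xyz = cheb_T(n, x) - 1j * np.sin(ta) * np.sin(tb) * cheb_U(n - 1, x)
Q_xyz = (np.cos(ta) * np.sin(tb) + 1j * np.sin(ta) * np.cos(tb)) * cheb_U(n - 1, x)
```

In the trivial case the product telescopes to $e^{i n (\theta_a + \theta_b)\sigma_x}$, so the comparison also confirms that the embedded transform is a function of $\theta_a + \theta_b$ alone.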
For trivial M-QSP, the ordering of iterates cannot matter, and thus any embedded transformation must also be a polynomial purely of $(ab)$ as a single variable; this is depicted in Fig.~\ref{fig:worked_examples}. Likewise, any time a product of $A, B$ iterates is repeated without intervening $Z$ rotations, a corresponding reduction to a single-variable standard QSP protocol is possible. In contrast, the polynomial transforms of XYZ M-QSP for $n \geq 2$ do not factor in this neat way, and we can come up with somewhat contrived but interesting promise problems for which XYZ M-QSP provides a neat solution. \begin{problem} \label{prob:multi_channel_discrimination} Consider the following multi-channel discrimination problem. A querent is given free access to two oracles $A = e^{i\theta_a \sigma_x}$ and $B = e^{i\theta_b \sigma_x}$ and is told that one of the two following scenarios holds: \begin{enumerate}[label=(\arabic*)] \item $(\theta_a, \theta_b)$ is from the four-element set $\{(0, \pm \pi/2)\}\cup\{(\pm\pi/2, 0)\}$. \item $\theta_a, \theta_b$ satisfy the relation $4\cos^2{\theta_a}\cos^{2}{\theta_b} = 1$. \end{enumerate} It can be shown that these two cases are disjoint. Moreover, this discrimination problem can be solved deterministically in $6$ total queries using M-QSP (shown in (II) of Fig.~\ref{fig:worked_examples}), while for two quantum parties given access to $A, B$ separately, there is no such algorithm, for the same reasons as those discussed in \cite{noisy_channel_rossi_22, rossi_qht_21}. Additionally, while not discussed here, this query complexity persists even in the presence of small noise \cite{noisy_channel_rossi_22}. That is, no two quantum computing parties connected only by classical communication and sharing no entanglement, given access to one each among the oracles in this problem, can decide this problem deterministically with zero error, nor can this problem be reduced to single-variable QSP by substitution.
Consequently M-QSP has permitted the deterministic computation of a structured \emph{joint} property that is not related to sums or differences of encoded signals (i.e., by reduction to single-variable QSP in $a^n b^m$ for integers $n, m$). Here, M-QSP both outperforms serial application of QSP protocols on two oracles individually, and does so for a highly non-trivial functional relation. Intuitively, note that this advantage relies crucially on an inability to decompose the discrimination regions depicted in Fig.~\ref{fig:worked_examples} into blocks on which the circuit output should be constant for a given discrimination problem. Thus two parties, each given access to one among the two oracles, cannot with certainty disambiguate these joint properties by projecting onto either the $\theta_a$ or $\theta_b$ axis and classically computing the function outside the quantum circuit. \end{problem} \begin{figure} \caption{Contour plots of $|P(a, b)|^2$ for the (I) trivial M-QSP protocol with $n = 1$ and the (II) XYZ M-QSP protocol with $n = 3$. Note that by definition $P$ is bounded in magnitude by $1$. Here $\theta_a, \theta_b$ satisfy $\cos{\theta_a} = (a + a^{-1})/2$ and $\cos{\theta_b} = (b + b^{-1})/2$ and analogously for sine. The toroidal nature of the domain is evident: the plots are also symmetric under $\theta_a \mapsto -\theta_a$, equivalently inversion parity $a \mapsto a^{-1}$, as well as $\theta_a \mapsto \theta_a + \pi$, equivalently negation parity $a \mapsto -a$, and likewise for the $b$ variables. The red line overlay in (II) represents the relation $4\cos^2{\theta_a}\cos^2{\theta_b} = 1$, as discussed in the text. Note finally that (I) is a function purely of $(\theta_a + \theta_b)$ as expected.} \label{fig:worked_examples} \end{figure} \begin{remark} Note that for trivial M-QSP protocols the iterates $A, B$ commute; consequently the embedded functions are single-variable Laurent polynomials in the joint variable $ab$.
Clearly such polynomials are not stable, and thus these completions are not furnished by Theorem~\ref{thm:m_frt}. However, through this single-variable reduction these completions \emph{are enabled} by the simpler Fejér-Riesz lemma. It remains an open question whether the application of Theorem~\ref{thm:m_frt} fails only in the settings where a single-variable reduction similar to this case exists. If this is the case, one could consider some non-trivial class M-QSP\textsuperscript{$\star$} of multivariable protocols which admit no single-variable reduction, and thus always (up to small perturbation) permit the application of the FRT as given. This would represent a substantial elaboration on the theory of unitary matrix completions. \end{remark} The central message from this section is a precise point about the utility of M-QSP for computing joint functions of oracular parameters. While such functions could always in principle be computed approximately classically \emph{outside} a quantum circuit by first estimating the relevant parameters tomographically, this can be extremely costly, and may provide more information than a querent wishes to know. As with standard QSP, M-QSP allows precise and \emph{coherent} control over both subsystem dynamics and information extracted by measurement; known structure of the data encoded in the oracle can be leveraged into query complexity savings, written in the language of functional approximation theory. If an M-QSP computation is oblivious to individual, expensive-to-determine properties of each oracle, then it can avoid lower bounds for computing these unimportant properties! Its functional transforms can be carefully controlled, defined by interpolatory or approximative constraints, and cascaded in the application of many QSP protocols without unnecessary measurement and amplification. 
\section{Discussion} \label{sec:discussion} In this work we have developed a theory of multivariable quantum signal processing (M-QSP) in two variables, and shown a variety of results about its properties through direct analogies to theorems of standard QSP. Specifically, we give Theorem~\ref{thm:major_m_qsp_properties}, which shows that up to Conjecture~\ref{conj:m_qsp_factorability}, M-QSP polynomial transforms are only as constrained as those of standard QSP. Moreover, we show in Corollary~\ref{cor:m_qsp_phase_readoff} that given a valid M-QSP protocol there is no obstacle to determining its constitutive real phases $\Phi$ and the order $s$ in which these oracles are applied. Finally, we show Theorem~\ref{thm:partial_m_qsp_properties}: that given a partially-defined M-QSP protocol, the existence of a \emph{stable} completion of said protocol relies solely on the guarantees of a multivariable Fejér-Riesz theorem. This result connects questions in QSP-like algorithms to nullstellensätze, furnishes alternative proofs for results in standard QSP, and opens a variety of concrete questions involving matrix completions in a quantum information setting. The remaining purpose of this section is fourfold: (1) state caveats related to M-QSP insofar as its properties and guarantees have not been fully characterized, (2) discuss the numerical outlook for M-QSP, (3) give a brief overview of how M-QSP informs a lifted, many-qubit M-QSVT, and finally (4) state avenues of ongoing research and their basal open questions. \subsection{Caveats and reminders} The story of M-QSP is not completely resolved in this work. The results in previous sections provide only a partial characterization of the expressive powers of the M-QSP ansatz, and do so in some cases with respect to specific conjectures and caveats. Here we give succinct reminders tethered to these limitations. 
\begin{enumerate}[label=(\arabic*)] \item The space of embeddable polynomial transforms in M-QSP is smaller than the space of polynomials with bounded norm and definite parity up to a given degree (as opposed to standard QSP). We argue that one should not have expected to be able to achieve all polynomial transformations in M-QSP as permitted by the norm and parity constraints given in Theorem~\ref{thm:partial_qsp_properties}. This is a simple counting argument: for a multivariable polynomial of degree $(n, m)$ with definite parity and bounded norm on the bitorus, there exist $\mathcal{O}(nm)$ possible small independent perturbations of the coefficients which preserve these properties, but only $\mathcal{O}(n + m)$ real QSP phases parameterizing M-QSP ansätze. It is precisely this restriction of the achievable transforms to a low-rank subspace of the norm-bounded, definite-parity transforms that the statement of the multivariable Fejér-Riesz theorem captures. \item The multivariable Fejér-Riesz theorem (FRT) can determine if there exist unitary completions for partially defined polynomial transformations if and only if those completions are \emph{stable} polynomial transforms. Stable polynomials in one and two variables, as discussed in Definition~\ref{def:stable_polynomials}, have special properties. It is also known that there exist polynomial transforms achievable by M-QSP which are \emph{not stable}, and thus for which the FRT can say little in the two-variable case. It is an interesting question whether such instability implies the ability to reduce the corresponding transformation to a single-variable setting (or else be infinitesimally perturbed to a stable setting). In the single-variable case, such unstable polynomial embeddings can always be converted to stable ones as all roots are isolated; a similar transformation for the multivariable case is not possible in general \cite{geronimo_stable_06}.
The conjecture (FRT = QSP in Conjecture~\ref{conj:m_qsp_factorability}) that, like in the single-variable case, the multivariable FRT is the only non-trivial mechanism underlying M-QSP, has a succinct statement. Nevertheless, the methods to prove this statement depend on the existence of solutions to simultaneous multivariable polynomial equations, which is a hard problem in general. Any theory of M-QSP must overcome this difficulty. \item The guarantees of M-QSP critically rely on the two queryable oracles having the same eigenbasis. Consequently lifting M-QSP to multiple qubits requires identical singular vectors for the relevant block encoded operators. While this restriction is unfortunate, it is not unexpected; considering a theorem of QSP or QSVT for general interleaved non-commuting operators not only exposes us to pathological cases where the oracle set is complete for unitary approximation or quantum computation, but destroys the assumptions of Jordan's lemma \cite{jordan_75} (that two interleaved rotations or reflections select invariant subspaces); such results fundamentally enable QSVT \cite{gslw_19}, and cannot be abandoned without drastically altering the simple utility of the ansatz. \end{enumerate} \subsection{The numerical outlook for M-QSP} While the theoretical tools from algebraic geometry to understand multivariable polynomials are necessarily not as strong as their single variable analogues \cite{mv_frt_04}, the theory of multivariable polynomial approximation and interpolation is well-developed and in general spells good news for those looking to approach M-QSP from a numerical perspective. We give known results in multivariable approximation theory, showing that there is no fundamental barrier to multivariable analogues for the multiple \emph{classical} subroutines that are used in standard QSP. \begin{enumerate}[label=(\arabic*)] \item Multivariable Stone-Weierstrass theorems. 
Most basically, it is known that multivariable trigonometric polynomials are dense in the space of continuous functions on the multitorus, just as in the single-variable setting \cite{polya_szego_analysis}. \item Multivariable Jackson-type theorems. Jackson's theorems or Jackson's inequalities relate the smoothness of a function and the required degree of a trigonometric polynomial approximation to a desired uniform error. Such theorems exist in the multivariable case \cite{newman_jackson_64, ragozin_jackson_70} and have fundamentally the same character as in the single-variable case, meaning the required degree's best-case polylogarithmic dependence on approximative error achieved by QSP is not forbidden in the M-QSP setting. That said, directly comparing degree in the single-variable and multivariable context has its own caveats \cite{trefethen_hypercube_17}. \item Multivariable Remez-type or Parks-McClellan algorithms. Beyond ensuring that there exist good trigonometric polynomial approximations to desired continuous functions, much work related to QSP has centered on good classical algorithms for efficiently computing said polynomial approximations, derived from the well-known signal processing Remez-type/Parks-McClellan algorithms \cite{remez, grenez, parks_mcclellan_72}. These algorithms have multivariable counterparts with similar performance as in the single-variable case \cite{watson_m_remez_75}, although their theoretical guarantees are less well-understood. \item Incorporating Fejér-Riesz constraints. If one seeks the stable factorizations guaranteed in \cite{mv_frt_04}, one may worry that optimizing over polynomials which satisfy the constraint given in Theorem~\ref{thm:partial_m_qsp_properties} may itself be difficult.
That said, numerical work in \cite{mv_frt_04, woerdeman_numerical_03, hachez_approx_05} supports the claim that this constraint leads to well-defined semi-definite programming problems that, while not fully characterized, appear empirically compatible with the common classical optimization algorithms discussed above. \item Optimizing over QSP protocols. Recent research in standard QSP has shown that imposing symmetries on QSP phases leads to improved performance for numerical optimization over QSP protocols, as well as guarantees of convexity of the search space under well-defined constraints on the desired embedded functional transform \cite{sym_qsp_21}. While we have observed evidence for the benefits of similar symmetrization for M-QSP in our own numerical simulations, formally showing similar guarantees as in the single-variable setting is a prime direction for future work. \end{enumerate} \subsection{Lifting M-QSP to M-QSVT} Much of the interest in QSP-like algorithms stems from their use at the core of algorithms for manipulating the eigenvalues or singular values of larger linear systems embedded in unitary matrices \cite{gslw_19, lloyd_hamiltonian_qsvt_21, lc_19_qubitization}. QSP can be thought of as the special case in which this linear operator is just the single scalar value in the top left of a representation of an SU(2) operator. We briefly review how to lift from QSP to QSVT, and show that M-QSP immediately enables a lifted M-QSVT for pairs of operators that share the same singular vectors (or equivalently, in the case that these operators are square, that they commute). The purpose of this section is not to exhaustively build a theory of M-QSVT and its uses, but to advertise the powerful fact that anything it is possible to prove about M-QSP leads directly and simply to a lifted, many-qubit context.
As the authors of \cite{gslw_19} succinctly note, interleaving any unitary $U$ with simple phase operators can induce polynomial transforms of the singular values of certain sub-blocks of $U$; the business of QSVT is to explicitly identify these sub-blocks. The preservation of these blocks under repeated interleaving is a corollary of an old result, Jordan's lemma \cite{jordan_75}, and is the reason that any constructive non-commuting version of M-QSP is destined for fundamental problems. Applications of this lemma are ubiquitous in other areas of quantum information, from quantum walks \cite{szegedy_04} to quantum interactive proofs \cite{marriott_watrous_05, fast_qma_jordan_09}, and are useful to understand. Following the conventions of \cite{gslw_19}, take $\mathcal{H}_U$ some finite-dimensional Hilbert space on which $U$ acts, and $\tilde{\Pi}, \Pi$ orthogonal projectors which locate the linear operator $A$ according to \begin{equation} A = \tilde{\Pi}U\Pi. \end{equation} We say here that $U$ block encodes $A$, and this idea is formalized through Sec.~4 of \cite{gslw_19}. Taking $d = \text{rank}{(\Pi)}$, $\tilde{d} = \text{rank}{(\tilde{\Pi})}$, and $d_{\rm min} = \min{(d, \tilde{d})}$, the singular value decomposition of $A$ is simple. Take $\{|\psi_j\rangle\}, j \in [d]$ and $\{|\tilde{\psi}_j\rangle\}, j \in [\tilde{d}]$ to be orthonormal bases for $\text{img}{(\Pi)}$ and $\text{img}{(\tilde{\Pi})}$ respectively, then \begin{equation} A = \sum_{j = 1}^{d_{\rm min}} \xi_{j}|\tilde{\psi}_j\rangle\langle \psi_j |, \end{equation} where each of the $\xi_{j}$ are in $[0, 1]$, and $\xi_{j} \geq \xi_{k}$ for $1 \leq j \leq k \leq d_{\rm min}$. 
The careful work of \cite{gslw_19} shows that the action of $U$ with respect to this basis acts as \begin{equation} \label{eq:qsvt_u_decomp} U = \cdots \oplus\; \bigoplus_{\xi_{j} \neq 0, 1} \begin{bmatrix} \xi_j & \sqrt{1 - \xi_j^2}\;\\ \sqrt{1 - \xi_j^2} & -\xi_j \end{bmatrix}^{\mathcal{H}_j}_{\tilde{\mathcal{H}}_j} \oplus \cdots, \end{equation} where the block's superscript $\mathcal{H}_{j}$ and subscript $\tilde{\mathcal{H}}_j$ indicate that it maps from the space spanned by the $|\psi_j\rangle$ to that spanned by the $|\tilde{\psi}_j\rangle$, each of which is a subspace of the original $\mathcal{H}_U$. The additional components in the direct sum, i.e., the action of $U$ on the rest of $\mathcal{H}_U$, can be written explicitly, and correspond to actions of $U$ outside the relevant images of $\Pi, \tilde{\Pi}$; this careful bookkeeping is documented neatly in \cite{gslw_19}. Their construction crucially introduces two further phase operators which can be shown to be easily constructable, namely \begin{align} e^{i\phi(2\Pi - I)} &= \cdots \oplus\; \bigoplus_{\xi_{j} \neq 0, 1} \begin{bmatrix} e^{i\phi} & 0\;\\ 0 & e^{-i\phi} \end{bmatrix}^{\mathcal{H}_j}_{\mathcal{H}_j} \oplus \cdots,\label{eq:phase_decomp_1}\\ e^{i\phi(2\tilde{\Pi} - I)} &= \cdots \oplus\; \bigoplus_{\xi_{j} \neq 0, 1} \begin{bmatrix} e^{i\phi} & 0\;\\ 0 & e^{-i\phi} \end{bmatrix}^{\tilde{\mathcal{H}}_j}_{\tilde{\mathcal{H}}_j} \oplus \cdots,\label{eq:phase_decomp_2} \end{align} which together with the action of $U$ allow us to recognize interleaved products of these operators as performing effectively (up to substitutions of rotations for reflections) QSP in each of the singular vector subspaces defined by these projectors. In effect, the blocks as shown in the above equations multiply only with each other all thanks to Jordan's lemma, and each of these sub-blocks looks just like a product of single-qubit rotations (or antiunitary reflections, in this setting). 
More concretely, drawing from Definition 15 and Theorem 17 of \cite{gslw_19}, the alternating protocol (assuming for the moment that the protocol's length $n$ is even) \begin{equation} U_{\Phi} = \prod_{j = 1}^{n/2} \left[e^{i\phi_{2j - 1}(2\Pi - I)}U^{\dag} e^{i\phi_{2j}(2\tilde{\Pi} - I)}U\right], \end{equation} can be shown to induce a desired polynomial transform in precisely the way QSP does, explicitly that \begin{equation} \tilde{\Pi}U_{\Phi}\Pi = \sum_{j = 1}^{d_{\rm min}} P(\xi_j)|\tilde{\psi}_j\rangle\langle \psi_j |, \end{equation} where this polynomial transformation is effectively (again up to a simple map between reflections and rotations) the same one as generated by the standard QSP protocol with QSP phases $\Phi$. All that is required for a polynomial transformation of degree $n$ is $n$ uses of $U, U^\dag$, the $\Pi, \tilde{\Pi}$-controlled NOT gates needed for the phase operators, single-qubit phase gates, and a constant number of auxiliary qubits. The usefulness of this basic construction in lifting M-QSP should be clear; given two linear operators $A_1, A_2$ which have \emph{the same singular vectors} in their singular value decompositions and which are located within unitaries $U_1, U_2$ of the same size, the decompositions given in Eqs.~\ref{eq:qsvt_u_decomp}, \ref{eq:phase_decomp_1}, and \ref{eq:phase_decomp_2} still hold. That is, \begin{equation} A_1 = \tilde{\Pi} U_1 \Pi, \quad A_2 = \tilde{\Pi} U_2 \Pi, \end{equation} where only if $A_1, A_2$ share singular vectors do the subspaces discussed above remain invariant under operator interleaving. A simpler instance of this phenomenon is when $U_1, U_2$ block encode commuting operators, in which case this condition is obviously satisfied.
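The bookkeeping above can be illustrated in a few lines. The sketch below assumes a toy block encoding of a diagonal $A$ (all names and phase choices are illustrative, not drawn from \cite{gslw_19}), and checks that the interleaved product acts as an independent single-qubit QSP-like product in each singular-vector subspace, exactly as Jordan's lemma guarantees:

```python
import numpy as np

# A minimal sketch, assuming a toy block encoding: A = diag(xi_1, xi_2) sits in
# U = [[A, S], [S, -A]] with S = sqrt(I - A^2), and Pi = Pi~ projects onto the
# first block (so the xi_j are the singular values of A).
xis = np.array([0.3, 0.8])
A = np.diag(xis)
S = np.diag(np.sqrt(1 - xis**2))
U = np.block([[A, S], [S, -A]])           # real symmetric, so U^dag = U

def phase(phi):                           # e^{i phi (2 Pi - I)} for this Pi
    return np.diag(np.exp(1j * phi * np.array([1, 1, -1, -1])))

phis = [0.2, 1.3, -0.5, 0.9]              # arbitrary illustrative phases (n = 4)
UPhi = np.eye(4, dtype=complex)
for p1, p2 in zip(phis[0::2], phis[1::2]):
    UPhi = UPhi @ phase(p1) @ U.conj().T @ phase(p2) @ U

# Jordan's lemma in action: the protocol acts independently in each 2D subspace
# span{|psi_j>, |perp_j>}, reproducing a single-qubit QSP-like product per xi_j.
for j, xi in enumerate(xis):
    R = np.array([[xi, np.sqrt(1 - xi**2)], [np.sqrt(1 - xi**2), -xi]])
    r = np.eye(2, dtype=complex)
    for p1, p2 in zip(phis[0::2], phis[1::2]):
        step = np.diag(np.exp(1j * p1 * np.array([1, -1]))) @ R
        step = step @ np.diag(np.exp(1j * p2 * np.array([1, -1]))) @ R
        r = r @ step
    assert abs(UPhi[j, j] - r[0, 0]) < 1e-12
```

Here the $2\times 2$ reflection $R(\xi_j)$ plays the role of the blocks in the direct-sum decomposition above; the assertion confirms that the blocks multiply only with each other.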
Note that this is clearly a restrictive condition, though it should come as no surprise: no analogue of Jordan's lemma exists for interleaved products of more than two operators, as there exist cases in which no non-trivial subspaces would be preserved by such action. Here we choose to preserve the two-dimensional subspaces spanned by the singular vectors, but allow the singular values to differ. Evidently M-QSVT is no harder to construct than M-QSP, and it is left to the interested reader to port results in the latter to the context of the former; any instance of block-encodable commuting operators can be discussed in the terminology of M-QSP! \subsection{Ongoing work and open questions} Finally we outline major open avenues for M-QSP, aimed at theoretical physicists, theoretical computer scientists, those in industry, and pure mathematicians looking to switch subfields. M-QSP, like its single-variable analogue, is ripe for simultaneous analytical and numerical investigation; its utility is centered in the relative simplicity of its defining ansatz in conjunction with the rigorously characterized expressiveness of its generated transformations. This, together with its low resource-overhead, makes it a good candidate for continuing the legacy of QSVT in unifying the current pantheon of quantum algorithms, as well as realizing them on near-term devices. \begin{enumerate}[label=(\arabic*)] \item M-QSP opens the door to considering other alternating ansätze: variations include restrictions (e.g., strictly alternating protocols, protocols with symmetrized phases), as well as elaborations (e.g., arbitrarily many oracles). It is the intent of Sec.~\ref{sec:alg_geo_review} to introduce readers to the mathematical subfields which may in turn inform new, far larger classes and families of QSP- and QSVT-like algorithms. 
Some of these may not only make use of scalar factorization results, but also of the diverse families of operator-valued results in the theory of positive extensions \cite{mv_frt_04, op_val_frt_09}. \item M-QSP can be used even in the absence of a complete characterization of all circuits stemming from all reasonable ansätze. Indeed, the existence of even a single countably infinite family of embeddable polynomial transformations can yield proofs of quantum advantage. Thus empirical research into such families, even by those who have no wish to understand deep takes in algebraic geometry, is worthwhile. \item As stated, lifting M-QSP to M-QSVT gives a theory only of commuting block encoded operators; this is necessary in order to preserve the use of Jordan's lemma. Relaxing this constraint in general may be impossible, but investigating situations in which more complicated subspaces are preserved by the interleaving operators used in QSVT may be possible. Novel pure mathematical investigations into variants of Jordan's lemma have great utility for quantum information beyond QSP/QSVT \cite{marriott_watrous_05, fast_qma_jordan_09}, and are a great starting point for foundational work. In turn, these results better inform our understanding of control of subsystem dynamics (a fundamental question in quantum information) and a variety of other periodic circuit ansätze. \item Improved Fejér-Riesz theorems for non-stable factorizations. Known results in positive extensions consider stable factorizations, which in the single-variable case is no problem because such polynomials have discrete zeros. While firmly in the realm of pure mathematics, relaxing such theorems to consider both non-stable factorizations and nonnegative (rather than positive) multivariable trigonometric polynomials would greatly impact the theory of quantum algorithms.
Indeed, showing the FRT = QSP Conjecture (Conjecture~\ref{conj:m_qsp_factorability}) would provide useful examples of such extensions, and thus add to a critical series of results in algebraic geometry from a pragmatic computational perspective. \end{enumerate} As a final takeaway: QSP-like algorithms derive their utility from their complete characterization of control over subsystem dynamics of unitary evolutions. This is a \emph{bottom-up} approach that avoids the difficulties of circuit ansätze like VQEs or QNNs, whose properties are often difficult to treat rigorously, and which must primarily be investigated numerically or heuristically. M-QSP remains in the vein of QSP's original successful approach—it seeks to take advantage of the usual Feynmanian adage: \emph{there's plenty of room at the bottom}. \appendix \section{Extensions to the Fejér-Riesz lemma} \label{appx:m_qsp_proofs} The purpose of this appendix is threefold: (1) incorporate the single-variable Fejér-Riesz lemma into proofs of main theorems of single-variable QSP, (2) use modified and considerably more involved multivariable versions of this lemma in the proofs of major properties of M-QSP, and (3) generally acquaint the study of QSP-like ansätze with a relevant and well-understood subfield of algebraic geometry. This work is inspired by results which descend from the study of Hilbert's 17\textsuperscript{th} problem. Related statements are, with a little maneuvering, ubiquitous in quantum information, and may offer more insight to the interested researcher than the utilitarian implementation here. \subsection{Single-variable setting} We fulfill a promise to re-prove Theorem~\ref{thm:partial_qsp_properties} in the Laurent picture, and show that it relies entirely on a relatively simple and clean result, the aforementioned Fejér-Riesz lemma; this lemma provides a concrete description of an entire sub-class of positive polynomials. 
Almost all of the major results in the previous work on QSP outside of its classical subroutines can be reformulated to center on this lemma, and are arguably made cleaner and more compact by this reduction due to the removal of awkward branch cuts in the $x$ picture. \subsubsection{Related definitions and lemmas} \begin{lemma} \label{lemma:frt} Single-variable Fejér-Riesz lemma (an old result, recalled in \cite{mv_frt_04}). A \emph{single-variable} trigonometric polynomial \begin{equation} f(z) = \sum_{k = -n}^{n} f_{k}z^{k}, \end{equation} taking non-negative values on $\mathbb{T}$ can always be expressed as the modulus squared of a polynomial of the same degree, i.e., there exists $g(z) = g_{0} + \cdots + g_{n}z^n \in \mathbb{C}[z]$ such that \begin{equation} f(z) = |g(z)|^2 = g(z)g^*(z^{-1}), \end{equation} where the degree of $g$ is the same as that of $f$. In fact one can choose $g(z)$ to be \emph{outer}, i.e., $g(z) \neq 0$ for $|z| < 1$, and in the non-singular case, when $f(z) > 0$ for $|z| = 1$, one can choose $g(z)$ to be stable, namely $g(z) \neq 0$ for $|z| \leq 1$. Up to this choice and an overall phase the factorization is unique. A standard proof (among many) of this lemma is found with Theorem 1.1 of \cite{dumitrescu_monograph_07}, and relies only on the fundamental theorem of algebra. \end{lemma} \subsubsection{Alternative proof of Theorem \ref{thm:partial_qsp_properties}} We modify the statement of Theorem~\ref{thm:partial_qsp_properties} in a way that is amenable to application of the single-variable Fejér-Riesz lemma. This allows the proof of the multivariable analogue of this theorem in Appendix~\ref{appx:m_qsp_proofs} to be more familiar. Consider, as was given in Theorem~\ref{thm:partial_qsp_properties}, that the real polynomials $\tilde{P}$ and $\tilde{Q}$ satisfy the inequality $|\tilde{P}|^2 + (1 - x^2)|\tilde{Q}|^2 \leq 1$. 
Consider instead a renaming of the polynomial $\tilde{P}(x)$ by its Laurent picture version $P(z) \in \mathbb{C}[z]$ and the non-polynomial $\sqrt{1 - x^2}\tilde{Q}(x)$ by its Laurent picture version (now truly a Laurent polynomial) $Q(z) = -(i/2)(z - z^{-1})\tilde{Q}(z)$. In other words we consider the matrix \begin{equation} \label{eq:laurent_condition} \begin{pmatrix} P(z) + iR(z) & Q(z) + iS(z) \\ -Q(z) + iS(z) & P(z) - iR(z) \end{pmatrix}, \end{equation} where $P(z)$ and $Q(z)$ are known and real on $\mathbb{T}$, and where we want to determine if there exist $R(z)$ and $S(z)$ (again real on $\mathbb{T}$) satisfying conditions (1-2) of the theorem statement as well as the determinant condition \begin{equation} P(z)^2 + Q(z)^2 + R(z)^2 + S(z)^2 = 1. \end{equation} Note that we have used the known even $z \mapsto -z$ parity of each Laurent polynomial to fill out Eq.~\ref{eq:laurent_condition}, and that while each of these polynomials is either real or imaginary on $\mathbb{T}$, this does not imply the coefficients of the Laurent polynomial are either all real or all imaginary, only Hermitian as stated previously. Now define the nonnegative degree-$2n$ Laurent polynomial $U(z) = 1 - P(z)^2 - Q(z)^2$ and apply the Fejér-Riesz lemma (Lemma~\ref{lemma:frt}) to yield a stable \emph{real-coefficient} (not real on $\mathbb{T}$) polynomial of degree $2n$ of the form \begin{equation} T(z) = \sum_{k = 0}^{2n} t_k z^k, \; t_k \in \mathbb{R}, \end{equation} which must satisfy \begin{align} U(z) = R(z)^2 + S(z)^2 &= (R(z) + iS(z))(R(z) - iS(z))\nonumber\\ &= (T(z) z^{-n}) (z^{n} T^*(z^{-1}))\nonumber\\ &= (T^\prime(z)) (T^{\prime*}(z^{-1})), \end{align} where we have added dummy powers of $z$ to make the $T(z)$ guaranteed by Lemma~\ref{lemma:frt} into a degree-$n$ Laurent polynomial $T^\prime(z)$ with real coefficients.
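The root-splitting at the heart of Lemma~\ref{lemma:frt}, which the step above invokes, is easy to carry out numerically. The sketch below factors an assumed, illustrative positive trigonometric polynomial; it is not the $U(z)$ of the proof, just a minimal instance of the same mechanism:

```python
import numpy as np

# Root-splitting sketch of the Fejer-Riesz factorization, for the illustrative
# positive trigonometric polynomial f(z) = 5/2 + z + 1/z (so f(e^{it}) >= 1/2 > 0).
# Multiplying by z^n (n = 1) gives an ordinary polynomial whose roots pair up;
# keeping the roots outside the closed unit disk yields a stable g with f = |g|^2.
f0, f1 = 2.5, 1.0
roots = np.roots([f1, f0, f1])              # roots of z^2 + (5/2) z + 1: -1/2 and -2
outside = roots[np.abs(roots) > 1]          # stable choice: g(z) != 0 for |z| <= 1
g = np.poly(outside)                        # monic polynomial with those roots

# Fix the constant so |g(z)|^2 matches f on the unit circle (checked at z = 1)
scale = np.sqrt(f0 + 2 * f1) / abs(np.polyval(g, 1.0))
g = scale * g

for t in np.linspace(0.1, 6.0, 7):
    z = np.exp(1j * t)
    f_val = f0 + 2 * f1 * np.cos(t)         # f(e^{it}) = 5/2 + 2 cos t
    assert abs(abs(np.polyval(g, z))**2 - f_val) < 1e-9
```

Since $g$ has real coefficients, $g^*(z^{-1}) = \overline{g(z)}$ on $\mathbb{T}$, matching the factorization convention of the lemma; the root kept outside the disk makes this the stable (nonsingular-case) choice.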
Consequently the symmetric and antisymmetric components of this polynomial with respect to $z \mapsto 1/z$ can be matched with $R(z)$ and $S(z)$ respectively and unambiguously (the latter absorbing the factor of $i$). This preserves the desired $z \mapsto -z$ parity of each element, condition (2), the desired degree constraint, condition (1), and finally the determinant constraint, condition (3), which is what we desired to show. Returning to the $x$ picture and pulling out the necessary factor of $-(i/2)(z - z^{-1})$ from $Q(z)$ (possible because the fundamental theorem of algebra guarantees roots at $z = \pm 1$), we recover the usual $\sqrt{1 - x^2}$ term. Thus, without much extra work, the Fejér-Riesz lemma comes across as the only non-trivial mechanism underlying the reverse problem ($\tilde{P}, \tilde{Q} \mapsto \Phi$) of QSP. \subsection{Multivariable setting} We provide proofs of the constitutive theorems of M-QSP. Beyond reference to some major (and complexly derived) theorems in functional analysis, this subsection is self-contained, and aimed toward a simplified analytic presentation. Where indicated we include explicit reference to conjectures and related results as depicted in Fig.~\ref{fig:conjecture_flow_diagram}. \subsubsection{Related definitions and lemmas} In the following proofs we consider matrices whose rows and columns are indexed by subsets of $\mathbb{Z}^2$ as described in \cite{mv_frt_04}. For instance, if $U = \{(0, 0), (0, 1), (1, 0)\}$ and $V = \{(2, 1), (2, 2), (2, 3)\}$ then we denote by $C = (c_{u - v})_{u \in U, v \in V}$ the $U \times V$ (i.e., $3\times 3$) matrix \begin{equation} C = \begin{pmatrix} c_{-2,-1} & c_{-2,-2} & c_{-2,-3}\\ c_{-2,0} & c_{-2,-1} & c_{-2,-2}\\ c_{-1,-1} & c_{-1,-2} & c_{-1,-3} \end{pmatrix}, \end{equation} which evidently indexes elements in the $i, j$ (row and column) position by taking the difference of the $i$-th element of $U$ and the $j$-th element of $V$.
We will usually consider the set $\Lambda = \{0, 1, \cdots, n\}\times\{0, 1, \cdots, m\}$ and its use in generating an $(n + 1)\times(n + 1)$ block Toeplitz matrix whose blocks are themselves $(m + 1)\times(m + 1)$ Toeplitz matrices. Such a matrix results from the multi-indexing procedure discussed previously; e.g., in the single-indexed case, given $c_{0}, c_{1}, \cdots, c_{n}$ one can define $C = (c_{i-j})$ for $(i, j) \in \{0, 1, \cdots, n\}^2$, which has the form \begin{equation} C = \begin{pmatrix} c_{0} & c_{1}^* & \cdots & c_{n}^*\\ c_{1} & c_{0} & \ddots & \vdots\\ \vdots & \ddots & \ddots & c_{1}^*\\ c_{n} & \cdots & c_{1} & c_{0} \end{pmatrix}, \end{equation} where the $c_{k}$ we consider will have the additional Hermitian property $c_{-k} = c_{k}^*$. \begin{definition} Doubly-indexed Toeplitz matrix of Fourier components. Suppose a function $f(a, b): \mathbb{T}^2 \rightarrow \mathbb{C}$ has non-zero Fourier components $\hat{f}(k, l) = c_{kl}$ for $(k, l) \in \{0,1, \cdots, n\}\times\{0, 1, \cdots, m\}$. Then the doubly-indexed Toeplitz matrix $\Gamma$ corresponding to these Fourier components has the form \begin{equation} \label{eq:gamma_toeplitz} \Gamma = \begin{pmatrix} C_{0} & \cdots & C_{-n}\\ \vdots & \ddots & \vdots\\ C_{n} & \cdots & C_{0} \end{pmatrix}, \quad C_j = \begin{pmatrix} c_{j0} & \cdots & c_{j,-m}\\ \vdots & \ddots & \vdots \\ c_{j,m} & \cdots & c_{j0} \end{pmatrix}, \end{equation} for $j \in \{-n, \cdots, n\}$, and where $c_{-k, -l} = c_{kl}^*$. Note this matrix is block-Toeplitz as described previously, and has dimension $(n + 1)(m + 1)\times(n + 1)(m + 1)$. This strange-looking construction is standard as discussed in \cite{mv_frt_04}, and is motivated by single-indexed versions appearing in the proofs of positive extensions and Schur complements in single-variable settings.
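For concreteness, the construction of $\Gamma$ can be sketched programmatically (the Fourier components here are randomly generated placeholders subject only to the Hermitian symmetry):

```python
import numpy as np

# Sketch of the doubly-indexed Toeplitz matrix of Eq. (gamma_toeplitz), built from
# randomly chosen (hypothetical) Fourier components obeying c_{-k,-l} = conj(c_{kl}).
n, m = 2, 1
rng = np.random.default_rng(7)
c = {(0, 0): rng.normal() + 0j}                 # Hermitian symmetry forces c_{00} real
for k in range(-n, n + 1):
    for l in range(-m, m + 1):
        if (k, l) not in c:
            c[(k, l)] = rng.normal() + 1j * rng.normal()
            c[(-k, -l)] = np.conj(c[(k, l)])

# Rows/columns are indexed by Lambda = {0..n} x {0..m}; entry (r, s) is c_{u_r - u_s}.
idx = [(j, k) for j in range(n + 1) for k in range(m + 1)]
dim = (n + 1) * (m + 1)
Gamma = np.zeros((dim, dim), dtype=complex)
for r, (j1, k1) in enumerate(idx):
    for s, (j2, k2) in enumerate(idx):
        Gamma[r, s] = c[(j1 - j2, k1 - k2)]

assert Gamma.shape == (dim, dim)                # (n+1)(m+1) x (n+1)(m+1)
assert np.allclose(Gamma, Gamma.conj().T)       # Hermitian, by the symmetry of c
```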
\end{definition} \subsubsection{Proof of Theorem~\ref{thm:major_m_qsp_properties}} The proof of Theorem~\ref{thm:major_m_qsp_properties} will proceed similarly to its single-variable analogue, chiefly by induction. First, we introduce a related lemma to be used in the backwards ($\Leftarrow$) direction of the proof. This lemma underlies the need for Conjecture~\ref{conj:m_qsp_factorability}. \begin{lemma} \label{lemma:polynomial_square_ambiguity} Let $p, q$ be single-variable Laurent polynomials in $\mathbb{C}[x, x^{-1}]$ which satisfy the relation \begin{equation} \label{eq:laurent_square_relation} |p(x)|^2 = |q(x)|^2, \end{equation} where $|p(x)|^2 = p(x)p^{*}(x)$ is the modulus squared of $p$ assuming $x$ real, and analogously for $q$. Then $q$ must be equal to $p$ up to exactly (1) some overall phase $e^{i\phi}$ and (2) complex conjugation of some subset of its roots. \begin{proof} Condition (1) follows from the invariance of Eq.~\ref{eq:laurent_square_relation} up to an overall phase. Assume without loss of generality, then, that the leading coefficients of $p$ and $q$ are identical. Then one can express $|p(x)|^2 = |q(x)|^2$ in terms of its decomposition according to the fundamental theorem of algebra (assuming $p, q$ have degree $n$) \begin{equation} |p(x)|^2 = |q(x)|^2 = c x^{-2n}\prod_{k = 0}^{n} (x - a_k)(x - a_k^*), \end{equation} where the $a_k \in \mathbb{C}$ are not necessarily distinct. We see that $p$ and $q$ can thus be chosen to preserve this relation up to any of the $2^n$ (possibly non-distinct) choices between $a_k$ and $a_k^*$ as a root. This is precisely condition (2), and is the only other freedom without knowing other properties of $p, q$. \end{proof} \end{lemma} We begin the proof of Theorem~\ref{thm:major_m_qsp_properties} in earnest now, showing both directions, the second of which will depend on Conjecture~\ref{conj:m_qsp_factorability}. 
($\Rightarrow$) The forward direction is relatively easy, taking $P = e^{i\phi_0}$ and $Q = 0$, which clearly satisfy properties (1-4). We can show by induction that properties (1-3) are preserved, while property (4) follows because the overall operator is always unitary. Assume that for some length-$(n -1)$ protocol the inductive hypothesis holds and the unitary has the form \begin{equation} U_{s, \Phi}(a, b) = \begin{pmatrix} P & Q \\ -Q^* & P^* \end{pmatrix}, \end{equation} where $P, Q$ satisfy (1-4) in the Laurent picture. Without loss of generality we can apply $A(a)e^{i\phi_n\sigma_z}$ to yield a new M-QSP protocol with $s^\prime = s \cup \{1\}$ and $\Phi^\prime = \Phi \cup \{\phi_n\}$; this unitary $U_{s^\prime, \Phi^\prime}$ has the form \begin{equation} \label{eq:forward_m_qsp_iterate} \frac{1}{2}\begin{pmatrix} e^{i\phi_n}\left[(a + a^{-1})P + (a - a^{-1})Q\right] & e^{-i\phi_n}\left[(a - a^{-1})P + (a + a^{-1})Q\right]\\ * & * \end{pmatrix}, \end{equation} whose elements are still Laurent polynomials satisfying: degrees that match the desired bounds (condition 1); parity under $(a, b) \mapsto (a^{-1}, b^{-1})$ inherited from that of $P, Q$ (condition 2); parity under $a \mapsto -a$ and $b \mapsto -b$ inherited from that of $P, Q$ (condition 3); and the determinant condition (condition 4). These conditions clearly also hold if the operator appended was $B(b)$. ($\Leftarrow$) The reverse direction of this theorem is more involved, but again can be inspired by the single-variable case and its proof in \cite{gslw_19}, relying on a few extra lemmas to make the jump to the multivariable setting. We also make use of Conjecture~\ref{conj:m_qsp_factorability}, and indicate clearly where this is done. First we consider the trivial case in which the degree of $P$ is zero. Due to the symmetries specified, this must mean $|P(a = b = 1)| = 1$ and thus $Q(a = b = 1) = 0$. 
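The inductive structure of the forward direction can be sanity-checked numerically. The sketch below is our own, under an explicit assumption: we take the iterate $A(a) = \tfrac{1}{2}\begin{psmallmatrix} a + a^{-1} & a - a^{-1} \\ a - a^{-1} & a + a^{-1} \end{psmallmatrix}$ (read off from the way $A(a)$ acts in Eq.~\ref{eq:forward_m_qsp_iterate}), with $B(b)$ of the same form in $b$, and verify that a random product of iterates and $\sigma_z$-phases is unitary on $|a| = |b| = 1$, which is exactly what underwrites the determinant condition (4).

```python
import cmath, random

# Assumed iterate form, read off from the forward M-QSP iterate:
# A(a) = (1/2) [[a + 1/a, a - 1/a], [a - 1/a, a + 1/a]]; B(b) analogous.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def iterate(v):
    """A(a) or B(b) evaluated at a point v on the unit circle."""
    p, m = (v + 1 / v) / 2, (v - 1 / v) / 2
    return [[p, m], [m, p]]

def phase(phi):
    return [[cmath.exp(1j * phi), 0], [0, cmath.exp(-1j * phi)]]

a = cmath.exp(1j * 0.7)
b = cmath.exp(1j * 1.9)
s = [random.randint(0, 1) for _ in range(6)]  # which iterate at each step
U = phase(random.uniform(-3, 3))
for sk in s:
    U = mat_mul(U, mat_mul(iterate(a if sk else b), phase(random.uniform(-3, 3))))

# Unitarity: U U^dagger = I on the torus, hence condition (4) holds.
Ud = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]
I2 = mat_mul(U, Ud)
assert abs(I2[0][0] - 1) < 1e-9 and abs(I2[0][1]) < 1e-9
assert abs(I2[1][0]) < 1e-9 and abs(I2[1][1] - 1) < 1e-9
```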
A simple solution to this is $\Phi = \{\phi_0, \pi/2, -\pi/2, \cdots, \pi/2, -\pi/2\}$ in $\mathbb{R}^{n + 1}$ and $s = 00\cdots0$ of length $n$, which satisfies these conditions. This is the base case of our induction. The key step in the inductive argument involves inspection of the determinant, condition (4) in Theorem~\ref{thm:major_m_qsp_properties}, namely that for all $a, b$ such that $|a| = |b| = 1$ the relation \begin{equation} P(a, b)P^*(a^{-1}, b^{-1}) + Q(a, b)Q^*(a^{-1}, b^{-1}) = 1 \end{equation} holds identically. Our goal is to determine whether the application of an iterate of the form $e^{-i\phi_n\sigma_z}A^{\dag}(a)^{s_n}B^{\dag}(b)^{1 - s_n}$ can reduce the degrees of the embedded polynomials for any choice of $\phi_n$ and $s_n$. Inspection of this equation yields something very similar to Eq.~\ref{eq:forward_m_qsp_iterate}, namely a map \begin{equation} \begin{pmatrix} P & Q\\ -Q^* & P^* \end{pmatrix} \mapsto \begin{pmatrix} P^{\prime} & Q^{\prime}\\ -Q^{\prime*} & P^{\prime*} \end{pmatrix}, \end{equation} where $P^\prime, Q^\prime$ (the embedded polynomials after ``peeling'' off an M-QSP iterate $A(a)$ or $B(b)$) are of smaller degree in $a$ if $s_n = 1$ and in $b$ if $s_n = 0$, and where the explicit form of this circuit is, without loss of generality choosing $s_n = 1$ for now, \begin{equation} \label{eq:backward_m_qsp_iterate} \frac{1}{2} \begin{pmatrix} (a + a^{-1})e^{-i\phi_n}P + (a - a^{-1})e^{i\phi_n}Q & -(a - a^{-1})e^{-i\phi_n}P - (a + a^{-1})e^{i\phi_n}Q\\ * & * \end{pmatrix}. \end{equation} The condition under which the top-left and top-right embedded polynomials are of lower degree than $P, Q$ is precisely that the coefficients of the maximal degree in $a$ of $P$ and $Q$ differ by exactly an overall phase. 
Concretely, lowering the degrees of the embedded polynomials in either $a$ or $b$ (corresponding to $s_n = 1, 0$ respectively) requires that either of the two pairs of polynomial coefficients of the highest degrees of $a, b$ appearing in $P, Q$ respectively, namely \begin{align} &P_{d_A}(b) = \sum_{k = -d_B}^{d_B} P_{d_A, k} b^{k}, \quad Q_{d_A}(b) = \sum_{k = -d_B}^{d_B} Q_{d_A, k} b^{k}\label{eq:conj_cond_1}\\ &P_{d_B}(a) = \sum_{k = -d_A}^{d_A} P_{k,d_B} a^{k}, \quad Q_{d_B}(a) = \sum_{k = -d_A}^{d_A} Q_{k, d_B} a^{k},\label{eq:conj_cond_2} \end{align} relate by an overall phase. Note that we discuss only the coefficient of the largest positive degree $d_A, d_B$ of $a$ or $b$; by the symmetries of these trigonometric Laurent polynomials, the coefficient of the largest negative degree in either variable will also vanish identically if this condition is satisfied. This condition is precisely the statement of Conjecture~\ref{conj:m_qsp_factorability}. Note that this condition also satisfies the requirement that the degrees of the bottom left and bottom right embedded polynomials are decreased, due to their trivial relation (complex conjugation) to the top left and top right embedded polynomials in Eq.~\ref{eq:backward_m_qsp_iterate}. Moreover, because this condition is a relation of coefficients, and these coefficients uniquely determine the roots of the corresponding single-variable polynomial coefficients of $P_{d_A}(b), Q_{d_A}(b)$ and $P_{d_B}(a), Q_{d_B}(a)$, Conjecture~\ref{conj:m_qsp_factorability} is both necessary and sufficient. If either of the conditions discussed in Conjecture~\ref{conj:m_qsp_factorability} holds, then there exists some choice of $\phi_n$ and $s_n$ such that the resulting lower-degree polynomials $P^\prime, Q^\prime$ still satisfy conditions (1-3) (and vacuously 4) from the statement of Theorem~\ref{thm:major_m_qsp_properties}. 
Moreover, if Conjecture~\ref{conj:m_qsp_factorability} holds, then this unitary must itself satisfy the inductive hypothesis, and the same process can be repeated to successively lower the degree in either $a$ or $b$ until the base case is satisfied and the result is shown. Note that it is sufficient to be able to lower the degree in $a$ or $b$: as soon as one is reduced to the single-variable setting, the standard QSP theorems kick in, satisfying the conjecture vacuously. \begin{remark} Note that this argument leads easily to a proof of Corollary~\ref{cor:m_qsp_phase_readoff}, namely that any $P, Q$ arising from a unitary matrix built according to Definition~\ref{def:m_qsp} can be used, without explicit information about their constitutive $s, \Phi$, to efficiently determine an equivalent $s^\prime, \Phi^\prime$ by the classical method given above. To show this one needs to prove that the cases in which $s_n = 0$ and $s_n = 1$ are both possible (namely when both pairs of equations in Equations~\ref{eq:conj_cond_1} and \ref{eq:conj_cond_2} are satisfied) correspond only to $\phi \in \{-\pi, 0, \pi\}$, for phases restricted to $[-\pi, \pi]$. To show this we assume access to a description of the polynomials constituting an M-QSP protocol which takes the form \begin{equation} \label{eq:s_ambiguity} U_{\Phi, s} = U_{\Phi^\prime, s} A(a) e^{i\phi\sigma_z}B(b), \end{equation} without loss of generality choosing the $B(b)$ iterate to have been applied last, and for the final M-QSP phase to have been identically zero implicitly. If we show that the only case in which the degree of the polynomials embedded in $U_{\Phi, s}$ can be lowered in either the variable $a$ or $b$ is when $\phi \in \{-\pi, 0, \pi\}$ (and thus one can commute $A(a)$ and $B(b)$), then one can use the phase read-off procedure discussed in the main proof of Theorem~\ref{thm:major_m_qsp_properties}. 
Proving this is easy; we simply write out the relevant matrix elements of Eq.~\ref{eq:s_ambiguity} \begin{align} P &= \frac{1}{4}P^\prime \left[ e^{i\phi}\left(\frac{1}{ab} + \frac{a}{b} + \frac{b}{a} + ab\right) + e^{-i\phi}\left(\frac{1}{ab} - \frac{a}{b} - \frac{b}{a} + ab\right) \right]\nonumber\\ &+ \frac{1}{4}Q^\prime \left[ e^{i\phi}\left(-\frac{1}{ab} + \frac{a}{b} - \frac{b}{a} + ab\right) + e^{-i\phi}\left(-\frac{1}{ab} - \frac{a}{b} + \frac{b}{a} + ab\right) \right]\\ Q &= \frac{1}{4}P^\prime \left[ e^{i\phi}\left(-\frac{1}{ab} - \frac{a}{b} + \frac{b}{a} + ab\right) + e^{-i\phi}\left(-\frac{1}{ab} + \frac{a}{b} - \frac{b}{a} + ab\right) \right]\nonumber\\ &+ \frac{1}{4}Q^\prime \left[ e^{i\phi}\left(\frac{1}{ab} - \frac{a}{b} - \frac{b}{a} + ab\right) + e^{-i\phi}\left(\frac{1}{ab} + \frac{a}{b} + \frac{b}{a} + ab\right) \right]. \end{align} One can then look at the conditions under which the leading order single-variable coefficients of these two polynomials are identical up to an overall phase, in which case one can pull off an M-QSP iterate. 
For peeling a $B(b)$ or $A(a)$ iterate from this circuit we require that both pairs of leading coefficients differ by an overall phase; we write out these two pairs of equations: \begin{align} &\frac{1}{4}\tilde{P} \left[ \frac{1}{b}\left(e^{i\phi} - e^{-i\phi}\right) + b\left(e^{i\phi} + e^{-i\phi}\right) \right] + \frac{1}{4}\tilde{Q} \left[ \frac{1}{b}\left(e^{i\phi} - e^{-i\phi}\right) + b\left(e^{i\phi} + e^{-i\phi}\right) \right],\label{eq:a_iter_1}\\ &\frac{1}{4}\tilde{P} \left[ -\frac{1}{b}\left(e^{i\phi} - e^{-i\phi}\right) + b\left(e^{i\phi} + e^{-i\phi}\right) \right] + \frac{1}{4}\tilde{Q} \left[ -\frac{1}{b}\left(e^{i\phi} - e^{-i\phi}\right) + b\left(e^{i\phi} + e^{-i\phi}\right) \right],\label{eq:a_iter_2} \end{align} for peeling off an $A(a)$ iterate (where $\tilde{P}, \tilde{Q}$ are the single-variable, in $b$, coefficients of the maximal degree terms of $P, Q$ in $a$), and likewise for peeling off a $B(b)$ iterate \begin{align} &\frac{1}{4}\tilde{P} \left[ \frac{1}{a}\left(e^{i\phi} - e^{-i\phi}\right) + a\left(e^{i\phi} + e^{-i\phi}\right) \right] + \frac{1}{4}\tilde{Q} \left[ -\frac{1}{a}\left(e^{i\phi} - e^{-i\phi}\right) + a\left(e^{i\phi} + e^{-i\phi}\right) \right],\label{eq:b_iter_1}\\ &\frac{1}{4}\tilde{P} \left[ \frac{1}{a}\left(e^{i\phi} - e^{-i\phi}\right) + a\left(e^{i\phi} + e^{-i\phi}\right) \right] + \frac{1}{4}\tilde{Q} \left[ -\frac{1}{a}\left(e^{i\phi} - e^{-i\phi}\right) + a\left(e^{i\phi} + e^{-i\phi}\right) \right].\label{eq:b_iter_2} \end{align} In the case we're interested in, both of these pairs of polynomials need to differ by only an overall phase. Evidently this holds for Equations~\ref{eq:b_iter_1} and \ref{eq:b_iter_2} because they are identical. 
For Equations~\ref{eq:a_iter_1} and \ref{eq:a_iter_2}, the positive and negative powers of $b$ between the pair of equations now have a relative minus sign; equating these requires only the simple condition \begin{equation} e^{i\phi} - e^{-i\phi} = 2i\sin\phi = 0, \end{equation} which in turn means that $\phi \in \{-\pi, 0, \pi\}$ as was desired. Note that the solution seemingly implied by taking $e^{i\phi} + e^{-i\phi} = 0$ is not valid, as we assume the rightmost implicit QSP phase is zero, and thus the phase relation between the pairs of equations must be trivial. This completes the proof of Corollary~\ref{cor:m_qsp_phase_readoff}, and indicates that there is no ambiguity in reading off $s$ from Definition~\ref{def:m_qsp} if the corresponding unitary was required to come from a product of iterates. \end{remark} This completes the proof of Theorem~\ref{thm:major_m_qsp_properties} under the assumption of Conjecture~\ref{conj:m_qsp_factorability}. Note that resolution of this conjecture, or \emph{any} additional possibly non-necessary conditions under which the property defined in Conjecture~\ref{conj:m_qsp_factorability} holds \emph{across the inductive step}, will allow the same proof as above to proceed. Moreover, we are able to show, as stated in Corollary~\ref{cor:m_qsp_phase_readoff}, that there is no fundamental difficulty in reading off M-QSP phases under the assurance that the corresponding unitary was built according to the definition of M-QSP; this corollary even extends to an arbitrary number of variables, though the corresponding Conjecture~\ref{conj:m_qsp_factorability} would be even more difficult to show. Nevertheless, the proof of this theorem presents the succinctly stated, minimal gauntlets that any attempt at a theory of M-QSP must address. 
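The $\sin\phi = 0$ condition has a simple operational reading that can be checked numerically: only when $e^{i\phi\sigma_z} = \pm I$ can the $A(a)$ and $B(b)$ iterates be commuted past one another. The sketch below is ours, reusing the same assumed x-rotation-style iterate forms as earlier in this appendix (these forms are an assumption, not taken from the paper's code).

```python
import cmath

# Check: A(a) Z(phi) B(b) = B(b) Z(phi) A(a) iff sin(phi) = 0, matching the
# read-off ambiguity condition phi in {-pi, 0, pi}.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def iterate(v):
    p, m = (v + 1 / v) / 2, (v - 1 / v) / 2
    return [[p, m], [m, p]]

def zphase(phi):
    return [[cmath.exp(1j * phi), 0], [0, cmath.exp(-1j * phi)]]

def diff(X, Y):
    return max(abs(X[i][j] - Y[i][j]) for i in range(2) for j in range(2))

A, B = iterate(cmath.exp(0.4j)), iterate(cmath.exp(1.3j))

# For phi in {-pi, 0, pi}, the order of A and B is ambiguous ...
for phi in (-cmath.pi, 0.0, cmath.pi):
    Z = zphase(phi)
    assert diff(mat_mul(A, mat_mul(Z, B)), mat_mul(B, mat_mul(Z, A))) < 1e-9

# ... while a generic phase breaks the commutation.
Z = zphase(0.7)
assert diff(mat_mul(A, mat_mul(Z, B)), mat_mul(B, mat_mul(Z, A))) > 1e-3
```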
\subsubsection{Proof of Theorem~\ref{thm:partial_m_qsp_properties}} \begin{theorem} \label{thm:m_frt} The multivariable Fejér-Riesz theorem (Theorem 1.1.3, equivalently generalized in Theorem 3.3.1, of \cite{mv_frt_04}). Suppose that the multivariable trigonometric Laurent polynomial \begin{equation} f(z, w) = \sum_{k = -n}^{n}\sum_{l = -m}^{m} f_{kl} z^k w^l \end{equation} is strictly positive for all $|z| = |w| = 1$. Then there exists a \emph{stable} (Def.~\ref{def:stable_polynomials}) multivariable polynomial $p(z, w)$ such that $f(z, w) = |p(z, w)|^2$ with the following form: \begin{equation} p(z, w) = \sum_{k = 0}^{n}\sum_{l = 0}^{m} p_{kl}z^k w^l, \end{equation} where stability means $p(z, w) \neq 0$ for $|z|, |w| \leq 1$, if and only if $\Gamma$ built from Fourier coefficients $c_{kl} = \widehat{(1/f)}(k, l)$ of the reciprocal of $f$ (i.e., the matrix in Eq.~\ref{eq:low_rank_condition}, which is a doubly-indexed Toeplitz matrix as defined in Eq.~\ref{eq:gamma_toeplitz}) satisfies the following condition: the $(n + 1)m\times (m + 1)n$ submatrix of $\Gamma$ obtained by removing scalar rows (i.e., rows of the overall matrix) $1 + j(m + 1)$ for $j \in \{0, \cdots, n\}$ and scalar columns $1, 2, \cdots, m + 1$ has rank $mn$. For ease of reference we note that this low-rank condition is the same as Eq.~\ref{eq:low_rank_condition}, namely \begin{equation} \left[(c_{u - v})_{u, v \in \Lambda\setminus\{0, 0\}}\right]^{-1}_{\subalign{&\{1, 2, \cdots, m\}\times\{0\} \\ &\{0\}\times\{1, 2, \cdots, n - m\}}} = 0, \end{equation} where we note that $u, v$ don't run over the entirety of $\Lambda = \{0,1,\cdots, n\}\times\{0,1,\cdots, m\}$ for this matrix, and thus the resultant matrix is of dimension $[(n + 1)(m + 1)-1]\times[(n + 1)(m + 1)-1]$. Additionally, note that the determined stable $p$ is unique up to an overall phase. For concrete computations of this matrix, see examples in \cite{mv_frt_04}. 
\end{theorem} The proof of Theorem~\ref{thm:partial_m_qsp_properties} proceeds similarly to its corresponding single-variable version, save the addition of a much stronger condition on a particular matrix relating to the specified \emph{strictly positive} real-valued trigonometric polynomial one wishes to embed using M-QSP. We simply define the additional objects necessary and reduce the statement of Theorem~\ref{thm:partial_m_qsp_properties} to a theorem in algebraic geometry: Theorem~\ref{thm:m_frt}. Additionally, we supply a brief interpretation of the methods used to prove Theorem~\ref{thm:m_frt} (the full exposition of which runs about thirty pages in \cite{mv_frt_04}). We begin by stating the desired result of the theorem, introducing relevant variable names. Consider a unitary matrix of the following form \begin{equation} \label{eq:m_qsp_decomposed_form} \begin{pmatrix} P + iR & Q + iS\\ -Q + iS & P - iR \end{pmatrix}, \end{equation} where $P, Q, R, S$, polynomials in $a, b$, take real values on $\mathbb{T}^2$. Note that the $P, Q$ here are distinct from those defined in Theorem~\ref{thm:major_m_qsp_properties}, but that all unitaries defined in Theorem~\ref{thm:major_m_qsp_properties} may be suitably decomposed into a unitary of the form given in Eq.~\ref{eq:m_qsp_decomposed_form} by splitting the real and imaginary parts of the embedded polynomials on $\mathbb{T}^2$. The remaining work is to note that the unitarity of the matrix in Eq.~\ref{eq:m_qsp_decomposed_form} requires that the following relation holds \begin{equation} \label{eq:m_qsp_det_factoring} 1 - (P^2 + Q^2) = (R + iS)(R - iS). \end{equation} Consequently, as in the single-variable case in Theorem~\ref{thm:partial_qsp_properties}, the existence of a matrix completion (i.e., a corresponding $R, S$) for a choice of $P, Q$ depends on the ability to factor $1 - (P^2 + Q^2)$ into a single square. 
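The completion identity in Eq.~\ref{eq:m_qsp_det_factoring} can be checked pointwise on the torus for a concrete unitary. The sketch below is an illustration of ours, not the paper's code: it builds $U = A(a)\,e^{i\phi\sigma_z}\,B(b)$ from the x-rotation-style iterate forms assumed earlier in this appendix, splits the top row into $P + iR$ and $Q + iS$, and verifies $1 - (P^2 + Q^2) = (R + iS)(R - iS)$.

```python
import cmath

# Pointwise check of 1 - (P^2 + Q^2) = (R + iS)(R - iS) for a small
# determinant-one unitary U = A(a) Z(phi) B(b); the iterate forms are our
# assumption, not taken from the paper.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def iterate(v):
    p, m = (v + 1 / v) / 2, (v - 1 / v) / 2
    return [[p, m], [m, p]]

phi = 0.5
Z = [[cmath.exp(1j * phi), 0], [0, cmath.exp(-1j * phi)]]

for ta, tb in [(0.3, 1.1), (2.0, -0.7), (0.0, 3.0)]:
    a, b = cmath.exp(1j * ta), cmath.exp(1j * tb)
    U = mat_mul(iterate(a), mat_mul(Z, iterate(b)))
    P, R = U[0][0].real, U[0][0].imag   # top-left entry is P + iR
    Q, S = U[0][1].real, U[0][1].imag   # top-right entry is Q + iS
    lhs = 1 - (P * P + Q * Q)
    rhs = (R + 1j * S) * (R - 1j * S)
    assert abs(lhs - rhs.real) < 1e-12 and abs(rhs.imag) < 1e-12
```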
In the single-variable case the Fejér-Riesz theorem permitted this whenever $1 - (P^2 + Q^2)$ was non-negative. In the multivariable case we require that this quantity is positive, plus an additional series of constraints discussed below. The application of the multivariable Fejér-Riesz theorem (Theorem~\ref{thm:m_frt}) is clear; if the $\Gamma$ matrix corresponding to the Fourier coefficients of the inverse of $F = 1 - (P^2 + Q^2)$ satisfies the desired low-rank condition, then there exists a multivariable polynomial $T$ such that $F = |T|^2$, which can be split into its real-valued ($R$) and imaginary-valued ($S$) components on $\mathbb{T}^2$. Note that because these Fourier components are real and $P, Q$ have definite parity, the Fourier components of the inverse are real as well, and thus $R$ and $iS$ have, as coefficients, purely real values. This means that the parity of $R$ and $S$ (the latter of which has purely imaginary coefficients) must be definite and opposite under inversion symmetry. All that is left is to ensure that these polynomials have the proper parity under negation symmetry $(a , b) \mapsto (-a, -b)$. But this holds because $1 - (P^2 + Q^2)$ consists of powers of $a, b$ which are either $0$ or $2$ modulo $4$, corresponding uniquely to definite parity (odd, even under negation of both variables) for $T$. The final condition of Theorem~\ref{thm:partial_m_qsp_properties} (unitarity) is trivially satisfied by our satisfaction of Eq.~\ref{eq:m_qsp_det_factoring}. Finally, note that while the Fejér-Riesz theorem specifies decomposition into polynomials (i.e., sums of monomials with non-negative degrees, not Laurent polynomials), the same shifting argument can be used as in the single-variable case, namely noting that the following two products are equivalent, and thus we can make the substitution $T \mapsto T^\prime$ without worry. 
\begin{align} TT^* &= \left[\sum_{j = 0}^{2(n - s)}\sum_{k = 0}^{2s} t_{jk} a^j b^k\right] \left[\sum_{j = 0}^{2(n - s)}\sum_{k = 0}^{2s} t^*_{jk} a^{-j} b^{-k}\right],\\ T^\prime T^{\prime *} &= \left[\sum_{j = -(n - s)}^{(n - s)}\sum_{k = -s}^{s} t_{jk} a^j b^k\right] \left[\sum_{j = -(n - s)}^{(n - s)}\sum_{k = -s}^{s} t^*_{jk} a^{-j} b^{-k}\right]. \end{align} This completes the proof of Theorem~\ref{thm:partial_m_qsp_properties}, which at its core is far simpler than that of Theorem~\ref{thm:major_m_qsp_properties}, beyond its reliance on the powerful albeit specific Theorem~\ref{thm:m_frt}. Moreover, it does not depend on Conjecture~\ref{conj:m_qsp_factorability}. It is purely a theorem about matrix completions (or positive extensions), where the ability to determine M-QSP phases for such a completion depends either on the validity of Conjecture~\ref{conj:m_qsp_factorability}, or on one's choice of $P, Q$ with $1 - (P^2 + Q^2) > 0$ being judicious and happening to admit a \emph{stable factorization}, whose existence this theorem can efficiently verify and compute. If such a factorization exists, then Corollary~\ref{cor:m_qsp_phase_readoff} permits easy read-off of the M-QSP phases. \end{document}
Perpetuity

By Julia Kagan. Reviewed by Marguerita Cheng, CFP®. Fact checked by Ariel Courage.

What Is Perpetuity?

A perpetuity is a security that pays for an infinite amount of time. In finance, perpetuity is a constant stream of identical cash flows with no end. The concept of perpetuity is also used in several financial theories, such as in the dividend discount model (DDM).

Perpetuity, in finance, refers to a security that pays a never-ending cash stream. The present value of a perpetuity is determined by dividing cash flows by the discount rate. Examples include annuities and British consols (which were discontinued in 2015).

Understanding Perpetuity

An annuity is a stream of cash flows. A perpetuity is a type of annuity that lasts forever, into perpetuity. The stream of cash flows continues for an infinite amount of time. 
In finance, a person uses the perpetuity calculation in valuation methodologies to find the present value of a company's cash flows when discounted back at a certain rate. An example of a financial instrument with perpetual cash flows is the British-issued bonds known as consols, which the Bank of England phased out in 2015. By purchasing a consol from the British government, the bondholder was entitled to receive annual interest payments forever. Although it may seem a bit illogical, an infinite series of cash flows can have a finite present value. Because of the time value of money, each payment is only a fraction of the last.

Specifically, the perpetuity formula determines the amount of cash flows in the terminal year of operation. In valuation, a company is said to be a going concern, meaning that it goes on forever. For this reason, the terminal year is a perpetuity, and analysts use the perpetuity formula to find its value.

Perpetuity Present Value Formula

The formula to calculate the present value of a perpetuity, or security with perpetual cash flows, is as follows:

\begin{aligned} &\text{PV} = \frac { C }{ ( 1 + r ) ^ 1 } + \frac { C }{ ( 1 + r ) ^ 2 } + \frac { C }{ ( 1 + r ) ^ 3 } \cdots = \frac { C }{ r } \\ &\textbf{where:} \\ &\text{PV} = \text{present value} \\ &C = \text{cash flow} \\ &r = \text{discount rate} \\ \end{aligned}

The basic method used to calculate a perpetuity is to divide cash flows by some discount rate. The formula used to calculate the terminal value in a stream of cash flows for valuation purposes is a bit more complicated. It is the estimate of cash flows in year 10 of the company, multiplied by one plus the company's long-term growth rate, and then divided by the difference between the cost of capital and the growth rate. 
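The closed form PV = C/r can be checked against a truncated sum of discounted cash flows; as the number of periods grows, the sum converges to the closed form. A minimal sketch (the cash flow and rate below are illustrative, not from the article):

```python
# The perpetuity formula PV = C / r agrees with the limit of the
# discounted cash-flow sum C/(1+r) + C/(1+r)^2 + ...

def perpetuity_pv(cash_flow, rate):
    return cash_flow / rate

def truncated_pv(cash_flow, rate, periods):
    return sum(cash_flow / (1 + rate) ** t for t in range(1, periods + 1))

C, r = 1000.0, 0.05
exact = perpetuity_pv(C, r)          # 20000.0
approx = truncated_pv(C, r, 2000)    # converges to the closed form
assert abs(exact - 20000.0) < 1e-9
assert abs(exact - approx) < 1e-6 * exact
```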
Simply put, the terminal value is some amount of cash flows divided by some discount rate, which is the basic formula for a perpetuity.

Perpetuity Example

For example, if a company is projected to make $100,000 in year 10, and the company's cost of capital is 8%, with a long-term growth rate of 3%, the value of the perpetuity is as follows:

\begin{aligned} &= \frac{ \text{Cash Flow}_\text{Year 10} \times ( 1 + g ) }{ r - g } \\ &= \frac{ \$100,000 \times 1.03 }{ 0.08 - 0.03 } \\ &= \frac{ \$103,000 }{ 0.05 } \\ &= \$2.06 \text{ million} \\ \end{aligned}

This means that $100,000 paid into a perpetuity, assuming a 3% rate of growth with an 8% cost of capital, is worth $2.06 million in 10 years. Now, a person must find the value of that $2.06 million today. To do this, analysts use another formula referred to as the present value of a perpetuity.

What Is a Perpetuity?

A perpetuity is a financial instrument that offers a stream of cash flows in perpetuity—that is, without end. Before 2015, the U.K. offered a government bond called a "consol" that was structured as a perpetuity, although these instruments have since been discontinued. Unlike other bonds, perpetuities do not have a fixed maturity date, but instead, continue paying interest indefinitely.

How Is a Perpetuity Valued?

At first glance, it may seem as though an instrument that offers an infinite stream of cash flows would be almost infinitely valuable, but this is not the case. Mathematically speaking, the value of a perpetuity is finite, and its value can be determined by discounting its future cash flows to the present using a specified discount rate. This procedure, known as discounted cash flow (DCF) analysis, is also widely used to value other types of securities, such as stocks, bonds, and real estate investments. 
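The worked example above can be reproduced in a few lines, using the article's own inputs (cash flow of $100,000 in year 10, cost of capital of 8%, long-term growth of 3%):

```python
# Growing-perpetuity (terminal value) formula: CF * (1 + g) / (r - g).

def terminal_value(cash_flow, r, g):
    return cash_flow * (1 + g) / (r - g)

tv = terminal_value(100_000, 0.08, 0.03)
assert abs(tv - 2_060_000) < 1e-3  # $2.06 million, as in the example
```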
What Is the Difference Between a Perpetuity and an Annuity?

A perpetuity and an annuity are similar instruments in that both offer a fixed set of cash flows over time. However, the key difference between them is that annuities have a predetermined end date, known as the "maturity date," whereas perpetuities are intended to last forever. Importantly, both annuities and perpetuities can be valued using DCF analysis.

Federal Reserve Bank of St. Louis. "Consols: The Never-Ending Bonds." Accessed Dec. 6, 2021.
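The distinction between the two instruments shows up cleanly in their valuation formulas: an ordinary annuity of n payments is worth C(1 − (1 + r)^−n)/r, which approaches the perpetuity value C/r as the maturity grows. A minimal sketch with illustrative numbers:

```python
# An annuity's present value approaches the perpetuity value C / r
# as the number of payments n grows without bound.

def annuity_pv(C, r, n):
    # Standard closed form for an ordinary annuity of n payments.
    return C * (1 - (1 + r) ** -n) / r

C, r = 500.0, 0.04
perpetuity = C / r  # 12500.0
for n in (10, 50, 200):
    assert annuity_pv(C, r, n) < perpetuity  # finite maturity is worth less
assert abs(annuity_pv(C, r, 2000) - perpetuity) < 1e-6 * perpetuity
```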
Laplace Transform Formula

The Laplace transform of f(t), denoted by F(s), is defined by the equation

`F(s) = int_0^oo e^(-st) f(t) dt`,

and we write: ℒ `{f(t)}=F(s)`. Similarly, the Laplace transform of a function g(t) would be written: ℒ `{g(t)}=G(s)`.

The best way to convert differential equations into algebraic equations is the use of Laplace transformation; the Laplace transform is used to solve differential equations (see https://www.wikihow.com/Calculate-the-Laplace-Transform-of-a-Function). The similarity between the Z and Laplace transforms is expanded upon in the theory of time scale calculus.

As a holomorphic function, the Laplace transform has a power series representation. One can define the Laplace transform of a finite Borel measure μ by the Lebesgue integral.[18] An important special case is where μ is a probability measure, for example, the Dirac delta function. In pure and applied probability, the Laplace transform is defined as an expected value. The Laplace transform of a signal f(t) is denoted by L{f(t)} = F(s). Of particular use is the ability to recover the cumulative distribution function of a continuous random variable X by means of the Laplace transform;[19] the cumulative distribution function is the integral of the probability density function. If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the defining limit exists, and it converges absolutely if the corresponding integral does. The Laplace transform we defined is sometimes called the one-sided Laplace transform.

As a worked example, consider the transformed equation L(y)(s − 2) + 5 = 1/(s − 3), obtained using the linearity property of the Laplace transform together with the given initial value y(0) = −5. Solving for L(y) gives L(y) = (−5s + 16)/((s − 2)(s − 3)), which can be written as −6/(s − 2) + 1/(s − 3) using the partial fraction method.
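Inverting the partial-fraction result term by term (using ℒ{e^(at)} = 1/(s − a)) gives y(t) = −6e^(2t) + e^(3t), which should solve the underlying equation y′ − 2y = e^(3t) with y(0) = −5. A minimal check (the solution formula is derived here from the partial fractions above, not stated in the original):

```python
import math

# Verify y(t) = -6 e^{2t} + e^{3t} satisfies y' - 2y = e^{3t}, y(0) = -5,
# the ODE behind the transform equation L(y)(s - 2) + 5 = 1/(s - 3).

def y(t):
    return -6 * math.exp(2 * t) + math.exp(3 * t)

def y_prime(t):
    return -12 * math.exp(2 * t) + 3 * math.exp(3 * t)

assert abs(y(0.0) + 5) < 1e-12  # initial condition y(0) = -5
for t in (0.0, 0.5, 1.0, 2.0):
    assert abs(y_prime(t) - 2 * y(t) - math.exp(3 * t)) < 1e-9 * math.exp(3 * t)
```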
The Laplace transform is usually restricted to transformation of functions of t with t ≥ 0, and it is defined as a unilateral or one-sided transform. Like all transforms, the Laplace transform changes one signal into another according to some fixed set of rules or equations. The Laplace transform of a sum is the sum of the Laplace transforms of each term. Given the functions f(t) and g(t), and their respective Laplace transforms F(s) and G(s), the standard tables list the properties of the unilateral Laplace transform.[22] The Laplace transform can be viewed as a continuous analogue of a power series. In the region of convergence Re(s) > Re(s0), the Laplace transform of f can be expressed by integrating by parts.

Then, using linearity of the Laplace transformation together with the table, essentially the trick is to reduce the given function to one of the elementary functions whose Laplace transform may be found in the table.

In circuit analysis, initial conditions enter as sources in the s-domain: for example, if a capacitor has an initial voltage across it, or if the inductor has an initial current through it, the sources inserted in the s-domain account for that. In partial-fraction expansions of a transfer function, the unknown constants P and R are the residues located at the corresponding poles of the transfer function. In the sampled-signal setting, T = 1/fs is the sampling period (in units of time, e.g., seconds) and fs is the sampling rate (in samples per second, or hertz).

These types of integrals seem first to have attracted Laplace's attention in 1782, where he was following in the spirit of Euler in using the integrals themselves as solutions of equations.[13][14]
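The defining integral and the linearity property can both be checked directly by numerical quadrature. The sketch below is illustrative (the truncation length and step count are our own choices): it approximates F(s) = ∫₀^∞ e^(−st) f(t) dt by a truncated trapezoidal sum and compares against the known transforms ℒ{e^(−t)} = 1/(s + 1) and ℒ{1} = 1/s.

```python
import math

# Truncated trapezoidal approximation of the one-sided Laplace integral.
def laplace_num(f, s, T=60.0, n=200_000):
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 2.0
F = laplace_num(lambda t: math.exp(-t), s)
assert abs(F - 1 / (s + 1)) < 1e-6          # L{e^{-t}} = 1/(s + 1)

# Linearity: the transform of a sum is the sum of the transforms.
G = laplace_num(lambda t: 1.0, s)           # L{1} = 1/s
H = laplace_num(lambda t: math.exp(-t) + 1.0, s)
assert abs(G - 1 / s) < 1e-6
assert abs(H - (F + G)) < 1e-6
```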
The inverse Laplace transform is given by the following complex integral, which is known by various names (the Bromwich integral, the Fourier–Mellin integral, and Mellin's inverse formula):

`f(t) = 1/(2 pi i) int_(gamma - i oo)^(gamma + i oo) e^(st) F(s) ds`,

where γ is a real number so that the contour path of integration is in the region of convergence of F(s); the integration is done along this vertical line. Techniques of complex variables can also be used to directly study Laplace transforms. The Mellin transform and its inverse are related to the two-sided Laplace transform by a simple change of variables.

Table of Laplace Transforms

f(t) → F(s) = `int_0^oo f(t) e^(-st) dt`
f + g → F + G
αf (α ∈ ℝ) → αF

The Laplace transform of the derivative of a function is s times the Laplace transform of that function, minus the initial value of that function: ℒ{f′(t)} = sF(s) − f(0). Although with the Lebesgue integral it is not necessary to take such a limit, the limit does appear more naturally in connection with the Laplace–Stieltjes transform. If the Laplace transform converges (conditionally) at s = s0, then it automatically converges for all s with Re(s) > Re(s0). For example, the function f(t) = cos(ω0t) has a Laplace transform F(s) = s/(s² + ω0²) whose ROC is Re(s) > 0.

Consider a linear time-invariant system with transfer function H(s) = P/(s + α) + R/(s + β). To find the residue P, we multiply both sides of the equation by s + α; then by letting s = −α, the contribution from R vanishes and all that is left is P. The substitution of R and P into the expanded expression for H(s) gives the partial-fraction form, and so, finally, using the linearity property and the known transform for exponential decay (see Item #3 in the Table of Laplace Transforms, above), we can take the inverse Laplace transform of H(s) to obtain the time-domain response.
Example 2: Find the Laplace transform of 5^t. Solution: observe that 5^t = e^{t log 5}, so L{5^t} = 1/(s − log 5) for Re(s) > log 5. [15] However, in 1785, Laplace took the critical step forward when, rather than simply looking for a solution in the form of an integral, he started to apply the transforms in the sense that was later to become popular. The Laplace transform (or Laplace method) is named in honor of the great French mathematician Pierre-Simon de Laplace (1749–1827). The Laplace transform is analytic in the region of absolute convergence: this is a consequence of Fubini's theorem and Morera's theorem. The Laplace transform reduces a linear differential equation to an algebraic equation, which can then be solved by the formal rules of algebra. Suppose that we have a piecewise continuous function f(t) defined for the interval t ∈ (0, ∞). The Laplace transform is similar to the Fourier transform. It is often convenient to use the differentiation property of the Laplace transform to find the transform of a function's derivative. [16] Laplace also recognised that Joseph Fourier's method of Fourier series for solving the diffusion equation could only apply to a limited region of space, because those solutions were periodic. Let Δ_T(t) = Σ_{n=0}^∞ δ(t − nT) be a sampling impulse train (also called a Dirac comb), and let x_q(t) = x(t)Δ_T(t) be the sampled representation of the continuous-time x(t). The Laplace transform of the sampled signal x_q(t) is X_q(s) = Σ_{n=0}^∞ x(nT) e^{−nTs}; with z = e^{sT}, this is the precise definition of the unilateral Z-transform of the discrete function x[n] = x(nT). The Laplace transform has applications throughout probability theory, including first passage times of stochastic processes such as Markov chains, and renewal theory.
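Example 2 can be verified symbolically. A sketch assuming sympy is available; we feed the rewritten form e^{t log 5} directly, mirroring the trick in the text:

```python
# Verify Example 2: L{5^t} = L{e^{t log 5}} = 1/(s - log 5).
import sympy as sp

t, s = sp.symbols('t s', positive=True)

F = sp.laplace_transform(sp.exp(t*sp.log(5)), t, s, noconds=True)
assert sp.simplify(F - 1/(s - sp.log(5))) == 0
```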
The original differential equation can then be solved by applying the inverse Laplace transform. However, for many applications it is necessary to regard it as a conditionally convergent improper integral at ∞. In the wide sense, the transform is the Laplace integral

$$ \tag {1} F ( p) = \int\limits _ { L } f ( z) e ^ {- p z } d z . $$

The inverse Laplace transform can be described as the transformation back into a function of time. Changing the base of the power from x to e gives ∫₀^∞ f(t) e^{t ln x} dt. Well, the Laplace transform of anything, by our definition so far, is the integral from 0 to infinity of e^{−st} times our function. In mathematics, the Laplace transform, named after its inventor Pierre-Simon Laplace (/ləˈplɑːs/), is an integral transform that converts a function of a real variable t into a function of a complex variable s. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a probability density function f. In that case, to avoid potential confusion, one often writes the lower limit as 0⁻, shorthand for lim_{ε→0⁺} ∫_{−ε}^∞. The Laplace transform is also defined and injective for suitable spaces of tempered distributions. As expected, proving these formulas is straightforward as long as we use the precise form of the Laplace integral. We know that the Laplace transform simplifies a given LDE (linear differential equation) to an algebraic equation, which can later be solved using the standard algebraic identities.
The Laplace transform is often used in circuit analysis, and simple conversions to the s-domain of circuit elements can be made. Here, replacing s by −t gives the moment generating function of X. A useful property of the Laplace transform is the following: under suitable assumptions on the behaviour of f in a right neighbourhood of 0 and on the decay rate of f in a left neighbourhood of ∞, the initial and final value theorems relate the limits of f(t) to those of sF(s). In the contour-integral form above, the integration is carried out over some contour L in the complex z-plane, which sets up a correspondence between a function f(z), defined on L, and an analytic function F(p) of the complex variable p. The transform turns integral equations and differential equations into polynomial equations, which are much easier to solve. There is a two-sided version where the integral goes from −∞ to ∞. Conversely, the inverse transform is used to calculate the solution of the given problem. Given a simple mathematical or functional description of an input or output to a system, the Laplace transform provides an alternative functional description that often simplifies the process of analyzing the behavior of the system, or of synthesizing a new system based on a set of specifications.[31] Therefore, we can write this inverse Laplace transform formula as … This power series expresses a function as a linear superposition of moments of the function. From 1744, Leonhard Euler investigated integrals of the form shown above as solutions of differential equations, but did not pursue the matter very far. For this function, we need only ramps and steps; we apply a ramp function at each change in slope of y(t), and apply a step at each discontinuity. The above relation is valid as stated if and only if the region of convergence (ROC) of F(s) contains the imaginary axis, σ = 0. The current widespread use of the transform (mainly in engineering) came about during and soon after World War II,[10] replacing the earlier Heaviside operational calculus. He used an integral of the form, akin to a Mellin transform, to transform the whole of a difference equation, in order to look for solutions of the transformed equation. The constant a is known as the abscissa of absolute convergence, and depends on the growth behavior of f(t).
There are several Paley–Wiener theorems concerning the relationship between the decay properties of f and the properties of the Laplace transform within the region of convergence. An elementary example: the Laplace transform of f(t) = sin t is L{sin t} = 1/(s² + 1). Even when the interchange cannot be justified, the calculation can be suggestive. And remember, the Laplace transform is just a definition. Making the substitution −s = ln x gives just the Laplace transform: in other words, the Laplace transform is a continuous analog of a power series, in which the discrete parameter n is replaced by the continuous parameter t, and x is replaced by e^{−s}. In particular, it is analytic. Therefore, the region of convergence is a half-plane of the form Re(s) > a, possibly including some points of the boundary line Re(s) = a. The inverse can be written as L⁻¹[F(s)](t). Because of this property, the Laplace variable s is also known as the operator variable in the L domain: either a derivative operator or (for s⁻¹) an integration operator. [12] Joseph Louis Lagrange was an admirer of Euler and, in his work on integrating probability density functions, investigated integral expressions which some modern historians have interpreted within modern Laplace transform theory. The (unilateral) Laplace–Stieltjes transform of a function g : R → R is defined by the Lebesgue–Stieltjes integral ∫₀^∞ e^{−st} dg(t). For instance, this holds for the above example provided that the limit is understood as a weak limit of measures (see vague topology). A causal system is a system whose impulse response h(t) is zero for all time t prior to t = 0.
Standard notation: where the notation is clear, we will use an uppercase letter to indicate the Laplace transform, e.g., L(f; s) = F(s). Two elementary table entries are 1 ↔ 1/s and e^{at} ↔ 1/(s − a); here s = σ + jω. [1][2][3] The Laplace transform is named after mathematician and astronomer Pierre-Simon Laplace, who used a similar transform in his work on probability theory. A function f with L{f} = F is said to be an inverse Laplace transform of F(s). The Laplace transform is the essential makeover of the given function into the complex-frequency domain. The inverse Laplace transform is known as the Bromwich integral, sometimes known as the Fourier–Mellin integral (see also the related Duhamel's convolution principle). Most table entries can be proved using basic rules of integration. The inverse Laplace transform converts a frequency-domain signal into a time-domain signal. In the power-series analogy, the discrete function a(n) is replaced by the continuous one f(t), giving (Lf)(x) = ∫₀^∞ f(s) e^{−sx} ds whenever the improper integral converges. Using this linearity, and various trigonometric, hyperbolic, and complex-number identities, many transforms can be combined. The Laplace transform is used frequently in engineering and physics; the output of a linear time-invariant system can be calculated by convolving its unit impulse response with the input signal. This perspective has applications in probability theory. To find the Laplace transform of a function shown graphically, we need to figure out how to represent the function as the sum of functions with which we are familiar.
The continuous Fourier transform is equivalent to evaluating the bilateral Laplace transform with imaginary argument s = iω or s = 2πfi[26] when the condition explained below is fulfilled. The unilateral or one-sided Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution z = e^{sT}. A consequence of this restriction is that the Laplace transform of a function is a holomorphic function of the variable s. Unlike the Fourier transform, the Laplace transform of a distribution is generally a well-behaved function. Two integrable functions have the same Laplace transform only if they differ on a set of Lebesgue measure zero. Computing only this one-sided integral is also known as taking a unilateral Laplace transform. He then went on to apply the Laplace transform in the same way and started to derive some of its properties, beginning to appreciate its potential power. While the Fourier transform of a function is a complex function of a real variable (frequency), the Laplace transform of a function is a complex function of a complex variable. The meaning of the integral depends on the types of functions of interest. Comparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal. In the wide sense it is a Laplace integral of the form given above. Once solved, use of the inverse Laplace transform reverts to the original domain. In particular, it transforms differential equations into algebraic equations and convolution into multiplication. This list is not a complete listing of Laplace transforms and only contains some of the more commonly used Laplace transforms and formulas.
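The relationship between the Laplace transform of a sampled signal and the Z-transform at z = e^{sT} can be illustrated numerically. A small sketch with toy numbers of our own choosing (x(t) = e^{−at}, real s):

```python
# The Laplace transform of the ideally sampled x(t) = exp(-a*t) is
# sum_n x(nT) * exp(-s*n*T); it equals the Z-transform
# X(z) = 1/(1 - exp(-a*T)/z) evaluated at z = exp(s*T).
import math

a, T, s = 1.0, 0.1, 2.0   # decay rate, sampling period, (real) Laplace variable

sampled = sum(math.exp(-a*n*T) * math.exp(-s*n*T) for n in range(2000))
z = math.exp(s*T)
X_of_z = 1.0 / (1.0 - math.exp(-a*T) / z)

assert abs(sampled - X_of_z) < 1e-9
```

The partial sum converges geometrically, so 2000 terms are far more than enough at these parameter values.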
That is, F(s) can effectively be expressed, in the region of convergence, as the absolutely convergent Laplace transform of some other function. Nachbin's theorem gives necessary and sufficient conditions for the Borel transform to be well defined. The formal properties of calculus integrals plus the integration-by-parts formula used in Tables 2 and 3 lead to these rules for the Laplace transform: L(f(t) + g(t)) = L(f(t)) + L(g(t)); the integral of a sum is the sum of the integrals. This ROC is used in determining the causality and stability of a system. If the first n moments of f converge absolutely, then by repeated differentiation under the integral the derivatives of the transform yield the moments of f. This is of special significance in probability theory, where the moments of a random variable X are given by the expectation values E[X^n]. Consider y′ − 2y = e^{3x} with y(0) = −5. The Laplace transform is the integral transform of a given function of a real variable t into a complex function of a variable s. For t ≥ 0, let f(t) be given and assume the function satisfies certain conditions to be stated later on. So our function in this case is the unit step function u_c(t) times f(t − c).
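The initial-value problem y′ − 2y = e^{3x}, y(0) = −5 can be carried through the Laplace method step by step. A sketch assuming sympy is available; the algebra mirrors the usual procedure (transform, solve for Y(s), expand in partial fractions, invert):

```python
# Laplace method for y' - 2y = e^{3t}, y(0) = -5.
# Transforming: (s*Y - y(0)) - 2*Y = 1/(s - 3), i.e. (s - 2)*Y = 1/(s - 3) - 5.
import sympy as sp

t, s = sp.symbols('t s', positive=True)

Y = (1/(s - 3) - 5) / (s - 2)                      # solved algebraically for Y(s)
y = sp.inverse_laplace_transform(sp.apart(Y, s), s, t)
y = y.subs(sp.Heaviside(t), 1)                      # we only consider t >= 0

assert sp.simplify(y - (sp.exp(3*t) - 6*sp.exp(2*t))) == 0
assert sp.simplify(sp.diff(y, t) - 2*y - sp.exp(3*t)) == 0   # satisfies the ODE
assert y.subs(t, 0) == -5                                    # initial condition
```

Partial fractions give Y(s) = 1/(s − 3) − 6/(s − 2), hence y(t) = e^{3t} − 6e^{2t}.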
The following table provides Laplace transforms for many common functions of a single variable. The transfer function can also be obtained using the Fourier transform, which is a special case of the bilateral Laplace transform for the case s = iω. In every case it is the integral from 0 to infinity of e^{−st} times whatever we are taking the Laplace transform of. [4] Laplace wrote extensively about the use of generating functions in Essai philosophique sur les probabilités (1814), and the integral form of the Laplace transform evolved naturally as a result. Recall that cosh(t) = (e^t + e^{−t})/2 and sinh(t) = (e^t − e^{−t})/2. The sources are put in if there are initial conditions on the circuit elements. The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former but not in the latter sense. The Laplace transform is invertible on a large class of functions. [5] Laplace's use of generating functions was similar to what is now known as the z-transform, and he gave little attention to the continuous-variable case, which was discussed by Niels Henrik Abel. Pierre-Simon Laplace introduced a more general form of the Fourier analysis that became known as the Laplace transform. The limit here is interpreted in the weak-* topology. The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), a unilateral transform defined by F(s) = ∫₀^∞ f(t) e^{−st} dt. [6] The theory was further developed in the 19th and early 20th centuries by Mathias Lerch,[7] Oliver Heaviside,[8] and Thomas Bromwich.[9] It is accepted widely in many fields. The inversion of a complex function F(s) to produce a real-valued function f(t) is an inverse Laplace transformation of the function. In practice, it is typically more convenient to decompose a Laplace transform into known transforms of functions obtained from a table, and construct the inverse by inspection.
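The defining integral can be checked numerically against a table entry. A stdlib-only sketch with toy numbers of our own (f(t) = cos t at s = 2, where the table gives s/(s² + 1) = 2/5):

```python
# Trapezoidal approximation of F(s) = integral_0^inf f(t) exp(-s*t) dt.
import math

def laplace_at(f, s, upper=40.0, n=400_000):
    """Approximate the Laplace integral of f on [0, upper] (tail is negligible)."""
    h = upper / n
    total = 0.5 * (f(0.0) + f(upper) * math.exp(-s * upper))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

approx = laplace_at(math.cos, 2.0)
assert abs(approx - 2.0 / 5.0) < 1e-6   # L{cos t}(2) = 2/(2^2 + 1)
```

The exponential factor makes the truncated tail beyond t = 40 smaller than e^{−80}, so truncation error is dominated by the quadrature step.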
When one says "the Laplace transform" without qualification, the unilateral or one-sided transform is usually intended. It takes a function of a real variable t into a complex function of the variable s. The unilateral Laplace transform is implemented in the Wolfram Language as LaplaceTransform[f[t], t, s] and the inverse Laplace transform as InverseLaplaceTransform. If X is a random variable with probability density function f, then the Laplace transform of f is given by the expectation E[e^{−sX}]. The definition of the complex impedance Z (in ohms) is the ratio of the complex voltage V divided by the complex current I while holding the initial state V0 at zero; using this definition and the previous equation, we find Z = 1/(sC), which is the correct expression for the complex impedance of a capacitor. In addition, the Laplace transform has large applications in control theory. The same result can be achieved using the convolution property, treating the system as a series of filters with transfer functions 1/(s + a) and 1/(s + b). The Laplace transform of a multiple of a function is that multiple times the Laplace transform of the function. [20] Analogously, the two-sided transform converges absolutely in a strip of the form a < Re(s) < b, possibly including the lines Re(s) = a or Re(s) = b. This function is an exponentially restricted real function. The result holds under much weaker conditions. The Laplace transform is named in honour of the great French mathematician Pierre-Simon de Laplace (1749–1827).
However, a different point of view and different characteristic problems are associated with each of these four major integral transforms. The Borel transform is a special case of the Laplace transform for f an entire function of exponential type. This definition assumes that the signal f(t) is only defined for all real numbers t ≥ 0, or that f(t) = 0 for t < 0. The transform has many applications in science and engineering because it is a tool for solving differential equations. This means that, on the range of the transform, there is an inverse transform. The validity of this identity can be proved by other means. The Laplace transform has a number of properties that make it useful for analyzing linear dynamical systems. Some of the Laplace transform properties are stated for pairs f1(t) ⟷ F1(s), where ⟷ denotes a Laplace-transform pair. In 1809, Laplace applied his transform to find solutions that diffused indefinitely in space. [Figure: animation showing how adding together curves, such as e^t cos(10t), can approximate a function and build up a Laplace transform.] The Laplace transform of the delta function: since the Laplace transform is given by an integral, it should be easy to compute it for the delta function. Symbolically, this is expressed by a differential equation. If we set θ = e^{−t}, we get a two-sided Laplace transform. A pair of complex poles is simple if it is not repeated; it is a double or multiple pole if repeated. The transform takes a time-domain function f(t) into the s-plane by taking the integral of the function multiplied by e^{−st} from 0⁻ to ∞, where s is a complex number frequency parameter. Many mathematical problems are solved using transformations.
The impulse response is simply the inverse Laplace transform of this transfer function: to evaluate this inverse transform, we begin by expanding H(s) using the method of partial fraction expansion. Example: Laplace transform of a triangular pulse. In most applications, the contour can be closed, allowing the use of the residue theorem. The entries of the table that involve a time delay τ are required to be causal (meaning that τ > 0). The Laplace transform converts a time-domain function to an s-domain function by integration from zero to infinity of the time-domain function multiplied by e^{−st}. In the two-sided case, it is sometimes called the strip of absolute convergence. Circuit elements can be transformed into impedances, very similar to phasor impedances. L(cf(t)) = cL(f(t)): constants c pass through the integral sign. By the residue theorem, the inverse Laplace transform depends only upon the poles and their residues. [21] The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence, or the domain of absolute convergence. The Laplace transform can be alternatively defined as the bilateral Laplace transform, or two-sided Laplace transform, by extending the limits of integration to the entire real axis. The set of values for which F(s) converges absolutely is either of the form Re(s) > a or Re(s) ≥ a, where a is an extended real constant with −∞ ≤ a ≤ ∞ (a consequence of the dominated convergence theorem). English electrical engineer Oliver Heaviside first proposed a similar scheme, although without using the Laplace transform; the resulting operational calculus is credited as the Heaviside calculus. The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is why all of the time-domain functions in the table below are multiples of the Heaviside step function u(t). The answer is 1.
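The partial-fraction route to the impulse response can be sketched concretely. Assuming sympy is available and choosing our own concrete poles (α = 1, β = 2, i.e. H(s) = 1/((s + 1)(s + 2)), not values from the text):

```python
# Expand H(s) in partial fractions, then invert term by term.
import sympy as sp

t, s = sp.symbols('t s', positive=True)

H = 1 / ((s + 1) * (s + 2))
expanded = sp.apart(H, s)
assert sp.simplify(expanded - (1/(s + 1) - 1/(s + 2))) == 0   # residues +1 and -1

h = sp.inverse_laplace_transform(H, s, t).subs(sp.Heaviside(t), 1)
assert sp.simplify(h - (sp.exp(-t) - sp.exp(-2*t))) == 0       # impulse response
```

Each simple pole contributes its residue times a decaying exponential, exactly as the residue-theorem discussion above describes.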
This method is used to find the approximate value of the integral of the given function. General conditions relating the limit of the Laplace transform of a function on the boundary to the Fourier transform take the form of Paley–Wiener theorems. It is used to convert complex differential equations to a simpler form having polynomials. As we know, the Laplace transform of sin(at) is a/(s² + a²). Since an ordinary Laplace transform can be written as a special case of a two-sided transform, and since the two-sided transform can be written as the sum of two one-sided transforms, the theories of the Laplace, Fourier, Mellin, and Z transforms are at bottom the same subject. An alternative formula for the inverse Laplace transform is given by Post's inversion formula. It's just a tool that has turned out to be extremely useful. Assuming Fubini's theorem holds, reversing the order of integration turns the left-hand side into the wanted right-hand side. We find the inverse by first rearranging terms in the fraction; we are then able to take the inverse Laplace transform of our terms, and the result is just the sine of the sum of the arguments. In statistical mechanics, the Laplace transform of the density of states g(E) defines the partition function Z(β).[32] That is, it gives the canonical partition function. The function g is assumed to be of bounded variation. The idea is to transform the problem into another problem that is easier to solve. This is deduced using the nature of frequency differentiation and conditional convergence. [24] If a(n) is a discrete function of a positive integer n, then the power series associated to a(n) is the series Σ a(n)xⁿ, where x is a real variable (see Z-transform). (From Encyclopedia of Mathematics.) The Laplace transform is an integral transform widely used to solve differential equations with constant coefficients. In general, the Laplace–Stieltjes transform is the Laplace transform of the Stieltjes measure associated to g. So in practice, the only distinction between the two transforms is that the Laplace transform is thought of as operating on the density function of the measure, whereas the Laplace–Stieltjes transform is thought of as operating on its cumulative distribution function.[25] If a function f(t), continuous on [0, ∞), has the Laplace transform F(s), then f(t) is called the inverse Laplace transform of F(s). In fact, besides integrable functions, the Laplace transform is a one-to-one mapping from one function space into another in many other function spaces as well, although there is usually no easy characterization of the range. [17] The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), a unilateral transform, where s is a complex number frequency parameter. The transforms are typically very straightforward, but there are functions whose Laplace transforms cannot easily be found using elementary methods.
For better understanding, let us solve a first-order differential equation with the help of the Laplace transform. It is important to understand not just the tables but the formula itself. Here C is the capacitance (in farads) of the capacitor, i = i(t) is the electric current (in amperes) through the capacitor as a function of time, and v = v(t) is the voltage (in volts) across the terminals of the capacitor, also as a function of time. As a result, LTI systems are stable provided that the poles of the Laplace transform of the impulse response function have negative real part. Remember that we consider all functions (signals) as defined only on t ≥ 0. In these cases, the image of the Laplace transform lives in a space of analytic functions in the region of convergence. The result is the impulse response of the system. The formulae given below are very useful for solving many Laplace-transform-based problems. For t ≥ 0, let f(t) be given and assume the function fulfills certain conditions to be stated later. In general, the region of convergence for causal systems is not the same as that of anticausal systems. If that is done, the common unilateral transform simply becomes a special case of the bilateral transform, where the definition of the function being transformed is multiplied by the Heaviside step function. The equivalents for current and voltage sources are simply derived from the transformations in the table above. A necessary condition for existence of the integral is that f must be locally integrable on [0, ∞). The advantages of the Laplace transform had been emphasized by Gustav Doetsch,[11] to whom the name Laplace transform is apparently due. Find the value of L(y). For this to converge for, say, all bounded functions f, it is necessary to require that ln x < 0.
Replacing summation over n with integration over t, a continuous version of the power series results, because the Laplace transform is a linear operator. This is equivalent to the absolute convergence of the Laplace transform of the impulse response function in the region Re(s) ≥ 0. Definition: the Laplace transform of a function f(t) is written F(s), where F(s) is the symbol for the Laplace transform, L is the Laplace transform operator, and f(t) is some function of time t. Note that the L operator transforms a time-domain function f(t) into an s-domain function F(s). The most significant advantage is that differentiation becomes multiplication, and integration becomes division, by s (reminiscent of the way logarithms change multiplication to addition of logarithms). Using various trigonometric, hyperbolic, and complex-number properties and identities, some Laplace transforms can be obtained from others more quickly than by using the definition directly. In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. In the Laplace inverse formula, F(s) is the transform of f(t), while in the inverse transform f(t) is the inverse Laplace transform of F(s). In the theory of electrical circuits, the current flow in a capacitor is proportional to the capacitance and the rate of change of the electrical potential (in SI units): i = C dv/dt.
Still more generally, the integral can be understood in a weak sense, and this is dealt with below. It is an example of a Frullani integral. This limit emphasizes that any point mass located at 0 is entirely captured by the Laplace transform. The inverse Laplace transform is when we go from a function F(s) to a function f(t). The Unit Step Function - Products; 2. { The Laplace transform can also be used to solve differential equations and is used extensively in mechanical engineering and electrical engineering. The Laplace transform of f(t), that it is denoted by f(t) or F(s) is defined by the equation. {\displaystyle f,g} Laplace transform. ) denotes the nth derivative of f, can then be established with an inductive argument. exists as a proper Lebesgue integral. laplace g ( t) = 3sinh ( 2t) + 3sin ( 2t) $inverse\:laplace\:\frac {s} {s^2+4s+5}$. Transform of Periodic Functions; 6. ℒ̇= −(0) (3) In practice, we do not need to actually find this infinite integral for each function f(t) in order to find the Laplace Transform. s ( t) = e t + e − t 2 sinh. n (often time) to a function of a complex variable The Laplace transform is a well established mathematical technique for solving a differential equation. in a right neighbourhood of Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). {\displaystyle 0} As s = iω is a pole of F(s), substituting s = iω in F(s) does not yield the Fourier transform of f(t)u(t), which is proportional to the Dirac delta-function δ(ω − ω0). Properties of Laplace Transform; 4. Each residue represents the relative contribution of that singularity to the transfer function's overall shape. For locally integrable functions that decay at infinity or are of exponential type, the integral can be understood to be a (proper) Lebesgue integral. Note that the resistor is exactly the same in the time domain and the s-domain. 
This page was last edited on 1 December 2020, at 12:19. The linearity property of the Laplace Transform states: This is easily proven from the definition of the Laplace Transform L(δ(t − a)) = e−as for a > 0. ( f Inverse of the Laplace Transform; 8. d Recall the definition of hyperbolic functions. Typical function spaces in which this is true include the spaces of bounded continuous functions, the space L∞(0, ∞), or more generally tempered distributions on (0, ∞). To learn more in detail visit the link given for inverse laplace transform. L(δ(t)) = 1. {\displaystyle s\rightarrow 0} Below examples are based on some important elementary functions of Laplace transform. 2. f g } [27][28] For definitions and explanations, see the Explanatory Notes at the end of the table. If g is the antiderivative of f: then the Laplace–Stieltjes transform of g and the Laplace transform of f coincide. 0 = Performing this calculation in Laplace space turns the convolution into a multiplication; the latter being easier to solve because of its algebraic form. ∫ "The Laplace Transform of f(t) equals function F of s". f(t)= L-1{Fs( )}F(s)= L{ ft( )}f(t)= L-1{Fs( )}F(s)= L{ ft( )} 1. Augustinus Bader Sale, Rme Authorized Dealers, Sony A7siii Review, Is Fedora Debian Or Rpm Based, Nina Tower Hotel, Soapstone Metaphysical Properties, Maui Curl Milk Uk, Ux Research Book Pdf, 2020 laplace transform formula
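As a quick numerical sanity check (not part of the original page; the helper name `laplace_numeric` is ours), the defining integral and the linearity property can be verified with standard quadrature:

```python
import numpy as np
from scipy.integrate import quad

def laplace_numeric(f, s, upper=60.0):
    """Approximate F(s) = integral_0^inf e^(-s t) f(t) dt by truncated quadrature."""
    val, _ = quad(lambda t: np.exp(-s * t) * f(t), 0.0, upper)
    return val

# Known pair: f(t) = e^(-2t)  =>  F(s) = 1/(s + 2), region of convergence Re(s) > -2.
f = lambda t: np.exp(-2.0 * t)
for s in (0.5, 1.0, 3.0):
    assert abs(laplace_numeric(f, s) - 1.0 / (s + 2.0)) < 1e-6

# Linearity: L{3 f + 5 g} = 3 F + 5 G, with g(t) = sin(t) and L{sin t} = 1/(s^2 + 1).
g = np.sin
s = 2.0
lhs = laplace_numeric(lambda t: 3.0 * f(t) + 5.0 * g(t), s)
rhs = 3.0 / (s + 2.0) + 5.0 / (s**2 + 1.0)
assert abs(lhs - rhs) < 1e-6
print("numerical checks passed")
```

The truncation at `upper=60` is harmless here because the integrands decay exponentially; for slowly decaying $f$ the upper limit (and the ROC) must be handled more carefully.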
Global phase portraits and bifurcation diagrams for reversible equivariant Hamiltonian systems of linear plus quartic homogeneous polynomials

Yuzhou Tian and Yulin Zhao, School of Mathematics (Zhuhai), Sun Yat-Sen University, Zhuhai, 519082, China
* Corresponding author: Yulin Zhao
Received July 2019; Published July 2020. doi: 10.3934/dcdsb.2020214
Fund Project: This research is supported by the NSF of China (No. 11971495 and No. 11801582)
Figure(12) / Table(1)

This paper is devoted to the complete classification of global phase portraits for reversible equivariant Hamiltonian systems of linear plus quartic homogeneous polynomials. Such a system is affinely equivalent to one of five normal forms by an algebraic classification of its infinite singular points. Then, we classify the global phase portraits of these normal forms on the Poincaré disc. There are exactly $ 13 $ different global topological structures on the Poincaré disc. Finally, we provide the bifurcation diagrams for the corresponding global phase portraits.

Keywords: Hamiltonian systems, reversible equivariant, quartic polynomial systems, global phase portraits.
Mathematics Subject Classification: Primary: 34C05, 34C20; Secondary: 34C14.
Citation: Yuzhou Tian, Yulin Zhao. Global phase portraits and bifurcation diagrams for reversible equivariant Hamiltonian systems of linear plus quartic homogeneous polynomials. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020214

M. Álvarez, A. Ferragut and X. Jarque, A survey on the blow up technique, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 21 (2011), 3103-3118. doi: 10.1142/S0218127411030416. Google Scholar
A. A. Andronov, A. A. Vitt and S. E. Khaikin, Theory of Oscillators: Adiwes International Series in Physics, vol. 4, Elsevier, 2013. Google Scholar
J. C. Artés and J.
Llibre, Quadratic Hamiltonian vector fields, J. Differential Equations, 107 (1994), 80-95. doi: 10.1006/jdeq.1994.1004. Google Scholar L. Cairó and J. Llibre, Phase portraits of planar semi-homogeneous vector fields. Ⅰ, Nonlinear Anal., 29 (1997), 783-811. doi: 10.1016/S0362-546X(96)00088-0. Google Scholar L. Cairó and J. Llibre, Phase portraits of planar semi-homogeneous vector fields Ⅱ, Nonlinear Anal., 39 (2000), 351-363. doi: 10.1016/S0362-546X(98)00177-1. Google Scholar L. Cairó and J. Llibre, Phase portraits of planar semi-homogeneous vector fields (Ⅲ), Qual. Theory Dyn. Syst., 10 (2011), 203-246. doi: 10.1007/s12346-011-0052-y. Google Scholar A. Cima, A. Gasull and F. Mañosas, On polynomial Hamiltonian planar vector fields, J. Differential Equations, 106 (1993), 367-383. doi: 10.1006/jdeq.1993.1112. Google Scholar A. Cima and J. Llibre, Algebraic and topological classification of the homogeneous cubic vector fields in the plane, J. Math. Anal. Appl., 147 (1990), 420-448. doi: 10.1016/0022-247X(90)90359-N. Google Scholar I. E. Colak, J. Llibre and C. Valls, Hamiltonian linear type centers of linear plus cubic homogeneous polynomial vector fields, J. Differential Equations, 257 (2014), 1623-1661. doi: 10.1016/j.jde.2014.05.024. Google Scholar I. E. Colak, J. Llibre and C. Valls, Hamiltonian nilpotent centers of linear plus cubic homogeneous polynomial vector fields, Adv. Math., 259 (2014), 655-687. doi: 10.1016/j.aim.2014.04.002. Google Scholar I. E. Colak, J. Llibre and C. Valls, Bifurcation diagrams for Hamiltonian linear type centers of linear plus cubic homogeneous polynomial vector fields, J. Differential Equations, 258 (2015), 846-879. doi: 10.1016/j.jde.2014.10.006. Google Scholar I. E. Colak, J. Llibre and C. Valls, Bifurcation diagrams for Hamiltonian nilpotent centers of linear plus cubic homogeneous polynomial vector fields, J. Differential Equations, 262 (2017), 5518-5533. doi: 10.1016/j.jde.2017.02.001. Google Scholar F. S. Dias, J. Llibre and C. 
Valls, Polynomial Hamiltonian systems of degree 3 with symmetric nilpotent centers, Math. Comput. Simulation, 144 (2018), 60-77. doi: 10.1016/j.matcom.2017.06.002. Google Scholar F. Dumortier, Techniques in the theory of local bifurcations: Blow-up, normal forms, nilpotent bifurcations, singular perturbations, in Bifurcations and Periodic Orbits of Vector Fields, Springer, 1993, 19–73. Google Scholar F. Dumortier, J. Llibre and J. C. Artés, Qualitative Theory of Planar Differential Systems, Springer-Verlag, Berlin, 2006. Google Scholar A. Gasull, A. Guillamon and V. Mañosa, Phase portrait of Hamiltonian systems with homogeneous nonlinearities, Nonlinear Anal., 42 (2000), 679-707. doi: 10.1016/S0362-546X(99)00131-5. Google Scholar H. Goldstein, Classical Mechanics, Addison-Wesley Press, Inc., Cambridge, Mass, 1951. Google Scholar A. Guillamon and C. Pantazi, Phase portraits of separable Hamiltonian systems, Nonlinear Anal., 74 (2011), 4012-4035. doi: 10.1016/j.na.2011.03.030. Google Scholar Y. A. Kuznetsov, Elements of Applied Bifurcation Theory, Applied Mathematical Sciences, Springer-Verlag, New York, 1995. doi: 10.1007/978-1-4757-2421-9. Google Scholar J. S. W. Lamb and M. Roberts, Reversible equivariant linear systems, J. Differential Equations, 159 (1999), 239-279. doi: 10.1006/jdeq.1999.3632. Google Scholar H. Liang, J. Huang and Y. Zhao, Classification of global phase portraits of planar quartic quasi-homogeneous polynomial differential systems, Nonlinear Dynam., 78 (2014), 1659-1681. doi: 10.1007/s11071-014-1541-8. Google Scholar J. Llibre, Y. P. Martínez and C. Vidal, Phase portraits of linear type centers of polynomial hamiltonian systems with hamiltonian function of degree $5$ of the form $H = H_1(x)+ H_2(y)$, Discrete Contin. Dyn. Syst. Ser., 39 (2019), 75-113. doi: 10.3934/dcds.2019004. Google Scholar J. Llibre, Y. P. Martínez and C. 
Vidal, Linear type centers of polynomial Hamiltonian systems with nonlinearities of degree 4 symmetric with respect to the $y$-axis, Discrete Contin. Dyn. Syst. Ser. B, 23 (2018), 887-912. doi: 10.3934/dcdsb.2018047. Google Scholar J. Llibre, R. Oliveira and C. Valls, Phase portraits for some symmetric Riccati cubic polynomial differential equations, Topology Appl., 234 (2018), 220-237. doi: 10.1016/j.topol.2017.11.023. Google Scholar J. Llibre and C. Pessoa, Phase portraits for quadratic homogeneous polynomial vector fields on $\Bbb S^2$, Rend. Circ. Mat. Palermo, 58 (2009), 361-406. doi: 10.1007/s12215-009-0030-2. Google Scholar N. Minorsky, Nonlinear Oscillations, D. Van Nostrand Co., Inc., Princeton, N.J.-Toronto-London-New York, 1962. Google Scholar D. A. Neumann, Classification of continuous flows on $2$-manifolds, Proc. Amer. Math. Soc., 48 (1975), 73-81. doi: 10.1090/S0002-9939-1975-0356138-6. Google Scholar H. Poincaré, Mémoire sur les courbes définies par une équation différentielle (i), Journal de Mathématiques Pures et Appliquées, 7 (1881), 375–422. Google Scholar B. Qiu and H. Liang, Classification of global phase portrait of planar quintic quasi-homogeneous coprime polynomial systems, Qual. Theory Dyn. Syst., 16 (2017), 417-451. doi: 10.1007/s12346-016-0199-7. Google Scholar J. Reyn, Phase Portraits of Planar Quadratic Systems, vol. 583, Springer, New York, 2007. Google Scholar D. Schlomiuk and X. Zhang, Quadratic differential systems with complex conjugate invariant lines meeting at a finite point, J. Differential Equations, 265 (2018), 3650-3684. doi: 10.1016/j.jde.2018.05.014. Google Scholar Y. Tian and Y. Zhao, Global phase portraits and bifurcation diagrams for Hamiltonian systems of linear plus quartic homogeneous polynomials symmetric with respect to the $y$-axis, Nonlinear Anal., 192 (2020), 111658, 27pp. doi: 10.1016/j.na.2019.111658. Google Scholar X. 
Yang, Global phase-portraits of plane homogeneous polynomial vector fields and stability of the origin, Systems Sci. Math. Sci., 10 (1997), 33-40. Google Scholar
Y. Ye et al., Theory of Limit Cycles, vol. 66 of Transl. Math. Monographs, Amer. Math. Soc., Providence, RI, 1986. Google Scholar

Figure 1. Phase portraits of system (3)
Figure 2. Bifurcation diagram of system $\left({{\bf{I}}{\bf{.5}}} \right)$
Figure 3. The local phase portrait of the system (8) at the origin
Figure 4. Local phase portrait of system $\left({{\bf{I}}{\bf{.1}}} \right)$ on the Poincaré disk
Figure 5. The local phase portrait of system (13) at origin for $ c = 0 $
Figure 6. The local phase portrait of the system (14) at the origin
Figure 7. All the local phase portraits of system $\left({{\bf{I}}{\bf{.2}}} \right)$ on the Poincaré disk
Figure 8. The local phase portrait of the system (15) at the origin for $ a\leq0 $
Figure 10. The local phase portraits of system (17) at $ p_1^{\pm} $ for $ a<0 $
Figure 11. The local phase portraits of system $\left({{\bf{I}}{\bf{.4}}} \right)$ on the Poincaré disk
Figure 12. The local phase portraits of system $\left({{\bf{I}}{\bf{.5}}} \right)$ with $ \Delta>0 $ on the Poincaré disk
Table 1.
Algebraic classification of system (6)

$b$ | $\omega(z) = 0$ | Conditions | Roots of $\omega(z)$ | Linear change | Normal forms
$b = 0$ | Linear equation | $a = 0, c\neq0$ | No roots | $\left(x, y\right)\mapsto\left(c^{-1/3}x, c^{-1/3}y\right)$ | $({\bf{I.1}})$
$b = 0$ | Linear equation | $a\neq0, c\in\mathbb{R}$ | One root | $\left(x, y\right)\mapsto\left(a^{-1/3}x, a^{-1/3}y\right)$ | $({\bf{I.2}})$
$b\neq0$ | Quadratic equation | $a\in\mathbb{R}, c = 0$ | $0$ is a root | $\left(x, y\right)\mapsto\left(b^{-1/3}x, b^{-1/3}y\right)$ | $({\bf{I.3}})$
$b\neq0$ | Quadratic equation | $a\in\mathbb{R}, c\neq0, \Delta = 0$ | Multiple root | $\left(x, y\right)\mapsto\left(b^{-1/3}x, b^{-1/3}y\right)$ | $({\bf{I.4}})$
$b\neq0$ | Quadratic equation | $a\in\mathbb{R}, c\neq0, \Delta\neq0$ | Two simple roots | $\left(x, y\right)\mapsto\left(b^{-1/3}x, b^{-1/3}y\right)$ | $({\bf{I.5}})$
Statistical Testing
CollinearityTest - Test for the Presence of Multi-collinearity
Jacquie Nesbitt

Returns the p-value of the multi-collinearity test (i.e. whether one variable can be linearly predicted from the others with a non-trivial degree of accuracy).

CollinearityTest(X, Mask, Method, Column Index)

X is the independent variables data matrix, such that each column represents one variable.
Mask is the boolean array to select a subset of the input variables in X. If missing, all variables in X are included.
Method is the statistic to compute (1 = Condition Number (default), 2 = VIF, 3 = Determinant, 4 = Eigenvalues):
1 - Condition Number (Kappa)
2 - Variance Inflation Factor (VIF)
3 - Determinant
4 - Eigenvalues
Column Index is a switch to designate the explanatory variable to examine (not required for the condition number).

The sample data may include missing values. Each column in the input matrix corresponds to a separate variable. Each row in the input matrix corresponds to an observation. Observations (i.e. rows) with missing values are removed.

In the variance inflation factor (VIF) method, a series of regression models is constructed, where one variable is the dependent variable against the remaining predictors:
$$\textrm{Tolerance}_i = 1-R_i^2$$
$$\textrm{VIF}_i =\frac{1}{\textrm{Tolerance}_i} = \frac{1}{1-R_i^2}$$
where $R_i^2$ is the coefficient of determination of a regression of explanator $i$ on all the other explanators. A tolerance of less than 0.20 or 0.10 and/or a VIF of 5 or 10 and above indicates a multicollinearity problem.

The condition number ($\kappa$) test is a standard measure of ill-conditioning in a matrix; it indicates that the inversion of the matrix is numerically unstable with finite-precision numbers (standard computer floats and doubles). For the design matrix
$$ X = \begin{bmatrix} 1 & X_{11} & \cdots & X_{k1} \\ \vdots & \vdots & & \vdots \\ 1 & X_{1N} & \cdots & X_{kN} \end{bmatrix} $$
the condition number is
$$\kappa = \sqrt{\frac{\lambda_{max}}{\lambda_{min}}}$$
where $\lambda_{max}$ is the maximum eigenvalue.
$\lambda_{min}$ is the minimum eigenvalue. As a rule of thumb, a condition number ($\kappa$) greater than or equal to 30 indicates a severe multi-collinearity problem.

The CollinearityTest function is available starting with version 1.60 APACHE.
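Both statistics are easy to reproduce outside the library. The sketch below (ours, not the library's internal implementation; the function names are illustrative) computes the VIFs via auxiliary least-squares regressions and the condition number via the eigenvalues of the Gram matrix of the design matrix $[1\;X]$:

```python
import numpy as np

def vif(X):
    """VIF_i = 1 / (1 - R_i^2), where R_i^2 comes from regressing column i
    of X on the remaining columns (plus an intercept)."""
    n, k = X.shape
    out = []
    for i in range(k):
        y = X[:, i]
        others = np.column_stack([np.ones(n), np.delete(X, i, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

def condition_number(X):
    """kappa = sqrt(lambda_max / lambda_min) of [1 X]^T [1 X]."""
    A = np.column_stack([np.ones(len(X)), X])
    lam = np.linalg.eigvalsh(A.T @ A)
    return np.sqrt(lam.max() / lam.min())

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + 0.01 * rng.normal(size=200)   # nearly collinear with x1
X = np.column_stack([x1, x2, x3])
print(vif(X))               # VIFs for x1 and x3 are large, x2 stays near 1
print(condition_number(X))  # well above the rule-of-thumb threshold of 30
```

This mirrors the formulas above; the library's actual routine additionally handles the Mask argument and missing-value removal.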
\begin{definition}[Definition:Gram/Symbol] The symbol for the '''gram''' is $\mathrm g$. \end{definition}
\begin{document} \title[Minimal mass blow-up for mBO]{Construction of a minimal mass blow up solution of the modified Benjamin-Ono equation} \author[Y. Martel]{Yvan Martel} \address{CMLS, \'Ecole Polytechnique, CNRS, Universit\'e Paris-Saclay, 91128 Palaiseau, France} \email{[email protected]} \author[D. Pilod]{Didier Pilod} \address{Instituto de Matem\'atica, Universidade Federal do Rio de Janeiro, Caixa Postal 68530, CEP: 21945-970, Rio de Janeiro, RJ, Brazil} \email{[email protected]} \begin{abstract} We construct a minimal mass blow up solution of the modified Benjamin-Ono equation (mBO) \[ u_{t}+(u^3-D^1 u)_{x}=0, \eqno{\rm (mBO)} \] which is a standard mass critical dispersive model. Let $Q\in H^{\frac 12}$, $Q>0$, be the unique ground state solution of $D^1 Q +Q=Q^3$, constructed using variational arguments by Weinstein (Comm. PDE, 12 (1987), J. Diff. Eq., 69 (1987)) and Albert, Bona and Saut (Proc. Royal London Soc., 453 (1997)), and whose uniqueness was recently proved by Frank and Lenzmann (Acta Math., 210 (2013)). We show the existence of a solution $S$ of (mBO) satisfying $\|S \|_{L^2}=\|Q\|_{L^2}$ and \[ S(t)-\frac1{\lambda^{\frac12}(t)} Q\left(\frac{\cdot - x(t)}{\lambda(t)}\right)\to 0\quad \mbox{ in }\ H^{\frac 12}(\mathbb R) \mbox{ as }\ t\downarrow 0, \] where \[ \lambda(t)\sim t,\quad x(t) \sim -|\ln t| \quad \hbox{and}\quad \|S(t)\|_{\dot H^{\frac 12}} \sim t^{-\frac 12}\|Q\|_{\dot H^{\frac 12}} \quad \hbox{as}\ t\downarrow 0. \] This existence result is analogous to the one obtained by Martel, Merle and Rapha\"el (J. Eur. Math. Soc., 17 (2015)) for the mass critical generalized Korteweg-de Vries equation. However, in contrast with the (gKdV) equation, for which the blow up problem is now well-understood in a neighborhood of the ground state, $S$ is the first example of blow up solution for (mBO). 
The proof involves the construction of a blow up profile, energy estimates as well as refined localization arguments, developed in the context of Benjamin-Ono type equations by Kenig, Martel and Robbiano (Ann. Inst. H. Poincaré, Anal. Non Lin., 28 (2011)). Due to the lack of information on the (mBO) flow around the ground state, the energy estimates have to be considerably sharpened in the present paper. \end{abstract} \maketitle \section{Introduction} \subsection{Main result} We consider the modified Benjamin-Ono equation (mBO) \begin{equation} \label{mBO} u_t+\big(u^3-\mathcal{H}u_x\big)_x=0 \, ,\quad t \in \mathbb R\,,\ x \in \mathbb R\,, \end{equation} where $u(t,x)$ is a real-valued function and $\mathcal{H}$ denotes the Hilbert transform, defined by \begin{displaymath} \mathcal{H}f(x)= \frac1\pi \, \text{p.v.}\int_{\mathbb R}\frac{f(y)}{x-y}dy \, . \end{displaymath} Observe that with this convention $\mathcal{H}\partial_x=D^1$, where $D^\alpha$ is the Riesz potential of order $-\alpha$, defined via Fourier transform by $(D^\alpha f)^{\wedge}(\xi)=|\xi|^\alpha\widehat{f}(\xi)$, for any $\alpha \in \mathbb R$. We see equation (mBO) as a natural generalization of the classical quadratic Benjamin-Ono equation \begin{equation} \label{BO} u_t+\big(u^2-\mathcal{H}u_x\big)_x=0 \, ,\quad t \in \mathbb R\,,\ x \in \mathbb R\,, \end{equation} introduced by Benjamin \cite{Ben} and Ono~\cite{On} and intensively studied since then, both mathematically and numerically, as a model for one-dimensional waves in deep water. The cubic nonlinearity for the Benjamin-Ono model is also relevant as a long wave model, see e.g. Abdelouhab, Bona, Felland and~Saut \cite{AbBoFeSa} and Bona and Kalisch~\cite{BoKa}. At first sight, the relation between \eqref{mBO} and \eqref{BO} seems similar to the one between the (cubic) modified KdV equation and the Korteweg-de Vries equation, but \eqref{mBO} is not completely integrable and no algebraic relation relates these two models. 
Another difference is that with dispersion of the Benjamin-Ono type, a cubic nonlinearity leads to unstable waves. More generally, nonlinear one-dimensional models with weak dispersion seem of great physical interest, see e.g. Klein and Saut~\cite{KlSa} and Linares, Pilod and Saut~\cite{LiPiSa}. Equation (mBO) is a typical model with interesting mathematical properties, which can be seen as an intermediate step between the well-studied generalized (KdV) equations and other relevant models with weak dispersion. The following quantities are formally invariant by the flow associated to (mBO) \begin{equation} \label{mass} M(u)=\frac12 \int_{\mathbb R} u^2dx \quad \hbox{and}\quad E(u)=\frac12 \int_{\mathbb R} |D^{\frac12}u|^2dx-\frac14\int_{\mathbb R}u^4dx \, . \end{equation} Note the scaling symmetry: if $u(t,x)$ is a solution then $u_{\lambda}(t,x)$ defined by \begin{equation} \label{ulambda} u_{\lambda}(t,x)=\lambda^{-\frac12}u(\lambda^{-2} t, \lambda^{-1} x) \end{equation} is also a solution. Since this transformation leaves the $L^2$ norm invariant, the problem is \emph{mass critical}. Recall that the Cauchy problem for \eqref{mBO} is locally well-posed in the energy space $H^{\frac12}(\mathbb R)$ by the work of Kenig and Takaoka \cite{KeTa}: for any $u_0 \in H^{\frac12}(\mathbb R)$, there exists a unique (in a certain sense) maximal solution of \eqref{mBO} in $C([0,T^\star):H^{\frac12}(\mathbb R))$ satisfying $u(0,\cdot)=u_0$. Moreover, the flow map data-solution is locally Lipschitz. (See also Tao \cite{Tao}, respectively Molinet and Ribaud \cite{MoRi,MoRi1} and Kenig, Ponce and Vega \cite{KPV1}, for previous related works on the Benjamin-Ono equation, respectively the modified Benjamin-Ono equation.) For such solutions, the quantities $M(u(t))$ and $E(u(t))$ are conserved.
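For the reader's convenience, we recall the elementary computation behind the mass criticality of the scaling \eqref{ulambda} (it is not needed in the sequel): by the change of variables $y=\lambda^{-1}x$,
\[
\int_{\mathbb R} u_{\lambda}^2(t,x)\, dx = \lambda^{-1}\int_{\mathbb R} u^2(\lambda^{-2}t,\lambda^{-1}x)\, dx = \int_{\mathbb R} u^2(\lambda^{-2}t,y)\, dy,
\]
so that \eqref{ulambda} indeed leaves the $L^2$ norm invariant, while the critical norm rescales as $\|D^{\frac12}u_{\lambda}(t)\|_{L^2}=\lambda^{-\frac12}\|D^{\frac12}u(\lambda^{-2}t)\|_{L^2}$.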
Moreover, if $T^\star<+\infty$ then $\lim_{t \uparrow T^\star} \| D^{\frac12}u(t)\|_{L^2}=+\infty$ and more precisely, by a scaling argument, $\|D^{\frac12}u(t)\|_{L^2}\gtrsim (T^\star-t)^{-\frac 14}$, for $t<T^\star$ close to $T^\star$. We refer to Sect.~\ref{S.2.4} for more details. From works of Weinstein \cite{We1, We2} and Albert, Bona and Saut \cite{AlBoSa}, there exists an even \textit{ground state} solution $ Q\in H^{\frac 12}(\mathbb R),$ $Q>0$ of the stationary problem \begin{equation} \label{E} D^1Q+Q-Q^3=0 , \end{equation} related to the best constant in the following Gagliardo-Nirenberg inequality \begin{equation}\label{gn} \hbox{for all $v\in H^{\frac 12}(\mathbb R)$,}\quad \int v^4 \leq 2 \int |D^{\frac 12} v|^2 \left(\frac {\int v^2}{\int Q^2}\right). \end{equation} Frank and Lenzmann \cite{FrLe} proved very general uniqueness results of nonlinear ground states for fractional Laplacians in $\mathbb R$ that include the model \eqref{E}. As a consequence, $Q$ is the \emph{unique} ground state solution of \eqref{E} up to the symmetries of the equation. Their work also includes a decisive description of the spectrum of the linearized operator around $Q$. We refer to Sect.~\ref{SW} for more details. Following a classical observation due to Weinstein \cite{We82}, the conservation laws \eqref{mass}, the inequality \eqref{gn} and the Cauchy theory \cite{KeTa} imply that any initial data $u_{0}\in H^{\frac 12}(\mathbb R)$ with subcritical mass, i.e. satisfying $\|u_0\|_{L^2}<\|Q\|_{L^2}$ generates a \emph{global and bounded} solution in $H^{\frac 12}$. In this paper, we show that this condition is sharp by constructing a \emph{minimal mass blow up} solution, i.e. a solution of (mBO) which blows up in finite time in $H^{\frac 12}$ with the threshold mass $\|Q\|_{L^2}$. Actually, this solution is the first example of blow up solution for (mBO). 
\begin{theorem}\label{th1} There exist $T_{0}>0$ and a solution $S\in C((0,T_{0}]\, :\, H^{\frac 12}(\mathbb R))$ to~{\rm (mBO)} such that \begin{equation}\label{mm} \|S(t)\|_{L^2}=\|Q\|_{L^2}\quad \mbox{for all $t\in (0,T_{0}]$,} \end{equation} and \begin{equation}\label{d:S} S(t)-\frac1{\lambda^{\frac12}(t)} Q\left(\frac{\cdot - x(t)}{\lambda(t)}\right)\to 0\quad \mbox{in $H^{\frac 12}(\mathbb R)$ as $t\downarrow 0$}, \end{equation} where the functions $t\mapsto\lambda(t)$ and $t\mapsto x(t)$ satisfy \begin{equation}\label{th:param} \lambda(t)\sim t,\quad x(t) \sim -|\ln t| \quad \hbox{as}\ t\downarrow 0. \end{equation} In particular, \begin{equation}\label{th:rate} \|D^{\frac12} S(t)\|_{L^2} \sim {t^{-\frac 12}} {\|D^{\frac12} Q\|_{L^2}} \quad \mbox{as $t\downarrow 0$.} \end{equation} \end{theorem} Note that Theorem~\ref{th1} implies readily the orbital instability (in $L^2(\mathbb R)$ and thus also in $H^{\frac 12}(\mathbb R)$) of the solution $u(t,x)=Q(x-t)$, which also seems to be new for (mBO). Indeed, for any $n>1/T_{0}$, let $t_{n}=1/n$, $T_{n}=(T_{0}-t_{n}) \lambda(t_{n})^{-2}>0$, and \[ u_{0,n}(x) = \lambda^{\frac 12} (t_{n}) S(t_n,\lambda(t_{n}) x + x(t_{n}))\quad \hbox{so that}\quad \lim_{n\to +\infty}\|u_{0,n}-Q\|_{H^{\frac 12}}=0. \] Then, the corresponding solution $u_{n}$ of \eqref{mBO} writes, for $t\in [0,T_{n}]$, \[ u_{n}(t,x) = \lambda^{\frac 12} (t_{n}) S(t_n+\lambda(t_{n})^2 t,\lambda(t_{n}) x + x(t_{n})) \] and thus, \begin{multline} \inf\{ \|u_{n}(T_{n}) - \lambda_{1}^{\frac 12} Q(\lambda_{1} \cdot + x_{1})\|_{L^2}\, : \, {x_{1}\in \mathbb R,\lambda_{1}>0}\} \\ = \inf\{ \|S(T_{0})- \lambda_{1}^{\frac 12} Q(\lambda_{1} \cdot + x_{1})\|_{L^2}\, : \, {x_{1}\in \mathbb R,\lambda_{1}>0}\} =c_{0} > 0. \end{multline} It is also clear that the blow up behavior displayed by the solution $S$ is unstable since for any initial data with mass less that $\|Q\|_{L^2}$, the corresponding solution is global and bounded. 
From \eqref{th:rate}, we see that $S(t)$ blows up as $t\downarrow 0$ in $H^{\frac 12}(\mathbb R)$ twice as fast as the lower bound given by the Cauchy theory.

\subsection{Comments and references}

Historically, blow up results for nonlinear dispersive PDE were first obtained by global obstruction arguments, such as the Virial identity for the nonlinear Schr\"odinger equations (NLS) and related models. More rarely, explicit blow up solutions (the most famous one for (NLS) is reproduced in \eqref{Snls}) would give a description of some special forms of blow up. In the 80's and 90's, variational arguments and a refined understanding of the linearized operator around the ground state led to original blow up constructions and classification results related to rescaled solitary waves, see in particular Weinstein \cite{We,We86}, Merle and Tsutsumi~\cite{MeTs}, Merle \cite{Me,Meduke} and Bourgain-Wang \cite{BW}. Numerical experiments were also used to try to predict blow up rates. We refer to Cazenave \cite{Ca03} and references therein. Such directions were more recently systematically studied, in particular for the mass critical generalized KdV equation, for mass critical NLS equations and several other related models. A few sample results will be reviewed below (mainly from \cite{MaMejmpa,MaMe,MaMeRa1,Mejams,MeRagafa,MeRainvent,Pe}). It became clear that a refined study of the flow of the evolution equation around the ground state family was the key to the understanding of the blow up dynamics with one bubble, both for \emph{stable} and \emph{unstable} forms of blow up. Theorem~\ref{th1} above belongs to this category of results and methods, providing a quite explicit blow up solution. In a situation where very little is known about the flow around the ground state, considering a ``doubly critical'' situation (both critical exponent and critical mass) is a way to enjoy a lot of structure and rigidity, an idea which goes back to \cite{We86} and \cite{Meduke}.
Now, we give more details on previous related results, starting with the closest models. For the mass critical generalized Korteweg-de Vries equation (gKdV) \begin{equation}\label{kdv} u_t + (u_{xx} + u^5)_x =0, \quad x\in \mathbb R,\ t\in \mathbb R, \end{equation} (the energy space for \eqref{kdv} is $H^1(\mathbb R)$), an existence result similar to Theorem~\ref{th1} was proved by Martel, Merle and Rapha\"el \cite{MaMeRa2}, and then sharpened by Combet and Martel \cite{CoMa}. More precisely, let $Q_{\rm KdV}\in H^1(\mathbb R)$, $Q_{\rm KdV}>0$ be the ground state for \eqref{kdv}, i.e. the unique even solution of $Q_{\rm KdV}''+Q_{\rm KdV}^5=Q_{\rm KdV}$. It follows from \cite{MaMeRa2} and \cite{CoMa} that there exists a solution $S_{\rm KdV}$ on $(0,+\infty)$ such that $\|S_{\rm KdV}(t)\|_{L^2}=\|Q_{\rm KdV}\|_{L^2}$ and \begin{equation}\label{Skdv}\begin{aligned} & S_{\rm KdV}(t) - \frac 1{t^{\frac 12}} Q_{\rm KdV}\left(\frac{\cdot+\frac1t}t+c_0\right) \to 0 \quad \mbox{in $H^1(\mathbb R)$},\\ & \|(S_{\rm KdV})_{x}(t)\|_{L^2}\sim t^{-1} \|Q'\|_{L^2}, \quad \mbox{ as $t\downarrow 0$}, \end{aligned}\end{equation} for some constant $c_{0}$. We see that the singularity has the form of a blow up bubble with the same scaling $\lambda(t)\sim t$, as in \eqref{d:S}. A main qualitative difference is the speed of the bubble as $t\downarrow 0$, since for (gKdV), $x_{\rm KdV}(t)\sim -\frac 1t$, whereas $x(t)\sim -|\ln t| $ for (mBO). In this respect, (mBO) seems to be a threshold case in the family of critical equations \eqref{gBO}. For (mBO), the information obtained in the present paper on the parameters $\lambda(t)$ and $x(t)$ as $t\downarrow 0$ is not sufficient to replace them by their explicit asymptotics in the convergence result \eqref{d:S}; see Remark~\ref{rk.precise} for more details. The result \eqref{Skdv} for (gKdV) is thus more precise. 
In fact, for (gKdV), the minimal mass blow up is quite well understood, at least close to the blow up time: in addition to \eqref{Skdv}, sharp asymptotics, both in time (as $t\downarrow 0$) and in space (as $x\to \pm\infty$) were derived in \cite{CoMa}, for any level of derivative of $S_{\rm KdV}$. Importantly, $S_{\rm KdV}$ is also known to be global for $t>0$ and to be the \emph{unique} minimal mass solution of (gKdV), up to the symmetries of the equation (scaling, translations and sign change), see \cite{MaMeRa2}. For (mBO), such properties are open problems. Recall that the existence and uniqueness of the minimal mass solution $S_{\rm KdV}$ is only a part of the results obtained in \cite{MaMejmpa,MaMe,MaMeNaRa,MaMeRa1,MaMeRa2,MaMeRa3,Mejams} on the description of the blow up phenomenon around the ground state (or soliton) for (gKdV) and, more generally, on the classification of the long time behavior of solutions close to the soliton. Those works focus on the case of \emph{slightly supercritical mass} $H^1$ initial data \begin{equation} \label{in:1} \|Q_{\rm KdV}\|_{L^2} \leq \|u_0\|_{L^2} < (1+\delta_0) \|Q_{\rm KdV}\|_{L^2} \quad \mbox{where }\ 0<\delta_0\ll 1. \end{equation} In this context, the main results can be summarized as follows: (1) The ground state $Q_{\rm KdV}$ is a universal blow up profile; (2) General $H^1$ initial data with negative energy lead to blow up in finite or infinite time; (3) For initial data close to $Q_{\rm KdV}$ in a topology stronger than $H^1$ (based on $L^2$ weighted norm), only three behaviors are possible: (Blowup) with speed $(T^\star-t)^{-1}$, (Soliton) and (Exit). It is also proved in~\cite{MaMeNaRa} that the (Soliton) case (solutions that converge in a local sense to a bounded soliton) corresponds to a codimension one manifold of initial data which separates the (Blowup) and (Exit) cases. The (Exit) case refers to solutions that eventually leave any small neighborhood of the soliton. 
It is expected (but still an open problem) that such solutions behave as a solution of the linear problem as $t\to +\infty$. Finally, blow up solutions with various blow up rates (in finite or infinite time) are constructed in~\cite{MaMeRa3} for initial data arbitrarily close to~$Q$ in the energy space. Concerning the critical and supercritical gKdV equations, we also refer to Klein and Peter \cite{KlPe} and references therein for detailed numerical studies. In contrast, we recall that Theorem \ref{th1} is the first blow up result for the (mBO) equation. The difficulties in extending techniques and results from (gKdV) to (mBO) are multiple. First, the slow decay of the soliton $Q(x)$ as $x\to \infty$ (see Proposition~\ref{pr:decay}) creates serious difficulties when trying to construct a relevant blow up profile (see Remark~\ref{rk:profile}), and, more technically, when estimating error terms far from the bubble. Second, an important aspect of the analysis in KdV-type equations consists in considering localized versions of basic quantities, such as the energy and the mass. Standard commutator estimates are not enough and suitable localization arguments were developed in this context by Kenig and Martel~\cite{KeMa} and Kenig, Martel and Robbiano~\cite{KeMaRo}. They are decisively used in the present paper (see Sect.~\ref{s.2.5}), but being by nature much more limited than the corresponding ones for the (gKdV) equation, they create error terms that are difficult to handle. Finally, a decisive point in studying the flow of the critical (gKdV) equation around the soliton is a suitable Virial-type identity, roughly speaking a Liapounov functional on the linearized equation around $Q_{\rm KdV}$. It was first introduced by Martel and Merle~\cite{MaMejmpa} and used intensively in all subsequent works on (gKdV) mentioned above. Such a Virial identity is not available for the linearization of (mBO) around the soliton.
In the present paper, to get around the lack of such a Liapounov functional, we introduce a new refined algebra related to the energy functional, extending the approach of Rapha\"el and Szeftel in \cite{RaSz} (see below for more comments). Such an approach happens to be successful for the construction of the minimal mass solution, which corresponds to a precise, rigid regime, but it should not be sufficient to study extensively the blow up around the soliton, in particular the stable blow up. The results mentioned above on (gKdV) and Theorem~\ref{th1} are, to our knowledge, the only available rigorous results on blow up for KdV-type or BO-type equations. Nevertheless, the history of minimal mass blow up solutions for $L^2$ critical nonlinear dispersive equations is much longer, especially for NLS-type equations. It started with the early derivation of the \emph{explicit minimal mass blow up solution} for the mass critical nonlinear Schr\"odinger equation (NLS) in $\mathbb R^d$, $d\geq 1$, \[ i\partial_t u + \Delta u + |u|^{\frac 4d} u = 0, \quad x\in \mathbb R^d, \] using the so-called pseudo-conformal symmetry. Let, for $t>0$, \begin{equation}\label{Snls}\begin{aligned} &S_{\rm NLS}(t,x)=\frac1 {t^{\frac d2}} e^{i\frac{|x|^2}{4t}- \frac i t} Q_\mathrm{NLS}\left(\frac x t\right), \\ & \|S_{\rm NLS}(t)\|_{L^2}= \|Q_{\rm NLS}\|_{L^2}, \quad \|\nabla S_{\rm NLS}(t)\|_{L^2}\mathop{\sim}_{t\downarrow 0}\frac C{t}, \end{aligned}\end{equation} where $Q_{\rm NLS}>0$ is the unique ground state of (NLS). Then $S_{\rm NLS}$ is a solution of (NLS); see Weinstein~\cite{We82}, Cazenave~\cite{Ca03} for references. Also using the pseudo-conformal symmetry, Merle~\cite{Meduke} proved that $S_\mathrm{NLS}$ is the unique (up to the symmetries of the equation) minimal mass blow up solution in the energy space (see also Banica~\cite{Ba} and Hmidi and Keraani~\cite{HmKe}).
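Since \eqref{Snls} is fully explicit, it can be checked mechanically. The sketch below (an illustration, not part of the argument) takes $d=1$, where the quintic ground state has the closed form $Q_{\rm NLS}(x)=3^{1/4}\operatorname{sech}^{1/2}(2x)$, and verifies both the elliptic equation $Q''-Q+Q^5=0$ and the fact that the rescaled bubble, with the phase sign $e^{i(|x|^2/(4t)-1/t)}$ chosen so as to match the $i\partial_tu+\Delta u+|u|^{4}u=0$ convention, solves the evolution equation:

```python
import sympy as sp

z, t, x = sp.symbols('z t x', positive=True)

# 1D quintic ground state: Q'' - Q + Q^5 = 0 (explicit formula for d = 1)
Q = lambda s: 3**sp.Rational(1, 4) * sp.sech(2*s)**sp.Rational(1, 2)
ode = sp.diff(Q(z), z, 2) - Q(z) + Q(z)**5

# explicit minimal mass solution; since Q > 0, |S|^4 = Q(x/t)^4 / t^2
S = t**sp.Rational(-1, 2) * sp.exp(sp.I*(x**2/(4*t) - 1/t)) * Q(x/t)
pde = sp.I*sp.diff(S, t) + sp.diff(S, x, 2) + Q(x/t)**4/t**2 * S

# both residuals vanish identically; evaluate numerically at sample points
for z0 in (0.3, 1.1):
    assert abs(complex(ode.subs(z, z0).evalf())) < 1e-10
for t0, x0 in ((0.7, 0.4), (1.3, 2.1)):
    assert abs(complex(pde.subs({t: t0, x: x0}).evalf())) < 1e-10
```

The same computation, with $Q(x/t)$ frozen in the nonlinearity, works in any dimension once a ground state is given; only the explicit formula for $Q_{\rm NLS}$ is specific to $d=1$.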
We refer to Merle and Rapha\"el~\cite{MeRagafa,MeRainvent} (and references therein) for more recent results, notably on the stable ``log-log'' blow up for the (NLS) equation. For the inhomogeneous mass critical (NLS) in dimension~2, \[ i \partial_t u+\Delta u +k(x)|u|^2 u=0,\quad k(0)=1,\quad \nabla k(0)=0, \] while Merle~\cite{MeIHP} derived sufficient conditions on the function $k(x)>0$ to ensure the \emph{nonexistence} of minimal elements, Rapha\"el and Szeftel~\cite{RaSz} introduced a new approach to obtain existence and uniqueness of a minimal blow up solution under a necessary and sufficient condition on $k(x)$, in the absence of pseudo-conformal transformation. For other constructions of minimal mass solutions for NLS-type equations, by various methods, we refer to~\cite{BCD,BW,CG,KLR,LeMaRa}. In particular, Krieger, Lenzmann and Rapha\"el \cite{KLR} addressed the case of the half-wave equation in one space dimension, which also involves the nonlocal operator $D^1$, and requires the use of commutator estimates. However, as pointed out in \cite{MaMeRa2,CoMa}, the minimal mass blow up for KdV-type equations is specific, in some sense less compact than for NLS or wave-type equations, and requires the use of local norms, instead of global norms. A main difficulty in the present paper is to combine such local norms and nonlocal operators. A hint of the specificity of KdV-type blow up is given by the asymptotics found in \cite{CoMa}, showing the existence of a fixed tail for $S_{\rm KdV}(t)$. See also Remark~\ref{rk:BS} for more details. We expect that the strategy of this paper also applies to the following family of mass critical dispersion generalized Benjamin-Ono equations, considered e.g. by Angulo, Bona, Linares and Scialom \cite{AnBoLiSc} and by Kenig, Martel and Robbiano \cite{KeMaRo}, \begin{equation}\label{gBO} u_t + ( |u|^{2\alpha} u - D^{\alpha} u )_x = 0, \quad t\in \mathbb{R},\ x\in \mathbb{R}, \end{equation} for $\alpha\in (1,2)$.
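For the reader's convenience, let us recall why the whole family \eqref{gBO} is mass ($L^2$) critical; this is a standard scaling computation, included here for completeness. If $u$ solves \eqref{gBO} and $u_\lambda(t,x):=\lambda^{-\frac12}u(\lambda^{-(1+\alpha)}t,\lambda^{-1}x)$, then each term picks up the same factor:

```latex
% each term of \eqref{gBO} scales with the common factor \lambda^{-(\alpha+\frac32)}
\begin{aligned}
\partial_t u_\lambda &= \lambda^{-(\alpha+\frac32)}\, u_t\big(\lambda^{-(1+\alpha)}t,\lambda^{-1}x\big),\\
\partial_x D^{\alpha} u_\lambda &= \lambda^{-(\alpha+\frac32)}\, \big(D^{\alpha}u\big)_x\big(\lambda^{-(1+\alpha)}t,\lambda^{-1}x\big),\\
\partial_x\big(|u_\lambda|^{2\alpha}u_\lambda\big)
  &= \lambda^{-\frac{2\alpha+1}{2}-1}\, \big(|u|^{2\alpha}u\big)_x\big(\lambda^{-(1+\alpha)}t,\lambda^{-1}x\big),
\qquad \tfrac{2\alpha+1}{2}+1=\alpha+\tfrac32,
\end{aligned}
```

so $u_\lambda$ is again a solution, while $\|u_\lambda(t)\|_{L^2}=\|u(\lambda^{-(1+\alpha)}t)\|_{L^2}$: the $L^2$ norm is the scaling-invariant quantity for every $\alpha$ in the family.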
Recall that \cite{KeMaRo} extends perturbatively the analysis of \cite{MaMejmpa,Mejams} to the case where the model \eqref{gBO} is close to (gKdV), i.e. for $\alpha<2$ close to $2$. In particular, blow up in finite or infinite time for negative energy solutions is obtained in this context. The main obstruction to extending such results to $\alpha \in [1,2)$ is the absence of a suitable Virial-type identity, as mentioned before. In the mass subcritical situation, recall that the asymptotic stability of the soliton of the Benjamin-Ono equation \eqref{BO} was proved in \cite{KeMa}, extending previous results on (gKdV) (see \cite{MaMesub} and references therein), by using a specific algebra related to the explicit form of the soliton. For $\alpha \in [\frac 12,1)$, the blow up problem for \eqref{gBO} is also relevant and important in physics. In particular, the dispersion in the case $\alpha=\frac12$, for which the nonlinearity is quadratic, is somewhat reminiscent of the linear dispersion of finite depth water waves with surface tension. The corresponding Whitham equation with surface tension reads $$u_t + ( u^2 - w(D)u )_x = 0, \quad t\in \mathbb{R},\ x\in \mathbb{R}, $$ where $w(D)$ is the Fourier multiplier of symbol $w(\xi)=\left( \frac{\tanh (|\xi|)}{|\xi|} \right)^{\frac12}\left(1+\tau\xi^2\right)^{\frac12}$ and $\tau$ is a positive parameter related to the surface tension. Note that for high frequencies $w(\xi) \sim |\xi|^{\frac12}$, which corresponds to the dispersion of \eqref{gBO} in the case $\alpha=\frac12$. We refer to Linares, Pilod and~Saut \cite{LiPiSa} for a detailed discussion and some progress on the local theory for the Cauchy problem and to Klein and Saut~\cite{KlSa} for numerical simulations. Obviously, weaker dispersion can only complicate the problem and it seems quite challenging to address the full range $\alpha \in [\frac 12,1)$.
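As a quick numerical illustration (not taken from the text; the sample frequencies are arbitrary), the ratio $w(\xi)/|\xi|^{1/2}$ indeed approaches $\sqrt{\tau}$ at high frequencies, confirming the claimed $|\xi|^{1/2}$ behavior:

```python
import math

def w(xi, tau=1.0):
    """Whitham symbol with surface tension parameter tau > 0."""
    return math.sqrt(math.tanh(abs(xi)) / abs(xi)) * math.sqrt(1.0 + tau * xi**2)

# w(xi)/|xi|^{1/2} -> sqrt(tau) as |xi| -> infinity (here tau = 1, limit = 1)
ratios = [w(xi) / math.sqrt(xi) for xi in (10.0, 100.0, 1000.0)]
assert ratios[0] > ratios[1] > ratios[2] > 1.0   # monotone approach from above
assert abs(ratios[-1] - 1.0) < 1e-3
```

The convergence from above reflects the factor $\tanh(|\xi|)(1+|\xi|^{-2})>1$ for large $|\xi|$; at low frequencies, by contrast, $w(\xi)\to 1$, which is the water-wave (rather than dispersionless) regime.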
\subsection{Notation} For any positive $a$ and $b$, the notation $a \lesssim b$ means that $a \le c \,b$ holds for a universal constant $c>0$. Let $(\cdot,\cdot)$ denote the scalar product on $L^2(\mathbb R)$, \begin{displaymath} (f,g)=\int_{\mathbb R} f(x) g(x) dx, \end{displaymath} for $f, \, g$ two real-valued functions in $L^2(\mathbb R)$. For simplicity of notation, we often write $\int$ for $\int_{\mathbb R}$ and omit $dx$. If $I$ is an interval of $\mathbb R$, then ${\bf 1}_I$ denotes the characteristic function of $I$. \noindent Let $\chi \in C^{\infty}(\mathbb R)$ be a cut-off function such that \begin{equation}\label{def:chi} 0 \le \chi \le 1, \quad \chi'\ge 0 \ \text{on} \ \mathbb R, \quad \chi_{|_{(-\infty,-2)}} \equiv 0 \quad \text{and} \quad \chi_{|_{(-1,+\infty)}} \equiv 1 \, . \end{equation} Let $\Lambda$ denote the generator of the $L^2$ scaling, \begin{equation} \label{lambda} \Lambda f:=\frac12f+xf' \, . \end{equation} Let $\mathcal{L}$ be the linearized operator around $Q$, i.e. (see also Sect.~\ref{SW}) \begin{equation} \label{L} \mathcal{L}:=D^1+1-3Q^2 \, . \end{equation} We introduce the spaces $\mathcal{Y}_\ell$, for $\ell \in \mathbb N$, \begin{displaymath} \mathcal{Y}_\ell:=\big\{f \in C^{\infty}(\mathbb R) \, : \, \forall \, k \in \mathbb N, \ \forall \, x \in \mathbb R,\ |f^{(k)}(x)| \lesssim{(1+|x|)^{-(\ell+k)}} \big\} \, , \end{displaymath} and the space $\mathcal{Z}$ \begin{equation*} \mathcal{Z}:=\big\{ f \in C^{\infty}(\mathbb R) \cap L^{\infty}(\mathbb R) \, : \, f' \in \mathcal{Y}_2 \ \text{and} \ \forall \, x >0, \ |f(x)| \lesssim{(1+|x|)^{-1}} \big\} \, . \end{equation*} \subsection{Outline of the proof} The general strategy of the proof is to adapt the robust arguments developed in \cite{RaSz} (see also the previous papers \cite{Me} and \cite{Ma}) to construct minimal mass solutions in contexts where little is known about the flow of the equation around the soliton, i.e.
in the absence of a general Virial functional for the linearized flow around the ground state. In particular, we combine the control of an energy-type functional, suitably localized (see below), with a mixed Morawetz-Virial functional. The coercivity of the energy functional uses in a crucial way the complete understanding of the kernel of $\mathcal L$, proved in \cite{FrLe}. However, the strategy of \cite{RaSz} has to be adapted to one of the specificities of KdV and BO-type equations which requires the use of local estimates on the residual terms, as in \cite{MaMeRa1,MaMeRa2}, and to the nonlocal nature of the operator $D^1$, which requires specific localization arguments introduced in \cite{KeMa,KeMaRo}. The proof of Theorem~\ref{th1} thus needs the combination of all existing techniques in similar contexts, but this is still not enough. Indeed, because of the slow decay of the ground state $Q$ ($0<Q(x)\lesssim (1+|x|^2)^{-1}$; see Proposition \ref{pr:decay}), one cannot satisfactorily improve the ansatz to a sufficient order as was done in \cite{CoMa}. This means that the error term (denoted by $ \varepsilon$) cannot be too small, even in local norms around the soliton (in contrast, for (NLS)-type equations, it can be taken arbitrarily small in global norms, see e.g. \cite{LeMaRa}, and in local norms for (gKdV), see \cite{CoMa}). The lack of good estimates on the error terms creates important difficulties in controlling the cubic terms that are usually easily controlled. One of the main novelties of this paper is to push forward the algebra of \cite{RaSz} and \cite{MaMeRa2} to cancel out these cubic terms. In Sect.~\ref{sect2}, we recall known facts on the ground state $Q$, we construct the blow up profile (subsequently denoted by $Q_{b}$) and we introduce a suitable decomposition of any solution around the blow up profile.
In Sect.~\ref{sect3}, we introduce a particular sequence of \emph{backwards in time solutions} of~\eqref{mBO} related to the special minimal mass regime of Theorem~\ref{th1} and we claim suitable uniform bootstrap estimates on the residual term $ \varepsilon$ and on the geometrical parameters. In Sect.~\ref{S4}, we close the estimates on $ \varepsilon$ using mainly a localized energy functional, but also several other functionals to cancel out diverging terms. In Sect.~\ref{S5}, we close the estimates on the parameters, adjusting carefully the \emph{final data} of the sequence of solutions. In Sect.~\ref{S6}, we use the weak convergence of the flow (from \cite{CuKe}) to obtain the solution $S(t)$ of Theorem~\ref{th1} by passing to the limit in the sequence of solutions of \eqref{mBO} uniformly controlled in Sect.~\ref{sect3}-Sect.~\ref{S5}. \section{Blow up profile}\label{sect2} \subsection{Solitary waves} \label{SW} The existence of nontrivial solutions to \eqref{E} is well-known from the works of Weinstein \cite{We1,We2} and Albert, Bona and Saut \cite{AlBoSa}. We recall here the main results. \begin{proposition}[\cite{AlBoSa,We1,We2}] \label{existenceE} For $u \in H^{\frac12}(\mathbb R) \setminus \{0\}$, let \begin{displaymath} W(u)=\frac{\Big(\int |D^{\frac12}u|^2 \Big)\Big(\int |u|^2 \Big)}{\int |u|^4 } \, . \end{displaymath} There exists a solution $Q \in H^{\frac12}(\mathbb R) \cap C^{\infty}(\mathbb R)$ of \eqref{E} that solves the minimization problem \begin{equation} \label{WeinsteinF} \inf \big\{ W(u) : u \in H^{\frac12}(\mathbb R) \setminus \{0\} \big\}= W(Q) \, . \end{equation} Moreover, by translation invariance, $Q$ is chosen to be even, positive on $\mathbb R$ and to satisfy $Q'<0$ on $(0,+\infty)$. \end{proposition} Note that for the corresponding equation with quadratic nonlinearity associated to the Benjamin-Ono equation \begin{equation} \label{EBO} D^1Q+Q-Q^2=0 , \end{equation} there exists an explicit solution $Q_{\rm BO}(x)=\frac{2}{1+x^2}$.
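The explicit formula can be verified numerically. The sketch below (illustrative only; the periodic box size and grid resolution are arbitrary choices) applies the Fourier multiplier $|\xi|$ of $D^1$ to $Q_{\rm BO}$ spectrally and checks that the residual of \eqref{EBO} is small, up to the error made by periodizing a function that only decays like $|x|^{-2}$:

```python
import numpy as np

# Check that Q_BO(x) = 2/(1+x^2) solves D^1 Q + Q - Q^2 = 0,
# approximating D^1 (Fourier symbol |xi|) on a large periodic box.
L, N = 100.0, 2**13                        # half-period and grid size (arbitrary)
x = -L + 2.0 * L * np.arange(N) / N        # uniform grid on [-L, L)
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * L / N)

Q = 2.0 / (1.0 + x**2)
D1Q = np.fft.ifft(np.abs(xi) * np.fft.fft(Q)).real   # spectral D^1

residual = D1Q + Q - Q**2                  # vanishes up to periodization error
assert np.max(np.abs(residual)) < 1e-2
```

The residual decreases as the box size $L$ grows, consistent with the $O(L^{-2})$ tails of $Q_{\rm BO}$; this slow decay is precisely the feature that complicates the rigorous analysis later on.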
By using complex analysis techniques, Amick and Toland \cite{AmTo1}\footnote{Amick and Toland proved the following stronger statement in~\cite{AmTo2}: any nonconstant bounded solution of \eqref{EBO} is either $Q_{\rm BO}$ (up to translation) or a periodic wave solution.} proved that up to translation, $Q_{\rm BO}$ is the unique solution of \eqref{EBO} in $H^{\frac 12}$. These techniques do not apply to \eqref{E}. More recently, Frank and Lenzmann \cite{FrLe} successfully addressed the question of uniqueness for \eqref{E}. Their results actually hold for a large class of nonlocal problems involving the fractional Laplacian in one dimension and are related to the well-known notion of ground state. \begin{definition} \label{groundstate} A positive and even solution $Q$ of \eqref{E} is called a \textit{ground state} solution of \eqref{E} if $Q$ satisfies \eqref{WeinsteinF}. \end{definition} The following uniqueness result was obtained in \cite{FrLe}. \begin{theorem}[\cite{FrLe}] The ground state solution $Q=Q(|x|)>0$ of \eqref{E} is unique. Moreover, every minimizer of \eqref{WeinsteinF} is of the form $\beta Q(\gamma(\cdot-x_0))$, for some $\beta \in \mathbb C$, $\beta \neq 0$, $\gamma>0$ and $x_0 \in \mathbb R$. \end{theorem} Recall from \cite{AmTo1,KeMaRo} the decay properties of the ground state solution. \begin{proposition}[\cite{AmTo1,KeMaRo}]\label{pr:decay} The ground state $Q$ of \eqref{E} satisfies $Q\in \mathcal Y_2$ and $E(Q)=0$. \end{proposition} Note that it is easy to check that $E(Q)=0$. Indeed, $Q$ satisfies the energy identity \begin{displaymath} \int Q^2 +\int |D^{\frac12}Q|^2 =\int Q^4 \, \end{displaymath} and the Pohozaev identity\footnote{which follows by using that $\int_{\mathbb R}\mathcal{H}(Q')xQ'dx=0$, since $\mathcal{H}(x\phi)=x\mathcal{H}\phi$ if $\int \phi dx=0$.} \begin{displaymath} \int Q^2 =\frac12 \int Q^4 \, , \end{displaymath} which imply that $E(Q)=0$.
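The algebra behind $E(Q)=0$, together with the expansion used in \eqref{Ea} below, can be checked mechanically. Here is a small symbolic sketch (illustrative; $s_2=\int Q^2$, $s_4=\int Q^4$ and $\delta=\int|D^{1/2}Q|^2$ are treated as abstract quantities):

```python
import sympy as sp

s2, s4, d = sp.symbols('s2 s4 delta', positive=True)

# energy identity s2 + d = s4 and Pohozaev identity s2 = s4/2 force E(Q) = 0
sol = sp.solve([s2 + d - s4, s2 - s4/2], [d, s2], dict=True)[0]
E = sp.Rational(1, 2)*d - sp.Rational(1, 4)*s4   # E(Q), with d = int |D^{1/2}Q|^2
assert E.subs(sol) == 0

# expansion of E((1-a)Q) in powers of a, cf. the computation in (Ea)
a = sp.symbols('a')
lhs = sp.Rational(1, 2)*((1 - a)**2 - (1 - a)**4)
rhs = a*(1 - sp.Rational(5, 2)*a + 2*a**2 - sp.Rational(1, 2)*a**3)
assert sp.expand(lhs - rhs) == 0
```

Note that the two identities also give $\int|D^{1/2}Q|^2=\int Q^2$, which is the substitution used in the middle step of \eqref{Ea}.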
For future reference, we also note that for $|a|$ small, one has \begin{equation}\label{Ea}\begin{aligned} E((1-a)Q) &= \frac 12 (1-a)^2 \int |D^{\frac 12} Q|^2 - \frac 14 (1-a)^4 \int Q^4 = \frac 12 ((1-a)^2 - (1-a) ^4) \int Q^2 \\ &= a \left(1-\frac 52a+2a^2-\frac 12a^3\right) \int Q^2. \end{aligned} \end{equation} We will need in the sequel technical facts related to the Hilbert transform. It is well-known that $\mathcal{H}(\frac1{1+x^2})=\frac{x}{1+x^2}$. More generally, we have the following result. \begin{lemma} \label{HY} If $f \in \mathcal{Y}_2$, then $\mathcal{H}f \in \mathcal{Y}_1$. \end{lemma} \begin{proof} Let $f \in \mathcal{Y}_2$. By the definition of $\mathcal H$, we have for $k\geq 0$ and some constant $c_k$, \begin{equation} \label{HY.1} x^{k+1}\mathcal{H}(f^{(k)})=\mathcal{H}(x^{k+1}f^{(k)})+c_k \int f. \end{equation} Moreover, from the Sobolev embedding $H^1(\mathbb R) \hookrightarrow L^{\infty}(\mathbb R)$ and the boundedness of $\mathcal{H}$ in $H^1$, \[ \|\mathcal{H}f^{(k)}\|_{L^\infty} \lesssim \|\mathcal{H} f^{(k)}\|_{H^1} \lesssim \|f^{(k)}\|_{H^1} \lesssim 1\, , \] \[ \|\mathcal{H}\big(x^{k+1}f^{(k)} \big)\|_{L^\infty} \lesssim \|\mathcal{H}\big(x^{k+1}f^{(k)}\big)\|_{H^1} \lesssim \|x^{k+1}f^{(k)}\|_{H^1}\lesssim 1\, . \] Thus, by \eqref{HY.1}, $|\mathcal Hf^{(k)}(x)|\lesssim (1+|x|)^{-(1+k)}$. \end{proof} We will also need the following variant of Lemma \ref{HY}. \begin{lemma} \label{pw.HY} Let $a \in C^{\infty}(\mathbb R)$ be such that $a \in L^{\infty}(\mathbb R)$ and $a', \, a'' \in L^{\infty}(\mathbb R) \cap L^2(\mathbb R)$. Then, \begin{equation} \label{pw.HY.1} \sup_{x \in \mathbb R} \, (1+x^2)\Big|\mathcal{H}\left(\frac{a(y)}{1+y^2}\right)'(x) \Big| \lesssim \sum_{j=0}^2\|a^{(j)}\|_{L^{\infty}}+\sum_{j=1}^2\|a^{(j)}\|_{L^2} \, .
\end{equation} \end{lemma} \begin{proof} First, we see from the Sobolev embedding $H^1(\mathbb R) \hookrightarrow L^{\infty}(\mathbb R)$ and the continuity of $\mathcal{H}$ in $H^1$ that \[ \Big\|\mathcal{H}\left(\frac{a(y)}{1+y^2}\right)'\Big\|_{L^\infty} \lesssim \Big\|\left(\frac{a(y)}{1+y^2}\right)'\Big\|_{H^1} \lesssim \sum_{j=0}^2\|a^{(j)}\|_{L^{\infty}}\, . \] Second, we deduce from \eqref{HY.1} with $k=1$ that \[ x^2\mathcal{H}\left(\frac{a(y)}{1+y^2}\right)'(x) =\mathcal{H}\left(y^2\left(\frac{a(y)}{1+y^2}\right)'\right)(x)+\int \frac{a(y)}{1+y^2} \, . \] Arguing as above, we have \[ \Big\|\mathcal{H}\left(y^2\left(\frac{a(y)}{1+y^2}\right)'\right)\Big\|_{L^\infty} \lesssim \Big\|y^2\left(\frac{a(y)}{1+y^2}\right)'\Big\|_{H^1} \lesssim \sum_{j=0}^1\|a^{(j)}\|_{L^{\infty}}+\sum_{j=1}^2\|a^{(j)}\|_{L^2}\, . \] We conclude the proof of \eqref{pw.HY.1} by gathering these estimates. \end{proof} We recall the properties of the operator $\mathcal{L}$ defined in \eqref{L}. \begin{lemma}[Linearized operator, \cite{We1,We2,FrLe}] \label{Lproperties} The self-adjoint operator $\mathcal{L}$ in $L^2$ with domain $H^1$ satisfies the following properties: \begin{itemize} \item[(i)] \emph{Spectrum of $\mathcal{L}$.} The operator $\mathcal{L}$ has only one negative eigenvalue $-\kappa_0$ ($\kappa_0>0$) associated to an even, positive eigenfunction $\chi_0$; $\ker \mathcal{L}=\{aQ' : a \in \mathbb R\}$; and $\sigma_{ess}( \mathcal{L})=[1,+\infty)$; \item[(ii)] \emph{Scaling.} $\mathcal{L}\Lambda Q=-Q$ and $(Q,\Lambda Q)=0$, where $\Lambda$ is defined in \eqref{lambda}; \item[(iii)] for any function $h \in L^2(\mathbb R)$ orthogonal to $Q'$ (for the $L^2$-scalar product), there exists a unique function $f \in H^1(\mathbb R)$ orthogonal to $Q'$ such that $\mathcal{L}f=h$; \item[(iv)] \emph{Regularity.} if $f \in H^1(\mathbb R)$ is such that $\mathcal{L}f \in \mathcal{Y}_1$, then $f \in \mathcal{Y}_1$; \item[(v)] \emph{Coercivity of $\mathcal{L}$.} there exists $\kappa>0$ such
that for all $f \in H^{\frac12}(\mathbb R)$, \begin{equation} \label{coercivity} (\mathcal{L}f,f) \ge \kappa \|f\|_{H^{\frac12}}^2-\frac1\kappa\Big((f,Q)^2+(f,\Lambda Q)^2 +(f,Q')^2 \Big) \,. \end{equation} \end{itemize} \end{lemma} \begin{proof} (i) The fact that $\text{ker} \, \mathcal{L}=\{aQ' : a \in \mathbb R\}$ is a quite delicate property, proved by Frank and Lenzmann, see Theorem~2.3 in~\cite{FrLe}. The other properties were proved by Weinstein, see Proposition~4 in~\cite{We2}. See also \cite{AlBoSa}. (ii) The assertion follows directly by differentiating the equation satisfied by $Q_{\lambda}(x) = \lambda^{-\frac 12} Q(\lambda^{-1} x)$ with respect to $\lambda$ and taking $\lambda=1$. The property $(Q,\Lambda Q)=0$ follows from $( Q_{\lambda},Q_{\lambda}) = (Q,Q)$. (iii) Let $h \in L^2(\mathbb R)$. Observe that \begin{equation} \label{regularity.1} \mathcal{L}f=h \ \text{for} \ f \in H^1(\mathbb R) \quad \Leftrightarrow \quad (id-T)f=(D^1+1)^{-1}h=:\widetilde{h} \ \text{for} \ f \in L^2(\mathbb R) \, , \end{equation} where $Tf=(D^1+1)^{-1}(3Q^2f)$ is a compact operator on $L^2(\mathbb R)$. From (i), if $(h,Q')=0$, then $\widetilde{h} \in \text{ker} \, (id-T^{\star})^{\perp}$. Thus, the existence part of (iii) follows from the Fredholm alternative, while the uniqueness part follows directly from (i). (iv) Assume now that $h \in \mathcal{Y}_1 \subseteq H^{\infty}(\mathbb R)$ and let $f \in H^1(\mathbb R)$ be a solution to $\mathcal{L}f=h$. Then, it follows from \eqref{regularity.1} that $f \in H^{\infty}(\mathbb R)$. To prove the decay properties of $f$, we argue as in \cite{AmTo1}. As observed by Benjamin in \cite{Ben}, if $w=w(x,y)$ is the harmonic extension of $f$ in the upper half-plane $\mathbb R^2_+=\{(x,y) \in \mathbb R^2 : y >0\}$, then $\lim_{y \to 0}\partial_y w(x,y)=-D^1f(x)$.
As a consequence, if $v=v(x,y)$ is a solution to \begin{equation} \label{regularity.2} \left\{\begin{array}{l} \Delta v=0 \quad \text{in} \quad \mathbb R^2_+ \\ \big(v-\partial_yv-3Q^2v\big)_{|_{y=0}}=h \, , \end{array} \right. \end{equation} then $f(x)=v(x,0)$ satisfies $\mathcal{L}f=h$. Following \cite{AmTo1}, the solution $v$ to \eqref{regularity.2} is given by \begin{displaymath} v(x,y)=G(\cdot,y) \ast \big(3Q^2f+h \big)(x), \quad \forall \, (x,y) \in \overline{\mathbb R^2_+}\, , \end{displaymath} where the kernel $G(x,y)$ is given by \begin{displaymath} G(x,y)=\int_0^{+\infty}g(x,y+w)e^{-w}dw \quad \text{and} \quad g(x,y)=\frac1{\pi} \frac{y}{x^2+y^2} \, . \end{displaymath} Moreover, we easily see that $G$ is positive, harmonic on $\mathbb R^2_+$ and satisfies \begin{equation}\label{regularity.3} \int_{-\infty}^{+\infty}G(x,y)dx=1, \quad \forall \, y \ge 0 \, , \end{equation} \begin{equation} \label{regularity.4} G(x,y) \lesssim \frac{1+y}{x^2+y^2}, \quad \forall \, (x,y) \in \overline{\mathbb R^2_+} \, , \end{equation} and for any $k\geq 0$, \begin{equation} \label{regularity.5} \big|\partial_x^kG(x,0)\big| \lesssim \frac1{|x|^{2+k}}, \quad \forall \, |x| \ge 1 \, . \end{equation} In particular a solution $f \in H^{\infty}(\mathbb R)$ to $\mathcal{L}f=h$ satisfies \begin{equation} \label{regularity.6} f(x)=G(\cdot,0) \ast \big(3Q^2f+h \big)(x), \quad \forall \, x \in \mathbb R \, . \end{equation} Since $h \in \mathcal{Y}_1$, $Q \in \mathcal{Y}_2$ and $f \in H^1(\mathbb R)$, we get that \begin{displaymath} |f(x)| \le G(\cdot,0) \ast \widetilde{h}(x), \quad \text{where} \quad \widetilde{h}=3\|f\|_{L^{\infty}}Q^2+|h| \, . \end{displaymath} Let $|x| \ge 1$. 
It follows from \eqref{regularity.5} that \begin{displaymath} \begin{split} |f(x)| &\le \sup_{t \in \mathbb R} \big\{ (1+|t|)|\widetilde{h}(t)| \big\} \int_{-\infty}^{+\infty}\frac{G(x-t,0)}{1+|t|}dt \\ & \lesssim \int_{|t-x| \le \frac12|x|}\frac{G(x-t,0)}{1+|t|}dt+\int_{|t-x|\ge \frac12|x|}\frac1{(1+|t|)(x-t)^2}dt \, . \end{split} \end{displaymath} By using \eqref{regularity.3}, the first integral on the right-hand side of the above expression is bounded by $2/|x|$, while the second integral is easily bounded by $c/|x|$. Therefore, we conclude that \begin{equation} \label{regularity.7} \sup_{x \in \mathbb R}\big\{(1+|x|)|f(x)| \big\} < +\infty \, . \end{equation} Now, we prove by induction on $k$ that \begin{equation} \label{regularity.8} \sup_{x \in \mathbb R}\big\{(1+|x|)^{(k+1)}|f^{(k)}(x)| \big\} < +\infty \, , \end{equation} holds for all $k \in \mathbb N$. Let $l \in \mathbb N^{\star}$. Assume that \eqref{regularity.8} holds for all $k \in \{0,\dots,l-1\}$. From \eqref{regularity.6}, \[ f^{(l)}(x)=\int G^{(l)}(x-t,0) \big(3Q^2f+h \big)(t)dt =: I+II \] where $I$, respectively $II$, corresponds to the region $|x-t| \le \frac12|x|$, respectively $|x-t|\ge \frac12|x|$. Let $|x| \ge 1$. Arguing as above, we deduce from \eqref{regularity.5} that \begin{equation} \label{regularity.9} |II| \lesssim \int_{|x-t|\ge \frac12|x|}\frac1{(1+|t|)|x-t|^{2+l}} \lesssim \frac1{|x|^{1+l}} \, , \end{equation} where the implicit constant depends on $\|f\|_{L^{\infty}}$ and $\sup_{t \in \mathbb R}\{ (1+|t|)|h(t)|\}$.
To handle $I$, we write, for $x\geq 1$ (the case $x<-1$ is handled similarly), \[ I=\int_{|x-t| \le \frac12|x|}\partial_x^lG(x-t,0) \big(3Q^2f\big)(t)dt+\int_{|x-t| \le \frac12|x|}\partial_x^lG(x-t,0) h(t)dt =: I_1+I_2 \, .\] Several integrations by parts yield \begin{displaymath} \begin{split} I_2&=\sum_{j=0}^{l-1} (-1)^{j}\Big(\partial_x^{(l-j)}G(-\frac{x}2,0)h^{(j)}(\frac{3x}2)-\partial_x^{(l-j)}G(\frac{x}2,0)h^{(j)}(\frac{x}2) \Big)\\ &\quad+ (-1)^{l}\int_{|t-x| \le \frac12|x|}G(x-t,0)h^{(l)}(t)dt \, . \end{split} \end{displaymath} Since $G(.,0)\in \mathcal Y_2$ and $h \in \mathcal{Y}_1$, we obtain $|I_2|\lesssim 1/{|x|^{1+l}}$. By using the same strategy, we have (the first term exists only if $l\geq 2$), \begin{displaymath} \begin{split} I_1&=\sum_{j=0}^{l-2}(-1)^j\Big(\partial_x^{(l-j)}G(-\frac{x}2,0)(3Q^2f)^{(j)}(\frac{3x}2)-\partial_x^{(l-j)}G(\frac{x}2,0)(3Q^2f)^{(j)}(\frac{x}2) \Big)\\ &\quad+ (-1)^{l+1}\int_{|t-x| \le \frac12|x|}\partial_xG(x-t,0)(3Q^2f)^{(l-1)}(t)dt \, . \end{split} \end{displaymath} Observe that $\big| (3Q^2f)^{(j)}(t) \big| \lesssim (1+|t|)^{-(5+j)}$, for all $j=0,\dots,l-1$, thanks to the Leibniz rule, the induction hypothesis on $f$ and the fact that $Q \in \mathcal{Y}_2$. Hence, it follows that $|I_1| \lesssim 1/{|x|^{1+l}}$ and so $|I|\lesssim 1/{|x|^{1+l}}$. From this and \eqref{regularity.9}, we obtain estimate \eqref{regularity.8} with $k=l$. This finishes the proof of (iv). (v) This is a standard property obtained as a consequence of (i). We refer to Proposition~4 in \cite{We2}. See also the proof of Lemma~2 (ii) in \cite{MaMe}. \end{proof} \subsection{Definition and estimates for the localized profile} In this subsection, we construct an approximate profile $Q_b$. \begin{lemma} \label{nlprofile} There exists a unique function $P\in \mathcal{Z}$ such that \begin{equation} \label{nlprofile.1} (\mathcal{L}P)'=\Lambda Q, \quad (P,Q')=0, \quad \text{and} \quad \lim_{y \to -\infty}P(y)=\frac12\int Q \, .
\end{equation} Moreover, \begin{equation} \label{nlprofile.2} p_0:=(P,Q)=\frac18\Big( \int Q \Big)^2>0 \, . \end{equation} \end{lemma} \begin{proof} We look for a solution of \eqref{nlprofile.1} of the form $P=\widetilde{P}-\int_{y}^{+\infty} \Lambda Q$. Observe that $P$ solves the equation in \eqref{nlprofile.1} if \begin{displaymath} (\mathcal{L}\widetilde{P})'=\Lambda Q+\Big(\mathcal{L}\int_{y}^{+\infty}\Lambda Q \Big)'=: R' \quad \text{where} \quad R=-\mathcal{H}\Lambda Q-3Q^2\int_{y}^{+\infty} \Lambda Q \, . \end{displaymath} It follows from Lemma \ref{HY} that $\mathcal{H}\Lambda Q\in \mathcal Y_1$ and thus $R \in \mathcal{Y}_1$. Moreover, Lemma \ref{Lproperties} (i) and (ii) yield \[ (R,Q')=-\int R'Q=-\int \Lambda Q \, Q-\int \Big(\mathcal{L}\int_{y}^{+\infty}\Lambda Q \Big)'Q=0 \, . \] Thus, using Lemma \ref{Lproperties} (iii) and (iv), there exists a unique $\widetilde{P} \in \mathcal{Y}_1$ orthogonal to $Q'$ such that $\mathcal{L}\widetilde{P}=R$. Set $P=\widetilde{P}-\int_{y}^{+\infty}\Lambda Q\in \mathcal Z$. Then, $P$ satisfies \eqref{nlprofile.1} and $(P,Q')=0$. We also see that $\lim_{y\to -\infty} P = -\int \Lambda Q = \frac 12 \int Q$. Moreover, by using that $\mathcal{L}\Lambda Q=-Q$ and integrating by parts, we compute \begin{displaymath} p_0:=(P,Q)=-\int \mathcal{L}P \, \Lambda Q =\int \Big( \int_{y}^{+\infty}\Lambda Q\Big) \, \Lambda Q=\frac12 \Big( \int \Lambda Q \Big)^2=\frac18 \Big( \int Q \Big)^2 \, . \end{displaymath} \end{proof} Since $P$ does not belong to $L^2(\mathbb R)$, we define a suitable cut-off version of it. Recall that $\chi$ is defined in \eqref{def:chi}. \begin{definition} The localized profile $Q_b$ is defined for all $b$ by \begin{equation} \label{Qb} Q_b(y)=Q(y)+ bP_b(y) \, , \end{equation} where \begin{equation} \label{Pb} P_b(y)=\chi_b(y)P(y), \quad \chi_b(y)=\chi(|b|y) \, . \end{equation} Define \begin{equation}\label{Rb} R_b = \mathcal L P_b - P_b = D^1 P_b - 3 Q^2 P_b .
\end{equation} \end{definition} \begin{lemma}[Estimates on the localized profile] \label{lprofile} For $|b|$ small, the following properties hold. \begin{itemize} \item[(i)] \emph{Pointwise estimate for $Q_b$.} For all $y\in \mathbb R$, \begin{equation} \label{lprofile.1} |Q_b(y)| \lesssim \frac1{(1+|y|)^2} + |b| {\bf1}_{[-2,0]}(|b|y)+|b|{\bf1}_{[0,+\infty)}(y)\frac1{1+|y|} \,, \end{equation} \begin{equation} \label{lprofile.100} |Q_b'(y)| \lesssim \frac1{(1+|y|)^3}+|b|\frac1{(1+|y|)^2} {\bf1}_{[-2,+\infty)}(|b|y)+|b|^2 {\bf1}_{[-2,-1]}(|b|y) \, , \end{equation} \begin{equation}\label{bd.PbRb} \|P_b\|_{L^2}\lesssim |b|^{-\frac 12},\quad \|D^{\frac12} P_b\|_{L^2}\lesssim |\ln|b||^{\frac 12},\quad \|R_b\|_{L^2} \lesssim 1\, , \end{equation} \begin{equation} \label{yPbprime} \|D^{\frac12}(yP_b')\|_{L^2} \lesssim 1 \, , \end{equation} and \begin{equation} \label{dPb_db} \Big\|D^{\frac12}\left(\frac{\partial P_b}{\partial b} \right)\Big\|_{L^2} \lesssim |b|^{-1} \, . \end{equation} \item[(ii)]\emph{Estimate for the equation of $Q_b$.} Let $\Psi_b$ be defined by \begin{equation} \label{Psib} \Psi_b:=\big(\mathcal{H}Q_b'+Q_b-Q_b^3\big)'-b\Lambda Q_b+b^2\frac{\partial Q_b}{\partial b} \, . \end{equation} Then \begin{equation} \label{lprofile.2} \|\Psi_b\|_{L^2}\lesssim |b|^{\frac 32},\quad \|\Psi_b-bP\chi_b'-\frac{b^2}2P_b\|_{L^2} \lesssim |b|^2 \, , \end{equation} and \begin{equation} \label{lprofile.3} \|D^{\frac12}\Psi_b\|_{L^2} \lesssim |b|^2|\ln |b||^{\frac12} \, . \end{equation} \item[(iii)]\emph{Projection of $\Psi_b$ in the direction $Q$.} \begin{equation} \label{lprofile.4} \big|(\Psi_b,Q)\big| \lesssim |b|^3 \, . \end{equation} \item[(iv)]\emph{Mass and energy for $Q_b$.} \begin{equation} \label{lprofile.5} \Big|\int Q_b^2- \int Q^2 \Big| \lesssim |b| \, , \end{equation} \begin{equation} \label{lprofile.6} \Big|E(Q_b)+ p_0 b \Big| \lesssim |b|^{2} |\ln|b||\, . 
\end{equation} \end{itemize} \end{lemma} \begin{remark}\label{rk:profile} We see that $P$ defined above decays as $1/y$ as $y\to +\infty$. This is still acceptable for our needs in the next sections. However, if one tries to improve the ansatz $Q_{b}$ at a higher order in $b$ (as was done at any order in \cite{CoMa} for (gKdV)), one faces non-$L^2$ functions for $y>0$. This is an important difficulty. In this paper, we have chosen to restrict ourselves to the above ansatz $Q_{b}$, at the cost of relatively large error terms (see \eqref{bootstrap.4}). \end{remark} \begin{remark}\label{rk:law} In the definition of $\Psi_b$ in \eqref{Psib}, we anticipate the blow up relations $\frac {\lambda_s}{\lambda} \sim -b$ and $b_s + b^2 \sim 0$, which will eventually lead to the $\frac 1 t$ blow up rate. See Lemma \ref{modulation}. \end{remark} \begin{proof}[Proof of Lemma \ref{lprofile}] (i) The proof of \eqref{lprofile.1}, \eqref{lprofile.100} follows directly from $Q \in \mathcal{Y}_2$, $P \in \mathcal{Z}$ and the definition of $Q_b$ in \eqref{Qb}. We see using $P \in \mathcal{Z}$ that \begin{equation} \label{lprofile.208} \|P_b\|_{L^2}^2 \lesssim \int_{y<0} \chi(|b|y)^2dy+\int_{y>0} \frac{dy}{1+y^2} \lesssim |b|^{-1}\,.\end{equation} Next, we split $\|D^{\frac 12}P_b\|_{L^2}$ as follows \[ \|D^{\frac 12}P_b\|_{L^2}^2 = (D^1P_b,P_b)=(D^1P,P_b)-(D^1((1-\chi_b)P),P_b). \] Since $P\in \mathcal Z$ and $P'\in \mathcal Y_2$, we have by Lemma \ref{HY} that $D^1 P=\mathcal HP'\in \mathcal Y_1$, and thus \[ |(D^1P,P_b)|\lesssim \int_{-2|b|^{-1}<y<0} \frac{dy}{1+|y|} + \int_{y>0} \frac {dy}{(1+|y|)^2} \lesssim |\ln|b||. \] Moreover, \[ |(D^1((1-\chi_b)P),P_b)| \lesssim \|P_b\|_{L^2} \|(1-\chi_b)P\|_{\dot H^1} \lesssim |b|^{-\frac 12} \left(\|(1-\chi_b)P'\|_{L^2} + \|\chi_b'P\|_{L^2}\right)\lesssim 1 \,.\] This implies $\|D^{\frac12} P_b\|_{L^2}\lesssim |\ln|b||^{\frac 12}$. 
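For later reference, let us record the elementary scaling computation behind the bound $\|\chi_b'P\|_{L^2}\lesssim |b|^{\frac12}$ implicitly used above. Since $\chi_b'(y)=|b|\chi'(|b|y)$ is supported in $\{-2\le |b|y\le -1\}$ (by the normalization of $\chi$ in \eqref{def:chi}) and $P$ is bounded, we have \begin{displaymath} \|\chi_b'P\|_{L^2}^2 \lesssim |b|^2 \int_{-2|b|^{-1}\le y\le -|b|^{-1}} dy \lesssim |b| \, . \end{displaymath}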
Concerning $R_b$ defined in \eqref{Rb}, we have \[ \|D^1P_b\|_{L^2}^2 = \|P_b'\|_{L^2}^2 \lesssim 1 \quad \hbox{and}\quad \|Q^2 P_b\|_{L^2}\lesssim 1. \] These estimates prove \eqref{bd.PbRb}. We proceed similarly to prove \eqref{yPbprime} and decompose $\|D^{\frac12}(yP_b')\|_{L^2}$ as follows \[ \|D^{\frac12}(yP_b')\|_{L^2}^2=\int yP_b'\mathcal{H}(yP_b')'=\int yP_b'\mathcal{H}(yP')' +\int yP_b'\mathcal{H}\big(y\big(P(\chi_b-1)\big)'\big)' \, . \] Since $(yP')'\in \mathcal{Y}_2$, we have that $\mathcal{H}(yP')'\in \mathcal Y_1$ by using Lemma \ref{HY} and thus \[ \left| \int yP_b'\mathcal{H}(yP')' \right| \lesssim \int |P_b'| \frac{|y|}{1+|y|} \lesssim 1 \, . \] Moreover, since $P'\in \mathcal Y_2$, \[ \left| \int yP_b'\mathcal{H}\big(y\big(P(\chi_b-1)\big)'\big)' \right| \lesssim \|yP_b'\|_{L^2}\big\|\big(y\big(P(\chi_b-1)\big)'\big)'\big\|_{L^2} \lesssim |b|^{-\frac12}|b|^{\frac12} \lesssim 1 \, , \] which finishes the proof of \eqref{yPbprime}. To prove \eqref{dPb_db}, we observe that $\frac{\partial P_b}{\partial b}=\text{sgn}(b)y\chi'(|b|y)P$. Thus, since $P$ and $P'$ are bounded, it follows by interpolation that \begin{displaymath} \Big\|D^{\frac12}\left(\frac{\partial P_b}{\partial b} \right)\Big\|_{L^2} \lesssim \|y\chi'(|b|y)\|_{L^2}^{\frac12}\big\|\big(y\chi'(|b|y)\big)'\big\|_{L^2}^{\frac12} \lesssim |b|^{-1} \, , \end{displaymath} which is the desired estimate. (ii) Expanding the expression \eqref{Qb} of $Q_b$ in the definition of $\Psi_b$ and using \eqref{E}, we find that \begin{equation} \label{lprofile.7} \Psi_b=b\Psi_1+b^2\Psi_2+b^3\Psi_3 \, , \end{equation} where \begin{displaymath} \Psi_1:=(\mathcal{L}P_b)'-\Lambda Q \, , \end{displaymath} \begin{displaymath} \Psi_2:=-3(QP_b^2)'-\Lambda(P_b)+\frac{\partial{Q}_b}{\partial b} \, , \quad \text{and} \quad \Psi_3=-(P_b^3)' \, . \end{displaymath} First, we prove \eqref{lprofile.2}. 
By \eqref{nlprofile.1}, we rewrite $\Psi_1$ as \begin{displaymath} \begin{split} \Psi_1&=(\mathcal{L}P_b)'-(\mathcal{L}P)' \\ &=\mathcal{H}\big(P(\chi_b-1)\big)''+\big(P(\chi_b-1)\big)'-3\big(Q^2P(\chi_b-1)\big)' \\ &=\mathcal{H}\big(P''(\chi_b-1) \big)+2\mathcal{H}\big(P'\chi_b' \big)+\mathcal{H}\big(P\chi_b''\big)+P'(\chi_b-1)+P\chi_b'-3\big(Q^2P(\chi_b-1)\big)' \, . \end{split} \end{displaymath} Since $\mathcal{H}$ is bounded in $L^2$, $P \in L^{\infty}(\mathbb R)$ and $P' \in \mathcal{Y}_2$, we obtain \begin{displaymath} \|b\mathcal{H}\big(P''(\chi_b-1) \big)\|_{L^2} \lesssim |b|\Big( \int_{y \le -|b|^{-1}}\frac1{(1+|y|)^6} \, dy\Big)^{\frac12} \lesssim |b|^{\frac72} \, , \end{displaymath} \begin{displaymath} \|b\mathcal{H}(\chi_b'P')\|_{L^2}=|b| \|\chi_b'P'\|_{L^2} \lesssim |b|^2\Big( \int_{y \le -|b|^{-1}}\frac1{(1+|y|)^4} \, dy\Big)^{\frac12} \lesssim |b|^{\frac72} \, , \end{displaymath} and \begin{displaymath} \|b\mathcal{H}(\chi_b''P)\|_{L^2} \lesssim |b|^3\Big( \int \chi''(|b|y)^2 \, dy\Big)^{\frac12} \lesssim |b|^{\frac52} \, . \end{displaymath} Similarly, \begin{displaymath} \|bP'(\chi_b-1) \|_{L^2} \lesssim |b|\Big( \int_{y \le -|b|^{-1}}\frac1{(1+|y|)^4} \, dy\Big)^{\frac12} \lesssim |b|^{\frac52} \, , \end{displaymath} and \begin{displaymath} \|b\big(\chi_b'Q^2P\big)\|_{L^2} \lesssim |b|^2\Big( \int_{y \le -|b|^{-1}}\frac1{(1+|y|)^8} \, dy\Big)^{\frac12} \lesssim |b|^{\frac{11}2}\, . \end{displaymath} Therefore, gathering those estimates, we conclude that \begin{equation} \label{lprofile.205} \|b\big(\Psi_1-\mathcal{H}(\chi_b''P)-P'(\chi_b-1)-P\chi_b'\big)\|_{L^2} \lesssim |b|^{\frac72} \quad \hbox{and}\quad \|b\big(\Psi_1-P\chi_b'\big)\|_{L^2} \lesssim |b|^{\frac52} \, . \end{equation} Also, note that \begin{equation} \label{lprofile.206} \|bP\chi_b'\|_{L^2} \lesssim |b|^2\Big( \int |\chi'(|b|y)|^2 \, dy\Big)^{\frac12} \lesssim |b|^{\frac32} \, . \end{equation} Now we focus on $b^2\Psi_2$ and $b^3\Psi_3$. 
One sees easily that \begin{displaymath} \|b^2(Q\chi_b^2P^2)'\|_{L^2}+\|b^3(\chi_b^3P^3)'\|_{L^2} \lesssim |b|^2 \, . \end{displaymath} To deal with the remaining terms in $\Psi_2$, we observe that \begin{displaymath} \Lambda(P_b)=\frac12 P_b+yP'\chi_b +yP\chi_b' \end{displaymath} and \begin{equation}\label{def.DQb} \frac{\partial Q_b}{\partial b}=P_b+|b|y\chi'(|b|y)P=P_b+yP\chi_b' \, , \end{equation} so that \begin{equation} \label{lprofile.207} -\Lambda(P_b)+\frac{\partial Q_b}{\partial b}=\frac12P_b-yP'\chi_b \, . \end{equation} By using $P \in \mathcal{Z}$, \begin{displaymath} \|y\chi_bP'\|_{L^2} \lesssim \Big(\int \chi(|b|y)^2\frac{y^2}{1+y^4}dy\Big)^{\frac12} \lesssim 1 \, . \end{displaymath} Hence \begin{equation} \label{lprofile.209} \|b^2(\Psi_2-\frac12P_b)\|_{L^2} + \|b^3\Psi_3\|_{L^2} \lesssim |b|^2 \, . \end{equation} Therefore, we conclude estimate \eqref{lprofile.2} by gathering \eqref{lprofile.7}, \eqref{lprofile.205}, \eqref{lprofile.207} and \eqref{lprofile.209}. To prove estimate \eqref{lprofile.3}, we estimate each term on the right-hand side of \eqref{lprofile.7} in $\dot{H}^{\frac12}$. For the sake of simplicity, we only explain how to deal with the terms corresponding to \eqref{lprofile.206} and \eqref{lprofile.207}, which are the most problematic ones. First, observe that \begin{displaymath} \big\| b(\chi_b'P)' \big\|_{L^2} \lesssim |b|\|\chi_b''P\|_{L^2}+|b|\|\chi_b'P'\|_{L^2} \lesssim |b|^{\frac52} \, , \end{displaymath} which gives after interpolation with \eqref{lprofile.206} \begin{displaymath} |b|\big\| D^{\frac12}(\chi_b'P) \big\|_{L^2} \lesssim |b|^2 \, . \end{displaymath} Moreover, we have from \eqref{bd.PbRb} that \begin{displaymath} \label{lprofile.302} |b|^2\big\| D^{\frac12}(P_b) \big\|_{L^2} \lesssim |b|^2|\ln |b||^{\frac12}\, . \end{displaymath} This last estimate gives the bound in \eqref{lprofile.3}. (iii) We take the scalar product of \eqref{lprofile.7} with $Q$. First, for $\Psi_1$, we use \eqref{lprofile.205}. 
Since $\mathcal H Q \in \mathcal Y_1$ by Lemma \ref{HY}, \[ \big|\big(\mathcal{H}(\chi_b''P),Q\big)\big|=\big|\big( \chi_b''P,\mathcal HQ\big)\big| \lesssim b^2 \int_{-2|b|^{-1}<y<-|b|^{-1}} \frac {dy}{1+|y|}\lesssim b^2. \] Moreover, since $P',Q\in \mathcal Y_2$, \[\big|\big(P'(\chi_b-1),Q\big)\big| +\big|\big( P\chi_b', Q\big) \big| \lesssim |b|^2\, .\] Thus, it follows from \eqref{lprofile.205} that \begin{equation} \label{lprofile.400} \big|\big( b\Psi_1, Q\big) \big| \lesssim |b|^{3} \, . \end{equation} Next, we see using the computations in \eqref{lprofile.207} that \begin{equation} \label{lprofile.401} \begin{split} (\Psi_2,Q)&=\big(-3(QP^2)'-\Lambda P+P,Q \big) -3\big((QP^2(\chi_b-1))',Q \big)\\ & \quad +\frac12\big(P(\chi_b-1),Q \big) -\big(yP'(\chi_b-1),Q \big) \, . \end{split} \end{equation} By taking the scalar product of the equation in \eqref{nlprofile.1} with $P$, we get that \begin{displaymath} (P',P)-3\big((Q^2P)',P \big)=-(\Lambda P,Q) \, . \end{displaymath} On the one hand, it follows from \eqref{nlprofile.2} that \begin{displaymath} (P,P')=-\frac12\lim_{y \to -\infty}P(y)^2=-(P,Q) \, . \end{displaymath} On the other hand, integration by parts yields \begin{displaymath} -3\big((Q^2P)',P \big)=3((P^2Q)',Q) \, . \end{displaymath} Hence, combining those identities, we deduce that \begin{equation} \label{lprofile.402} \big(-3(P^2Q)'+P-\Lambda P,Q \big)=0 \, , \end{equation} so that the first term on the right-hand side of \eqref{lprofile.401} cancels out. We estimate the other ones as follows: \begin{displaymath} \big|\big((QP^2(\chi_b-1))',Q \big) \big| \lesssim \int_{y \le -|b|^{-1}} \frac1{(1+|y|)^5}dy \lesssim |b|^4 \, , \end{displaymath} \begin{displaymath} \big|\big(P(\chi_b-1),Q \big) \big| \lesssim \int_{y \le -|b|^{-1}} \frac1{(1+|y|)^2}dy \lesssim |b| \, , \end{displaymath} and \begin{displaymath} \big|\big(yP'(\chi_b-1),Q \big) \big| \lesssim \int_{y \le -|b|^{-1}} \frac1{(1+|y|)^3}dy \lesssim |b|^2 \, . 
\end{displaymath} This implies that $\big|\big( b^2\Psi_2,Q \big)\big| \lesssim |b|^3$. Finally, we see easily that $\big|\big( b^3\Psi_3,Q \big)\big| \lesssim |b|^3$. This finishes the proof of estimate \eqref{lprofile.4}. (iv) By using the definition of $Q_b$ in \eqref{Qb}, we compute \begin{displaymath} \int Q_b^2=\int Q^2+2b(Q,\chi_bP)+b^2\int \chi_b^2P^2 \, . \end{displaymath} Moreover, observe that $\big|b(Q,\chi_bP) \big| \lesssim |b|$ and $\Big|b^2\int \chi_b^2 P^2 \Big| \lesssim |b|$. This finishes the proof of estimate \eqref{lprofile.5}. Expanding $Q_b=Q+bP_b$ in the definition of $E$ in \eqref{mass}, we see that \begin{displaymath} E(Q_b)=E(Q)+b\int P_b \big(D^1Q-Q^3\big)+\frac{b^2}2\int|D^{\frac12}(P_b)|^2+\mathcal{O}(b^2) \, . \end{displaymath} From $E(Q)=0$, \eqref{E} and $p_0=(P,Q)$, it follows that \begin{equation} \label{lprofile.601} \Big|E(Q_b)+p_0 b \Big| \lesssim |b|\Big| \int PQ(\chi_b-1) \Big|+|b|^2\int|D^{\frac12} P_b|^2+\mathcal{O}(b^2) \,. \end{equation} By $| \int PQ(\chi_b-1)|\lesssim \int_{y<-|b|^{-1}} Q(y)dy \lesssim |b|$ and \eqref{bd.PbRb}, we finish the proof of \eqref{lprofile.6}. \end{proof} \subsection{Decomposition of the solution using refined profiles} \begin{lemma}[First modulation around $Q$]\label{modulation.le1} There exists $\delta_0>0$ such that if $v\in H^{\frac12}$ satisfies \begin{equation} \label{tube.1} \inf_{\lambda_1>0,\, x_1 \in \mathbb R} \|\lambda_1^{\frac12}v(\lambda_1\cdot+x_1)-Q\|_{H^{\frac12}} =\delta <\delta_0 \, , \end{equation} then there exist unique $(\lambda_v,x_v)\in (0,+\infty) \times \mathbb R$ such that the function $\eta_v$ defined by \begin{equation} \label{modulation.1.1} \eta_v(y)=\lambda_v^{\frac12}v(\lambda_v y+x_v)-Q(y) \end{equation} satisfies \begin{equation} \label{modulation.1.2} (\eta_v, Q')=(\eta_v,\Lambda Q)=0 ,\quad \|\eta_v\|_{H^{\frac12}} \lesssim\delta \,. 
\end{equation} \end{lemma} \begin{proof} The existence (and uniqueness) of $(\lambda_v,x_v)$ such that $\eta_v$ defined in \eqref{modulation.1.1} satisfies \eqref{modulation.1.2} follows from standard arguments, based on the implicit function theorem. We refer for example to Proposition 1 in \cite{MaMegafa}. The key point of the proof is the non-degeneracy of the Jacobian matrix: \[ \left| \begin{array}{ccc} ({\Lambda} Q,{\Lambda} Q) & ({\Lambda} Q,Q')\\ ( Q',{\Lambda} Q) & ( Q',Q')\\ \end{array}\right|=({\Lambda} Q,{\Lambda} Q) ( Q',Q')\not= 0. \] \end{proof} \begin{lemma}[Refined modulated flow] \label{modulation} Let $u$ be a solution to \eqref{mBO} on a time interval $\mathcal{I}$ such that \begin{equation} \label{tube.2} \sup_{t \in \mathcal{I}} \inf_{\lambda_1>0,\,x_1 \in \mathbb R} \|\lambda_1^{\frac12}u(t,\lambda_1\cdot+x_1)-Q\|_{H^{\frac12}}=\delta <\delta_0 \, . \end{equation} For $t\in \mathcal I$, let $\lambda(t):=\lambda_{u(t)}$ and $x(t):=x_{u(t)}$ be given by Lemma \ref{modulation.le1} and set \begin{equation} \label{modulation.1} \varepsilon(t,y)=\lambda^{\frac12}(t)u(t,\lambda(t)y+x(t))-Q_{b(t)}(y), \quad \mbox{where}\quad b(t) = - \frac{E(u_0)}{p_0} \lambda(t) \end{equation} and $Q_b$ is defined in \eqref{Qb}. Then, there exists $\delta_0>0$ such that the following holds. \begin{itemize} \item[(i)] \emph{Almost orthogonalities and smallness.} On $\mathcal{I}$, it holds \begin{equation} \label{modulation.3} \|\varepsilon\|_{H^{\frac12}}^2+|b| \lesssim \delta \,. \end{equation} \begin{equation} \label{modulation.2} |(\varepsilon, Q')|+|(\varepsilon,\Lambda Q)|\lesssim |b|. 
\end{equation} \item[(ii)] \emph{Conservation laws.} On $\mathcal{I}$, it holds \begin{equation} \label{modulation.5} \|\varepsilon\|_{H^{\frac 12}}^2 \lesssim |b|+ \int u_0^2 - \int Q^2 \, , \end{equation} \begin{equation} \label{modulation.6} \Big|- (\varepsilon, Q) + b( \varepsilon,R_b) + \frac 12 \int \Big[|D^{\frac12}\varepsilon|^2 -\frac 12 \Big( (Q_b+\varepsilon)^4-Q_b^4-4Q_b^3 \varepsilon\Big) \Big]\Big| \lesssim |b|^{2} |\ln|b||, \end{equation} where $R_b$ is defined in \eqref{Rb}, and \begin{equation}\label{modulation.6bis} |(\varepsilon,Q)|\lesssim |b|^2|\ln |b|| + |b| \|\varepsilon\|_{L^2} + \|\varepsilon\|_{\dot H^{\frac 12}}^2+\int \varepsilon^2 Q^2. \end{equation} \item[(iii)] \emph{Equation of $\varepsilon$.} The function $(\lambda,x,b) : \mathcal{I} \to (0,+\infty) \times \mathbb R^2$ is of class $\mathcal{C}^1$. For $t_0 \in \mathcal{I}$ and $s_0 \in \mathbb R$, define the rescaled time variable $s$ by \[s(t)=s_0+\int_{t_0}^t \frac{dt'}{\lambda^2(t')} \quad\hbox{and}\quad \mathcal{J}=s(\mathcal{I}).\] Then, on $\mathcal J$, the function $\varepsilon(s,y)$ is a solution of \begin{equation} \label{modulation.4} \begin{aligned} &\varepsilon_s-\Big( D^1 \varepsilon+\varepsilon-\big((Q_b+\varepsilon)^3-Q_b^3\big) \Big)_y \\ & \quad =\frac{\lambda_s}{\lambda}\Lambda \varepsilon +(\frac{\lambda_s}{\lambda}+b)\Lambda Q_b+(\frac{x_s}{\lambda}-1)(Q_b+\varepsilon)_y-(b_s+b^2)\frac{\partial Q_b}{\partial b}+\Psi_b \, , \end{aligned} \end{equation} where $\Psi_b$ is defined in \eqref{Psib}. \item[(iv)] \emph{Modulation equations.} On $\mathcal J$, it holds \begin{equation} \label{modulation.7} \Big| \frac{x_s}{\lambda}-1\Big|+ \Big| \frac{\lambda_s}{\lambda}+b\Big| \lesssim |b|^2+\Big(\int \frac{\varepsilon^2}{1+y^2} \Big)^{\frac12} . \end{equation} \end{itemize} \end{lemma} \begin{remark} The choice $b = - \frac{E(u_0)}{p_0} \lambda$ in \eqref{modulation.1} is not really standard. Usually, for simplicity, $b(t)$ is tuned so that $(\varepsilon,Q)=0$ (see e.g. 
\cite{MaMeRa1}). The choice \eqref{modulation.1} leads to the relation \eqref{modulation.6}, which is a sufficient substitute for the exact orthogonality $(\varepsilon,Q)=0$. In Sect.~\ref{S4}, such a sharp choice related to energy will be technically important in our proof. \end{remark} \begin{proof} (i) For all $t\in \mathcal I$, let $\lambda(t):=\lambda_{u(t)}$ and $x(t):=x_{u(t)}$ be given by Lemma \ref{modulation.le1} such that $ \eta(t,y)=\lambda(t)^{\frac12}u(t,\lambda(t) y+x(t))-Q(y)$ satisfies $ (\eta(t), Q')=(\eta(t),\Lambda Q)=0$ and $\|\eta(t)\|_{H^{\frac 12}}\lesssim \delta$. Then, since $E(Q)=0$, $\lambda E(u)=\lambda E(u_0) = E(Q+\eta)=\mathcal O(\delta)$. It follows that $b(t)$ defined by $b = - ({E(u_0)}/{p_0}) \lambda$ satisfies $|b(t)|\lesssim \delta$. Moreover, $\varepsilon(t)$ defined by \eqref{modulation.1} satisfies $\varepsilon = -b P_b + \eta$, where $P_b$ is defined in \eqref{Pb}. Since $\|P_b\|_{L^2}\lesssim |b|^{-\frac 12}$ and $\|P_b\|_{\dot H^{\frac 12}}\lesssim |\ln|b||^{\frac 12}$ by \eqref{bd.PbRb}, we obtain $\|\varepsilon\|_{ H^{\frac 12}}\lesssim |b|^{\frac 12}+\delta\lesssim \delta^{\frac 12}$. Since $|(P_b,\Lambda Q)|+|(P_b,Q')|\lesssim 1$, estimates \eqref{modulation.2} are consequences of $ (\eta(t), Q')=(\eta(t),\Lambda Q)=0$. (ii) By using the conservation of the energy $E$ defined in \eqref{mass}, we get from \eqref{lprofile.6} and the equation of $Q$ that \begin{align*} \lambda E(u_0)&=E(Q_b+\varepsilon)=E(Q_b)+\int (D^1\varepsilon) Q_b+\frac 12 \int |D^{\frac12}\varepsilon|^2-\frac14 \int \Big((Q_b+\varepsilon)^4-Q_b^4 \Big)\\ &=-p_0 b +\int \varepsilon (D^1 Q_b -Q_b^3) + \frac 12\int |D^{\frac12}\varepsilon|^2-\frac14\int \Big((Q_b+\varepsilon)^4-Q_b^4-4Q_b^3\varepsilon\Big)\\ &\quad +\mathcal O(|b|^2|\ln|b||) \, . \end{align*} We compute ($R_b$ being defined in \eqref{Rb}) \[ D^1 Q_b - Q_b^3 = D^1 Q - Q^3 + b (D^1 P_b -3 Q^2 P_b) -3 b^2 Q P_b^2 - b^3 P_b^3 = -Q + b R_b + \mathcal O_{L^2}(|b|^2). 
\] Thus, by the choice $\lambda E(u_0) = -p_0 b$ and $\|\varepsilon\|_{L^2}\lesssim 1$, one obtains \begin{equation}\label{energie.1} \int \Big[|D^{\frac12}\varepsilon|^2-\frac12 \Big((Q_b+\varepsilon)^4-Q_b^4-4Q_b^3\varepsilon\Big)\Big] -2(\varepsilon,Q)+2b(\varepsilon,R_b)=\mathcal O(|b|^2|\ln|b||), \end{equation} which is \eqref{modulation.6}. In particular, by \eqref{bd.PbRb} and \eqref{gnp}, $\int \varepsilon^4 \lesssim \|\varepsilon\|_{H^{\frac 12}}^4$, $\int |Q_b| |\varepsilon|^3 \lesssim \|\varepsilon\|_{H^{\frac 12}}^3$, $\int |Q_b^2-Q^2| \varepsilon^2 \lesssim |b| \int \varepsilon^2$, and thus, \begin{equation}\label{energie.2} |(\varepsilon,Q)|\lesssim |b|^2 |\ln|b|| + |b| \|\varepsilon\|_{L^2} + \|\varepsilon\|_{\dot H^{\frac 12}}^2 +\int \varepsilon^2 Q^2 \,. \end{equation} By using the $L^2$ conservation, \eqref{lprofile.5} and \eqref{bd.PbRb}, we get (using the notation in \eqref{ulambda}), \begin{align*} \|u_0\|_{L^2}^2 &=\|u_{\lambda^{-1}}(\cdot+x(t),t)\|_{L^2}^2=\|Q_b\|_{L^2}^2+2(\varepsilon,Q_b)+\|\varepsilon\|_{L^2}^2\\ &=\|Q\|_{L^2}^2+2(\varepsilon,Q)+\|\varepsilon\|_{L^2}^2+\mathcal O(|b|+|b|^{\frac 12}\|\varepsilon\|_{L^2}). \end{align*} Summing this identity with \eqref{energie.1}, we obtain \[ \int \Big[|D^{\frac12}\varepsilon|^2+|\varepsilon|^2-\frac12 \Big((Q_b+\varepsilon)^4-Q_b^4-4Q_b^3\varepsilon\Big)\Big] =\mathcal O(|b|+|b|^{\frac 12}\|\varepsilon\|_{L^2})+\|u_0\|_{L^2}^2-\|Q\|_{L^2}^2\,. \] Since by \eqref{gnp}, $\int \varepsilon^4 \lesssim \|\varepsilon\|_{H^{\frac 12}}^4$, $\int |Q_b| |\varepsilon|^3 \lesssim \|\varepsilon\|_{H^{\frac 12}}^3$ and $\int |Q_b^2-Q^2| \varepsilon^2 \lesssim |b| \int \varepsilon^2$, we obtain \[ (\mathcal L \varepsilon,\varepsilon) = \mathcal O(|b|+|b|^{\frac 12}\|\varepsilon\|_{L^2})+\|u_0\|_{L^2}^2-\|Q\|_{L^2}^2 + \mathcal O(\|\varepsilon\|_{H^{\frac 12}}^3). 
\] By the coercivity of $\mathcal L$ (see \eqref{coercivity}) and \eqref{modulation.2}, \eqref{energie.2}, for $\delta$ small enough, we obtain \[ \|\varepsilon\|_{H^{\frac 12}}^2 \lesssim |b|+\|u_0\|_{L^2}^2-\|Q\|_{L^2}^2. \] (iii) The $\mathcal C^1$ regularity of $\lambda(t)$ and $x(t)$ is a standard fact; see e.g.~Proposition~1 in \cite{MaMe}. Equation \eqref{modulation.4} follows directly by writing that $u$ under the decomposition \eqref{modulation.1} solves \eqref{mBO} and by using the definition of $\Psi_b$ in \eqref{Psib}. An intermediate step in the computations is the derivation of the following equation for $\eta(s,y)$, which is handier for deriving the estimates on $x_s$ and $\lambda_s$ below: \begin{equation} \label{modulation.0} \eta_s-\Big( D^1 \eta+\eta-\big((Q+\eta)^3-Q^3\big) \Big)_y = \frac{\lambda_s}{\lambda} \Lambda (Q+\eta)+(\frac{x_s}{\lambda}-1)(Q+\eta)_y. \end{equation} (iv) Differentiating the orthogonality relation $(\eta,Q')=0$ with respect to $s$, using \eqref{modulation.0}, $(\Lambda Q,Q')=0$ and the decay properties of $Q$, we see that \begin{equation}\label{ortho.1} \left| \left(\frac {x_s}\lambda- 1\right) - \frac 1{\|Q'\|_{L^2}^2} \int \eta \mathcal L(Q'') \right| \lesssim \left(\int \frac {\eta^2}{1+y^2}\right)^{\frac 12} \left(\left| \frac{\lambda_s}{\lambda}\right| + \left| \frac {x_s}\lambda- 1\right|\right)+ \int \frac {\eta^2+|\eta|^3}{1+y^2} . \end{equation} Similarly, using $(\eta,\Lambda Q)=0$, we have \begin{equation}\label{ortho.2} \left| \frac {\lambda_s}\lambda - \frac 1{\|\Lambda Q\|_{L^2}^2} \int \eta \mathcal L\big((\Lambda Q)'\big) \right| \lesssim \left(\int \frac {\eta^2}{1+y^2}\right)^{\frac 12} \left(\left| \frac{\lambda_s}{\lambda}\right| + \left| \frac {x_s}\lambda- 1\right|\right)+\int \frac {\eta^2+|\eta|^3}{1+y^2}. 
\end{equation} Combining \eqref{ortho.1} and \eqref{ortho.2}, we obtain \begin{equation}\label{ortho.3} \left| \left(\frac {x_s}\lambda- 1\right) - \frac 1{\|Q'\|_{L^2}^2} \int \eta \mathcal L(Q'') \right|+ \left| \frac {\lambda_s}\lambda - \frac 1{\|\Lambda Q\|_{L^2}^2} \int \eta \mathcal L\big((\Lambda Q)'\big) \right| \lesssim \int \frac {\eta^2+|\eta|^3}{1+y^2} . \end{equation} Now, we insert $\eta=\varepsilon+bP_b$ in \eqref{ortho.3}. Note that \[ \int P \mathcal L(Q'') = - \int (\mathcal L P)' Q' =-\int Q' \Lambda Q=0,\quad \int P \mathcal L\big((\Lambda Q)'\big) = - \int (\mathcal L P)' \Lambda Q = - \|\Lambda Q\|_{L^2}^2\,; \] moreover, from the definition of $P_b$ and the fact that $P$ is bounded, \[ \int \frac {|P-P_b|}{1+y^2} \lesssim \int \frac {|1-\chi_b|}{1+y^2} \lesssim |b|\,, \quad \int \frac {|bP_b|^2+|bP_b|^3}{1+y^2} \lesssim |b|^{2}, \] and from \eqref{gn}, \[ \int \frac { | \varepsilon|^3} {1+y^2} \lesssim \left( \int\frac { | \varepsilon|^2}{1+y^2} \right)^{\frac 12} \left( \int | \varepsilon|^4 \right)^{\frac 12}\lesssim \left( \int\frac { | \varepsilon|^2}{1+y^2} \right)^{\frac 12}. \] Thus, we obtain \begin{equation}\label{ortho.4} \left| \frac {x_s}\lambda- 1 \right|+ \left| \frac {\lambda_s}\lambda+ b \right| \lesssim |b|^{2}+ \left( \int \frac {\varepsilon^2}{1+y^2} \right)^{\frac 12} , \end{equation} which yields \eqref{modulation.7}. \end{proof} \subsection{Cauchy problem and weak continuity of the flow}\label{S.2.4} In this subsection, we recall known facts on the Cauchy problem in $H^{\frac 12}$ and the weak continuity of the flow for \eqref{mBO}. Then, we show that the decomposition of Lemma~\ref{modulation} is stable by weak $H^{\frac 12}$ limit, a technical fact that will be used in the proof of Theorem~\ref{th1}. \begin{theorem}[\cite{KeTa,CuKe}]\label{th:CP} For any $u_0\in H^{\frac 12}(\mathbb R)$, the following holds true. 
{\rm (i)} There exist $T=T(\|u_0\|_{H^{\frac 12}})>0$ and a unique solution $u$ of the equation \eqref{mBO} satisfying $u(0,\cdot)=u_0$ and $u\in C([-T,T]\, :\, H^{\frac 12}(\mathbb R)) \cap X_T$, for a certain resolution space $X_T$ (see \cite{KeTa}). Moreover, for any $R>0$, the data-to-solution map is Lipschitz continuous from $ \big\{u_0 \in H^{\frac12}(\mathbb R) : \| u_0\|_{H^{\frac12}} < R \big\} $ into $C([-T(R),T(R)]:H^{\frac12}(\mathbb R))$. {\rm (ii)} Let $\{u_{0,n}\}$ be a sequence of $H^{\frac12}$ initial data such that \[ u_{0,n}\rightharpoonup u_0\quad \hbox{in } H^{\frac 12}\ \hbox{as }\ n\to +\infty.\] Assume that, for some $C, T_1, T_2>0$, for all $n>0$, the corresponding solution $u_n(t)$ of~\eqref{mBO} exists on $[-T_1,T_2]$ and satisfies $\max_{t\in [-T_1,T_2]}\|u_n(t)\|_{H^{\frac 12}}\leq C$. Let $u(t)$ be the solution of~\eqref{mBO} corresponding to $u_0$. Then, $u(t)$ exists on $[-T_1,T_2]$ and \begin{equation}\label{we:1} \forall t\in [-T_1,T_2], \quad u_n(t) \rightharpoonup u(t)\quad \hbox{in } H^{\frac 12}\ \hbox{as }\ n\to +\infty. \end{equation} \end{theorem} The second part of the theorem, \emph{i.e.}~the weak convergence of $u_n(t)$ to $u(t)$, is stated in the last remark of \cite{CuKe}. For similar statements, we refer to Lemma~30 in~\cite{MaMejmpa} in the case of the critical (gKdV) equation, to Theorem~5 in~\cite{KeMa} in the case of the Benjamin-Ono equation in $H^{\frac 12}$, to \cite{CuKe} for Benjamin-Ono in $L^2$ and to Lemma~3.4 in~\cite{Co} for the mass supercritical (gKdV) equation. As usual, given any $u_0\in H^{\frac 12}(\mathbb R)$, we consider the solution $u\in C([0,T^\star)\, :\, H^{\frac 12}(\mathbb R))$ emanating from $u_0$ at $t=0$ and defined on its maximal interval of existence $[0,T^\star)$. If $T^\star<+\infty$, we see from Theorem~\ref{th:CP} that $\lim_{t \uparrow T^\star} \| D^{\frac12}u(t)\|_{L^2}=+\infty$. A similar statement holds for negative times. 
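Let us also recall the scaling invariance behind the rate computation in the next remark: if $u$ solves \eqref{mBO}, then for every $\lambda_0>0$ so does \begin{displaymath} u_{[\lambda_0]}(t,x)=\lambda_0^{-\frac12}\,u\big(\lambda_0^{-2}t,\lambda_0^{-1}x\big) \, , \quad \text{with} \quad \|u_{[\lambda_0]}(t)\|_{L^2}=\|u(\lambda_0^{-2}t)\|_{L^2} \, , \end{displaymath} which expresses the $L^2$-critical nature of \eqref{mBO}.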
\begin{remark} Let $u(t)$ be a solution of (mBO) such that $T^\star<+\infty$. Then, \[ \|D^{\frac 12} u(t)\|_{L^2}\gtrsim (T^\star-t)^{-\frac 14}. \] Indeed, let $0<t_{0}<T^\star$, and let $v(s,y)$ be the following solution of (mBO) \[ v(s,y)= \lambda_{0}^{-\frac 12} u(\lambda_0^{-2} s + t_0,\lambda_{0}^{-1} y),\quad \lambda_0=\|D^{\frac 12} u(t_0)\|_{L^2}^2. \] Then, $\|D^{\frac 12}v(0)\|_{L^2} = 1$ and $\|v(0)\|_{L^2} = \|u(t_0)\|_{L^2}= \|u(0)\|_{L^2}$. By Theorem~\ref{th:CP}~(i), it follows that $v(s)$ exists as a solution of \eqref{mBO} in $H^{\frac 12}$ on a time interval $[0,S]$, $S>0$ independent of $t_0$. Thus, $T^\star> \lambda_0^{-2} S + t_0$, which is equivalent to \[ (T^\star-t_0) \|D^{\frac 12} u(t_0)\|_{L^2}^4 \gtrsim 1. \] \end{remark} Now, we claim the following consequence of Theorem~\ref{th:CP}~(ii) on the decomposition of Lemma \ref{modulation}. \begin{lemma}[Weak $H^{\frac12}$ stability of the decomposition] \label{le:weak} Let $\{u_{0,n}\}$ be a sequence of $H^{\frac12}$ initial data such that \[ u_{0,n}\rightharpoonup u_0\quad \hbox{in } H^{\frac 12}\ \hbox{as }\ n\to +\infty. \] Let $u(t)$ be the solution of~\eqref{mBO} corresponding to $u_0$. Assume that for all $n>0$, $u_n(t)$ exists and satisfies~\eqref{tube.2} on $[-T_1,T_2]$ for some $T_1,T_2>0$ and that the parameters of the decomposition $(\lambda_n,x_n,b_n,\varepsilon_n)$ of~$u_n$ given by Lemma~\ref{modulation} satisfy, for some $c,C>0$, for all $n$ large, \begin{equation} \label{hypoweak} \forall t\in [-T_1,T_2],\quad 0<c\leq \lambda_n(t)<C ,\quad \lambda_n(0)=1,\quad x_n(0)=0. \end{equation} Then, $u(t)$ exists and satisfies~\eqref{tube.2} on $[-T_1,T_2]$ and its decomposition $(\lambda,x,b,\varepsilon)$ satisfies, as $n\to +\infty$, \[ \forall t\in [-T_1,T_2],\quad \varepsilon_n(t)\rightharpoonup \varepsilon(t)\quad \hbox{in } H^{\frac 12},\quad \lambda_n(t)\to \lambda(t),\quad x_n(t)\to x(t), \quad b_n(t)\to b(t). 
\] \end{lemma} \begin{proof}[Sketch of proof] We use the strategy of the proof of Lemma~17 in~\cite{MaMejmpa}. We also refer to~\cite{MeRainvent}, page 599, for a more detailed argument. The first step of the proof is to note that estimates~\eqref{modulation.7} provide uniform bounds on the time derivatives of the geometric parameters $(\lambda_n(t),x_n(t),b_n(t))$ on $[-T_1,T_2]$. Therefore, by Ascoli's theorem, up to the extraction of a subsequence, \[ (\lambda_n(t),x_n(t),b_n(t))\to (\widetilde\lambda(t),\widetilde x(t),\widetilde b(t)) \quad \hbox{on } [-T_1,T_2], \] for some functions $(\widetilde\lambda(t),\widetilde x(t),\widetilde b(t))$. Writing the orthogonality conditions~\eqref{modulation.2} in terms of $u_n(t)$ and $(\lambda_n(t),x_n(t),b_n(t))$, using~\eqref{we:1} and passing to the limit as $n\to +\infty$, we see that $u(t)$ and the limiting parameters $(\widetilde\lambda(t),\widetilde x(t),\widetilde b(t))$ satisfy the same orthogonality relations. In particular, they correspond to the unique parameters $(\lambda(t),x(t),b(t))$ given by Lemma~\ref{modulation}. This uniqueness statement proves by a standard argument that, for the whole sequence, $(\lambda_n(t),x_n(t),b_n(t))$ converges to $(\lambda(t),x(t),b(t))$ on $[-T_1,T_2]$ as $n\to +\infty$. It follows from \eqref{modulation.1} that $\varepsilon_n(t)\rightharpoonup \varepsilon(t)$ in $H^{\frac 12}$ as $n\to +\infty$. \end{proof} \subsection{Estimates and localization arguments for fractional Laplacians}\label{s.2.5} First, we recall various useful inequalities and commutator estimates related to $D^\alpha$ ($0<\alpha \le 1$), and the Hilbert transform $\mathcal H$. 
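Throughout, consistently with the identity $D^1=\mathcal{H}\partial_x$ used in the previous sections, $D^{\alpha}$ and $\mathcal{H}$ denote the Fourier multiplier operators \begin{displaymath} \widehat{D^{\alpha}f}(\xi)=|\xi|^{\alpha}\widehat{f}(\xi) \quad \text{and} \quad \widehat{\mathcal{H}f}(\xi)=-i\,\text{sgn}(\xi)\widehat{f}(\xi) \, , \end{displaymath} so that $\mathcal{H}$ is skew-adjoint and an isometry on $L^2$, and $\|D^{\frac12}f\|_{L^2}^2=(D^1f,f)$ by Plancherel's theorem.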
\begin{lemma}[\cite{Ca,KPV,DaMcPo}]\label{COMMUTATOR} For any $f,g,a\in \mathcal S(\mathbb R)$, \begin{equation}\label{gnp} \forall \, 2\leq p< +\infty,\quad \|f\|_{L^p} \lesssim \|f\|_{L^2}^{\frac 2p} \|D^{\frac 12} f\|_{L^2}^{\frac {p-2}p}, \end{equation} \begin{equation}\label{COMM1} \forall \, 0<\alpha \le 1, \quad \|[D^{\alpha},g]f \|_{L^2} \lesssim \|f\|_{L^4} \|D^{\alpha} g\|_{L^4}, \end{equation} where $[D^{\alpha},g]f=D^{\alpha}(fg)-gD^{\alpha}f$, and \begin{equation} \label{Calderon} \forall \, l, m \in \mathbb N \, , \quad \big\|\partial_x^l[\mathcal{H},a]\, \partial_x^mf\big\|_{L^2} \lesssim \|\partial_x^{l+m}a\|_{L^{\infty}}\|f\|_{L^{2}} \, ,\end{equation} where $[\mathcal{H},a]\, g=\mathcal{H}(ag)-a\mathcal{H}g$. \end{lemma} Recall that \eqref{gnp} is the Gagliardo-Nirenberg inequality, which follows from complex interpolation and Sobolev embedding. Estimate \eqref{COMM1} in the case $\alpha=1$ is due to Calder\'on \cite{Ca}, see also Coifman and Meyer \cite{CM}, formula (1.1). Estimate \eqref{COMM1} in the case $0<\alpha<1$ is a consequence of Theorem A.8 in \cite{KPV} for functions depending only on $x$, with the following choice of parameters: $0<\alpha<1$, $\alpha_1=0$, $\alpha_2=\alpha$, $p=2$, $p_1=p_2=4$. Finally, estimate \eqref{Calderon} in the case $l=0$ and $m=1$ is the classical Calder\'on commutator estimate proved in \cite{Ca}. The general case was proved by Baj\v{s}anski and Coifman \cite{BaCo} (see also Lemma 3.1 of~\cite{DaMcPo} for a different proof). The following estimates are direct consequences of \eqref{gnp}-\eqref{Calderon}. \begin{lemma} For any $a,f \in \mathcal S(\mathbb R)$, \begin{equation}\label{m.1} \left| \int (D^1 f) f' a \right|\lesssim \| f\|_{L^2}^2 \|a''\|_{L^\infty}, \end{equation} and \begin{equation}\label{m.2} \left| \int (D^1 f) f a' - \int |D^{\frac 12} f|^2 a'\right|\lesssim \|D^{\frac 12} f\|_{L^2}^{\frac 32}\| f\|_{L^2}^{\frac 12} \|a''\|_{L^2}^{\frac 34}\|a'\|_{L^2}^{\frac 14}. 
\end{equation} \end{lemma} \begin{proof} First, \begin{align*} \int (D^1 f) f' a & = \int (\mathcal H f') f' a = - \int f' \mathcal H (f' a) = - \int (\mathcal H f') f' a - \int f' [\mathcal H,a] f'. \end{align*} Thus, \begin{align*} \int (D^1 f) f' a & = \frac 12 \int f \left([\mathcal H,a] f'\right)' , \end{align*} which, combined with the Cauchy-Schwarz inequality and \eqref{Calderon} (with $l=m=1$), implies \eqref{m.1}. Second, \begin{align*} \int (D^1 f) f a' & = \int (D^{\frac 12} f) D^{\frac 12} (f a') = \int |D^{\frac 12} f|^2 a' +\int (D^{\frac 12} f) [D^{\frac 12} , a'] f . \end{align*} By \eqref{COMM1} with $\alpha=\frac12$ and \eqref{gnp}, \begin{align*} \left| \int (D^1 f) f a' - \int |D^{\frac 12} f|^2 a'\right| & \lesssim \|D^{\frac 12} f\|_{L^2} \| f\|_{L^4} \| D^{\frac 12} a'\|_{L^4} \\ &\lesssim \|D^{\frac 12} f\|_{L^2}^{\frac 32}\| f\|_{L^2}^{\frac 12} \|D^1(a')\|_{L^2}^{\frac 12}\|D^{\frac 12}(a')\|_{L^2}^{\frac 12} \\&\lesssim \|D^{\frac 12} f\|_{L^2}^{\frac 32}\| f\|_{L^2}^{\frac 12} \|a''\|_{L^2}^{\frac 34}\|a'\|_{L^2}^{\frac 14}, \end{align*} which proves \eqref{m.2}. \end{proof} In the proof of Theorem~\ref{th1}, we will also need sharp localization arguments reminiscent of the identity and the smoothing effect first observed by Kato for the generalized KdV equations (see \cite{Ka} and also e.g. \cite{MaMejmpa}). Let \[\phi(x)=\frac 1{\pi} \int_{-\infty}^x \frac{d y}{1+y^2}.\] We recall the following estimates. \begin{lemma}[Lemmas 6 and 7 of \cite{KeMaRo}, Lemmas 2 and 3 of \cite{KeMa}]\label{le.commutateur} There exists $C_1>0$ such that, for any $f\in \mathcal S(\mathbb R)$, \begin{equation} \label{eq:41b} \left|\int (D^1 f)f' \phi\right|\leq C_1 \int f^2 \phi' , \end{equation} and \begin{equation}\label{eq:41c} \left| \int (D^1 f)f \phi' -\int \left| D^{\frac 1 2}\left( f\sqrt{\phi'} \right)\right|^2\right| \leq C_1 \int f^2 \phi'. 
\end{equation} \end{lemma} \begin{remark} When compared to \eqref{m.1}-\eqref{m.2}, the main point of the estimates \eqref{eq:41b}-\eqref{eq:41c} is to obtain error terms depending only on localized $L^2$ quantities. In return, such estimates require a special choice of the function $\phi$ (see \cite{KeMaRo}). The weak decay of $\phi'$ is a difficulty in Sect.~\ref{S4}, but due to the nonlocal nature of $D^1$, it is not clear whether \eqref{eq:41b}-\eqref{eq:41c} hold for weight functions decaying faster than $\phi'$ at $\infty$ (see also Lemma \ref{HY}). \end{remark} \section{Uniform estimates}\label{sect3} In this section, we define a specific sequence of global solutions with a rigid behavior, related to the desired minimal mass regime in Theorem \ref{th1}. \subsection{Uniform bounds for a sequence of solutions}\label{s:BS} Let $T_n=\frac1n$ for $n>1$ large. Let \begin{equation} \label{unTn} u_n^{in}(x)=\frac {1-a^{in}_n} {(\lambda^{in}_n)^{\frac12}} Q\left(\frac{x-x_n^{in}}{\lambda_n^{in}}\right) \quad \text{where} \quad x_n^{in}=-\ln(n), \end{equation} the parameter $\lambda_n^{in}\sim \frac 1n$ is to be fixed later and $a^{in}_n$ is uniquely chosen (depending on $\lambda_n^{in}$) so that (see \eqref{Ea}) \begin{equation}\label{ain} E(u_n^{in})=(\lambda_n^{in})^{-1}E((1-a^{in}_n) Q)=p_0,\quad a^{in}_n\sim \frac {p_0}{\int Q^2}\lambda_n^{in} \sim \frac {p_0}{\int Q^2}\frac 1n\,. \end{equation} (Recall that $p_0$ is defined in \eqref{nlprofile.2}.) Note that $\int (u_n^{in})^2=(1-a^{in}_n)^2 \int Q^2$ so that \begin{equation}\label{inittrois} \int Q^2 - \int (u_n^{in})^2 \sim \frac {2p_0}{n}. \end{equation} We consider the global solution $u_n$ of \eqref{mBO} corresponding to the data $u_n(T_n)=u_n^{in}$. For $n$ large, let $0<\tau_n\leq +\infty$ be such that $\mathcal{I}_n=[T_n,T_n+\tau_n)$ is the maximal time interval where $u_n$ satisfies \eqref{tube.2} ($\tau_n>0$ exists by \eqref{unTn} and by continuity of $t\mapsto u_n(t)$). 
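For the reader's convenience, the asymptotics \eqref{inittrois} simply comes from expanding the square in $\int (u_n^{in})^2=(1-a^{in}_n)^2\int Q^2$: \begin{displaymath} \int Q^2-\int (u_n^{in})^2=\big(2a^{in}_n-(a^{in}_n)^2\big)\int Q^2 \sim 2a^{in}_n\int Q^2 \, , \end{displaymath} combined with the asymptotics for $a^{in}_n$ in \eqref{ain}.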
By using Lemma~\ref{modulation}, we decompose $u_n(t)$ for $t \in \mathcal{I}_n$ as \begin{equation} \label{decomposition.1} u_n(t,x)=\frac1{\lambda^{\frac12}_n(t)} \left(Q_{b_n(t)} +\varepsilon_n\right)(t,y) \, , \quad y = \frac{x-x_n(t)}{\lambda_n(t)}, \end{equation} where $\lambda_n, \, x_n, \, b_n$ are $\mathcal{C}^1$ functions. We define the rescaled time variable $s$ by \begin{equation} \label{sn} s=s(t)=S_n+\int_{T_n}^t\frac{dt'}{\lambda_n^2(t')} \quad \text{with} \quad s(T_n)=S_n=-n \, . \end{equation} We consider all time-dependent functions (such as $\lambda_n, \, b_n, \, x_n$ and $\varepsilon_n$) interchangeably as functions of $t$ on $\mathcal I_n$ or of $s$ on the interval $\mathcal{J}_n=s(\mathcal{I}_n)$. From now on and until Sect.~\ref{S6} where we go back to the original variables $(t,x)$, we work for fixed $n$ and with the rescaled variables $(s,y)$. Note that by \eqref{unTn} and \eqref{modulation.1}, we have \begin{equation} \label{decomposition.2}\begin{aligned} & \lambda_n(S_n)=\lambda_n^{in}, \quad b_n(S_n)=b_n^{in}=-\lambda^{in}_n, \quad x_n(S_n)=x_n^{in},\\ & \varepsilon_n(S_n)= \varepsilon_n^{in}= -a^{in}_nQ - b_n^{in} P_{b_n^{in}}. \end{aligned}\end{equation} Fix \begin{equation}\label{theta.def} \frac 35 < \theta < \frac 23 \quad \hbox{and}\quad B=100 \, C_1\, , \end{equation} where $C_1>0$ is the universal constant in Lemma \ref{le.commutateur}. Let \begin{equation}\label{def.varphi} \phi(x)=\frac 1{\pi} \int_{-\infty}^x \frac{d y}{1+y^2}=\frac {\arctan x}{\pi} + \frac 12\,,\quad \varphi(s,y) = \frac {\phi\left(\frac {y}B +|s|^{\theta}\right)} {\phi\left(|s|^{\theta}\right)}\,, \end{equation} and \begin{equation}\label{defN} \mathcal N(\varepsilon_n)=\left( \int |D^{\frac 12} \varepsilon_n|^2 +\varepsilon_n^2 \varphi \,\right)^{\frac 12} .
\end{equation} Moreover, let \begin{equation}\label{defJ1} \rho(y)=\int_{-\infty}^y \Lambda Q(y') dy', \quad J_{n}(s)= \int \varepsilon_{n}(s,y) \rho(y) \chi(-y|s|^{-\frac 23}) dy, \end{equation} and \begin{equation}\label{defl01} \mu_{n}(s) =|1-J_{n}(s)|^{\frac 1{p_{0}}} \lambda_{n}(s). \end{equation} Note that $|J_{n}|\lesssim \int_{y<0} |\varepsilon_n|(1+|y|)^{-1} + \int_{0<y<2|s|^{\frac 23}}| \varepsilon_{n}|\lesssim |s|^{\frac 13} \| \varepsilon_{n}\|_{L^2}.$ Observe that by \eqref{decomposition.2}, \begin{equation}\label{epsilton.initial} \|\varepsilon_n(S_n)\|_{L^2} \lesssim {|S_n|^{-\frac 12}}, \quad \mathcal N(\varepsilon_n(S_n)) \lesssim |a_n^{in}|+|b_n^{in}| |S_n|^{\frac \theta2}\lesssim |S_n|^{-1+\frac \theta 2}. \end{equation} Let $C^{\star}>1$, $S_0<-1$ and $n_0 >1$ be constants to be fixed later. \begin{definition} \label{bootstrap} Let $n \ge n_0$ and $s \in \mathcal{J}_n$ with $s <S_0$. We say that $(\lambda_n, \, b_n, \, x_n,\varepsilon_n)$ satisfy the \textit{bootstrap estimates} at the time $s$ if \begin{equation} \label{bootstrap.1} \Big|\mu_n(s)-\frac1{|s|} \Big| \leq {|s|^{-\frac 76 + \frac \theta 4}}, \end{equation} \begin{equation} \label{bootstrap.2} \Big|\lambda_n(s)-\frac1{|s|} \Big|=\Big|b_n(s)-\frac1s \Big| \leq {|s|^{-\frac {13}{12} + \frac \theta 8}} \, , \end{equation} \begin{equation} \label{bootstrap.3} \Big|x_n(s)+\ln |s| \Big| \leq |s|^{-\frac 1{12}+ \frac \theta 8}, \end{equation} and \begin{equation} \label{bootstrap.4} \mathcal N(\varepsilon_n(s)) \le C^{\star} {|s|^{-1+\frac \theta2}} \, . \end{equation} \end{definition} \begin{definition} \label{bootstraptime} For $n \ge n_0$, we define (if this set is not empty), \begin{equation} S_n^{\star}:=\sup \big\{s \in \mathcal{J}_n \cap (S_n,S_0) \ : \ \text{\eqref{bootstrap.1}-\eqref{bootstrap.4} are satisfied on $[S_n,s]$} \,\big\} \,, \end{equation} and $\mathcal I_n^\star = [S_n,S_n^\star]$.
\end{definition} The main result of this section states that there exists at least one choice of $\lambda_n^{in}\sim\frac 1n$ such that $S_n^\star=S_0$ i.e. such that the bootstrap estimates in Definition \ref{bootstrap} are valid up to a time $S_0$ independent of~$n$. \begin{proposition} \label{bootstrapprop} There exist $C^\star>1$, $S_0<-1$ and $n_0>1$ such that, for all $n>n_0$, there exists $\lambda_n^{in}$ satisfying \begin{equation}\label{Snstar} \Big|\lambda_n^{in} - \frac 1{n}\Big|< n^{-\frac {13}{12} + \frac \theta 8} \quad \hbox{and}\quad S_n^{\star}= S_0. \end{equation} \end{proposition} We prove Proposition \ref{bootstrapprop} in Sections~\ref{S4} and~\ref{S5}. For the sake of simplicity, we will omit the subscript $n$ and write $x$, $\lambda$, $\mu$, $b$ and $\varepsilon$ instead of $x_n$, $\lambda_n$, $\mu_n$, $b_n$ and $\varepsilon_n$. \subsection{Consequences of the bootstrap bounds} For future reference, we state here some consequences of the bootstrap estimates. \begin{lemma} For all $s\in \mathcal I^\star$, \begin{equation}\label{BS.1} \lambda+|b|+\|\varepsilon\|_{L^2}^2\lesssim |s|^{-1}, \end{equation} \begin{equation}\label{BS.2} \left|\frac{x_s}\lambda -1 \right| + \left| \frac {\lambda_s}{\lambda} +b\right| \lesssim |s|^{- 2}+ \mathcal N (\varepsilon)\lesssim C^\star |s|^{-1+\frac \theta2}, \end{equation} \begin{equation}\label{BS.3} |b_s+b^2|=|\lambda| \left| \frac {\lambda_s}{\lambda} +b\right| \lesssim |s|^{-3}+ |s|^{-1}\mathcal N (\varepsilon)\lesssim C^\star |s|^{-2+\frac \theta2} , \end{equation} so that \begin{equation} \label{BS.5} |b_s| \le |b_s+b^2|+|b|^2 \lesssim C^{\star}|s|^{-2+\frac{\theta}2} , \end{equation} and \begin{equation}\label{BS.4} |(\varepsilon,Q')|+|(\varepsilon,\Lambda Q)|\lesssim |s|^{-1} ,\quad |(\varepsilon,Q)|\lesssim |s|^{-\frac 32}+ \mathcal N (\varepsilon)^2\lesssim (C^\star)^2 |s|^{-2+\theta}. 
\end{equation} \end{lemma} \begin{proof} From \eqref{ain} and the definition of $b$ in \eqref{modulation.1}, we have $b=-\lambda$. Thus, these estimates are direct consequences of the definition of $S^\star_n$ and Lemma~\ref{modulation}. \end{proof} \begin{remark}\label{rk:BS} We note that the estimate on the full $L^2$ norm of $ \varepsilon$ given by \eqref{BS.1} cannot be improved. In contrast, the $\dot H^{\frac 12}$ norm and local $L^2$ norm of $ \varepsilon$ have a better decay as $t\downarrow 0$ (see \eqref{bootstrap.4}). This phenomenon seems specific to KdV-type equations and requires the use of local norms and estimates, which, as mentioned in the Introduction, are delicate to combine with the nonlocal operator $D^1$. See Lemma~\ref{le.commutateur}. Note that in the present paper, we localize only the $L^2$ term in the definition of the energy and local norm (see \eqref{defN} and \eqref{defF} below). See also \cite{CoMa} on such questions. \end{remark} \section{Control of $\mathcal N(\varepsilon)$}\label{S4} In this section, we close the estimates for $\mathcal N(\varepsilon)$, i.e., we strictly improve \eqref{bootstrap.4} under the bootstrap assumptions. We consider the following main functionals: \begin{equation}\label{defF} F = \int |D^{\frac 12}\varepsilon|^2 +\varepsilon^2 \varphi - \frac 12 \left(\left( Q_{b}+\varepsilon\right)^4-Q_{b}^4- 4 Q_{b}^3 \varepsilon\right) ; \end{equation} \begin{equation}\label{defG} G = \int \psi \varepsilon^2 \quad \hbox{where}\quad \psi(y)= |s|^{\frac 12} \pi\left( \phi\left(y|s|^{-\frac 12}\right)-\frac 12\right) \chi^2(y|s|^{-\frac 35}), \end{equation} \begin{equation}\label{defK} K = \int \varepsilon \mathcal L_\varphi P_b \quad \hbox{where} \quad \mathcal L_\varphi f = D^1 f + f \varphi -3Q^2 f\,. \end{equation} The functional $F$ is a localized energy functional in $\varepsilon$.
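Note that, expanding the quartic term in \eqref{defF},
\begin{equation*}
\left( Q_{b}+\varepsilon\right)^4-Q_{b}^4- 4 Q_{b}^3 \varepsilon = 6 Q_b^2 \varepsilon^2 + 4 Q_b \varepsilon^3 + \varepsilon^4,
\end{equation*}
so that
\begin{equation*}
F = \int \left( |D^{\frac 12}\varepsilon|^2 +\varepsilon^2 \varphi - 3 Q_b^2 \varepsilon^2 - 2 Q_b \varepsilon^3 - \frac 12 \varepsilon^4 \right).
\end{equation*}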
At the quadratic order in $\varepsilon$, it is similar to $(\mathcal L\varepsilon,\varepsilon)$, except for the localization factor $\varphi$ on the $L^2$ term, as in $(\mathcal L_\varphi \varepsilon, \varepsilon)$. The functional $G$ is related to a Virial identity and is useful to cancel some critical terms in $\frac {d F} {ds}$, as first observed for the (NLS) equation in \cite{RaSz}; see Lemma \ref{le:GG}. The functional $K$ appears as an error term in the variation of $E$, but we will see in Lemma \ref{le:KK} that it enjoys a special algebra related to the scaling variation. \subsection{Energy-type estimate} \begin{lemma}\label{le:FF} There exists $C_0>0$ independent of $C^{\star}$ such that for $|S_0|$ large enough, depending on $C^\star$, and for all $s\in \mathcal I^\star$, \begin{equation}\label{eq:F}\begin{aligned} &\frac {d}{ds}\left( \frac F\lambda\right) +\frac 1{4\lambda} \int \varepsilon^2 \varphi_y \leq {C_0}\, C^\star|s|^{-2+\theta } - A -\left(\frac{\lambda_s}{\lambda} + b\right) \left(\frac{F}\lambda - K + \frac Z\lambda\right), \end{aligned}\end{equation} where \begin{align}\label{defA} A&= \int \varepsilon^2 \varphi -6 \int \varepsilon^2 Q \Lambda Q + 2 \left(\frac{\lambda_s}{\lambda} + b\right) \int \varepsilon y \Lambda Q + 2 \left(\frac{x_s}{\lambda} -1\right) \int \varepsilon y Q' \, , \\ \label{defZ} Z &= \int \varepsilon^2 y \varphi_y \chi_1 -2 \int \varepsilon \Lambda Q (\varphi-1) \chi_2 \end{align} and \begin{equation} \chi_1(s,y)=1-\chi(y |s|^{-2/3}), \quad \chi_2(s,y)=1-\chi\left(y \frac 4 B |s|^{-\theta}\right).\label{chi1chi2} \end{equation} \end{lemma} \begin{proof} The following notation will be used: \begin{equation}\label{defV}\begin{aligned} &V=V(\varepsilon)=D^1 \varepsilon + \varepsilon - \left((Q_b+\varepsilon)^3 - Q_b^3\right),\quad \\&V_\varphi=V_\varphi(\varepsilon) = D^1 \varepsilon + \varepsilon \varphi - \left((Q_b+\varepsilon)^3 - Q_b^3\right).
\end{aligned}\end{equation} We compute using \eqref{modulation.4}, \begin{align*} \lambda \frac d{ds} \left(\frac F\lambda\right) & = 2 \int \varepsilon_s V_\varphi - \frac {\lambda_s}\lambda F -2 \int \partial_s Q_b \left( (Q_b+\varepsilon)^3 - Q_b^3 - 3 Q_b^2 \varepsilon\right) + \int \varepsilon^2 \partial_s \varphi \\ & = f_1+f_2+f_3+f_4+f_5 \, , \end{align*} where \begin{align*} & f_1 = 2 \int V_y V_\varphi \, , \\ & f_2 = 2 \frac{\lambda_s}{\lambda} \int (\Lambda \varepsilon) V_\varphi - \frac {\lambda_s}\lambda F + 2(\frac {\lambda_s}{\lambda} + b) \int (\Lambda Q_b) V_\varphi - 2 (b_s+b^2) \int \frac{\partial Q_b}{\partial b} V_\varphi \, ,\\ & f_3 = 2(\frac {x_s}\lambda-1) \int (Q_b+\varepsilon)_y V_\varphi \, , \\ & f_4 = 2 \int \Psi_b V_\varphi \, , \\ & f_5 = -2 \int (Q_b)_s \left( (Q_b+\varepsilon)^3 - Q_b^3 - 3 Q_b^2 \varepsilon\right) + \int \varepsilon^2 \partial_s \varphi \, . \end{align*} In the rest of the proof, we estimate the terms $f_1$, \ldots, $f_5$, taking $|S_0|$ large enough, possibly depending on $C^\star$. For future reference, note that \begin{equation}\label{def.Dvarphi} \varphi_y(s,y)=\partial_y \varphi(s,y)= \frac 1{B} \frac{\phi'\left(\frac {y}B +|s|^{\theta}\right)}{\phi\left(|s|^{\theta}\right)} = \frac 1{B\pi} \frac{1}{\phi\left(|s|^{\theta}\right)}\frac 1{1+\left(\frac yB + |s|^{\theta}\right)^2}\,. \end{equation} \noindent \emph{Estimate for $f_1$.} Integrating by parts, we have \[ f_1 = 2 \int V_y V_\varphi =2 \int V_y V + 2 \int V_y (V_\varphi-V) = 2 \int V_y \varepsilon (\varphi-1) . \] Note that by integration by parts, \begin{align*} & 2 \int \left(D^1 \varepsilon\right)_y \varepsilon (\varphi-1) = 2 \int \left(D^1 \varepsilon\right)_y \varepsilon \varphi= -2 \int \left(D^1 \varepsilon\right)\left( \varepsilon_y \varphi +\varepsilon \varphi_y\right).
\end{align*} By the change of variable $y' = \frac y{B}+|s|^\theta$, we have from Lemma \ref{le.commutateur}, \[ \left|\int \left(D^1 \varepsilon\right) \varepsilon_y \varphi \right| \leq \frac{ C_1} B \int \varepsilon^2\varphi_y \] and \[ \left|\int \left(D^1 \varepsilon\right) \varepsilon \varphi_y - \int \left| D^{\frac 1 2}\left( \varepsilon\sqrt{\varphi_y}\right)\right|^2\right| \le \frac {C_1} B \int \varepsilon^2 \varphi_y \] Recall that $B = 100 C_1$. Thus, \begin{align*} 2 \int \left(D^1 \varepsilon\right)_y \varepsilon (\varphi-1) \leq - 2\int \left| D^{\frac 1 2}\left( \varepsilon\sqrt{\varphi_y}\right)\right|^2 +\frac 1{25} \int \varepsilon^2 \varphi_y. \end{align*} Next, integrating by parts, \[ 2 \int \varepsilon_y \varepsilon (\varphi-1) = - \int \varepsilon^2 \varphi_y, \] and \begin{align*} \int \big((Q_b& +\varepsilon)^3-Q_b^3\big)_y \varepsilon (\varphi-1)\\ &=-\int \left((Q_b+\varepsilon)^3-Q_b^3\right) \varepsilon_y (\varphi-1)-\int \left((Q_b+\varepsilon)^3-Q_b^3\right) \varepsilon \varphi_y\\ & = -\frac 14 \int \left( {(Q_b+\varepsilon)^4} - 4 Q_b^3 \varepsilon - {Q_b^4} \right)_y (\varphi-1) \\ &\quad+ \int (Q_b)_y \left( (Q_b+\varepsilon)^3-3 \varepsilon Q_b^2-Q_b^3\right) (\varphi-1) -\int \left((Q_b+\varepsilon)^3-Q_b^3\right) \varepsilon \varphi_y\\ & = \frac 14 \int \left( {(Q_b+\varepsilon)^4} - 4 Q_b^3 \varepsilon - {Q_b^4} - 4 \left((Q_b+\varepsilon)^3-Q_b^3\right) \varepsilon\right) \varphi_y \\ &\quad +\int (Q_b)_y \left( (Q_b+\varepsilon)^3-3 \varepsilon Q_b^2-Q_b^3\right) (\varphi-1). \end{align*} Note that \[ \left| {(Q_b+\varepsilon)^4} - 4 Q_b^3 \varepsilon - {Q_b^4} - 4 \left((Q_b+\varepsilon)^3-Q_b^3\right) \varepsilon\right|\lesssim \varepsilon^4 + \varepsilon^2 Q_b^2. 
\] First, by Cauchy-Schwarz inequality and then \eqref{gnp}, we have \begin{align*} \int \varepsilon^4 \varphi_y& = \int \left|\varepsilon \sqrt{\varphi_y}\right|^2 \varepsilon^2 \lesssim \left( \int \left|\varepsilon \sqrt{\varphi_y}\right|^4\right)^{\frac 12} \left(\int \varepsilon^4 \right)^{\frac 12} \\& \lesssim \left( \int \left|\varepsilon \sqrt{\varphi_y}\right|^2\right)^{\frac 12} \left( \int \left|D^{\frac 12}\left(\varepsilon \sqrt{\varphi_y}\right)\right|^2\right)^{\frac 12} \left(\int \varepsilon^2 \right)^{\frac 12}\left(\int \left|D^{\frac 12}\varepsilon\right|^2 \right)^{\frac 12}\\ & \lesssim \frac 1 {|s|} \left( \int \left|D^{\frac 12}\left(\varepsilon \sqrt{\varphi_y}\right)\right|^2 + \int \varepsilon^2 {\varphi_y}\right). \end{align*} Second, since \[ Q^2 \varphi_y \lesssim \left\{\begin{aligned} & |s|^{-2\theta} Q^2 \lesssim |s|^{-2 \theta} \varphi &\quad \hbox{for $y>-|s|^{\frac 14}$},\\ & |s|^{-1} \varphi_y &\quad \hbox{for $y<-|s|^{\frac 14}$}, \end{aligned} \right. \] we have \[\int \varepsilon^2 Q_b^2 \varphi_y \lesssim \int \varepsilon^2 (Q^2+b^2) \varphi_y \lesssim |s|^{-2\theta} \mathcal N (\varepsilon)^2+ |s|^{-1} \int |\varepsilon|^2 \varphi_y. \] Thus, for $|S_0|$ large enough, \begin{align*} &\left|\int \left( {(Q_b+\varepsilon)^4} - 4Q_b^3 \varepsilon - {Q_b^4} - 4\left((Q_b+\varepsilon)^3-Q_b^3\right) \varepsilon\right) \varphi_y\right|\\ & \leq \frac 1{100} \left(\int \left| D^{\frac 1 2}\left( \varepsilon\sqrt{\varphi_y}\right)\right|^2 + \int \varepsilon^2 \varphi_y\right) + C|s|^{-2\theta}\mathcal N(\varepsilon)^2. \end{align*} Similarly, \[ \left|(Q_b+\varepsilon)^3-3 Q_b^2\varepsilon -Q_b^3\right|\lesssim |\varepsilon|^3 + \varepsilon^2 . 
\] Since $(Q_b)_y = Q' + b P' \chi_b + b P \chi_b'$, \[ |(Q_b)_y|\lesssim |Q'|+|b|Q + b^2\mathbf{1}_{[-2 |b|^{-1},- |b|^{-1}]} \,, \] we first observe that \begin{align*} \int_{y< -\frac B 2 |s|^{\theta}} |(Q_b)_y| \left(|\varepsilon|^3+\varepsilon^2\right) \lesssim |s|^{-3\theta} \int \left(|\varepsilon|^3+\varepsilon^2\right) \lesssim |s|^{-1-3\theta}. \end{align*} Second, by the definition of $\varphi$, \begin{equation}\label{phi_B} \hbox{for $y>-\frac B 2 |s|^{\theta}$},\quad |1-\varphi(y)|=|\varphi(0)-\varphi(y)|\lesssim |y| \max_{[-\frac B 2 |s|^{\theta},+\infty)} \varphi_y \lesssim |y| |s|^{-2\theta}, \end{equation} and so, since $|y|Q\lesssim Q^{\frac 12} \lesssim \varphi$ and $\int |\varepsilon|^3 \varphi \lesssim \int \varepsilon^2 \varphi + \int \varepsilon^4 \lesssim \mathcal N(\varepsilon)^2$, \[ \int_{y> -\frac B 2 |s|^{\theta}} |(Q_b)_y| \left(|\varepsilon|^3+\varepsilon^2\right) |1-\varphi| \lesssim |s|^{-2\theta} \int (|\varepsilon|^3 + \varepsilon^2 ) \varphi \lesssim |s|^{-2\theta} \mathcal N(\varepsilon)^2. \] In conclusion, since $\frac35<\theta<\frac23$, we obtain for $|S_0|$ large enough, \begin{equation}\label{eq:f1} f_1 \leq - \int \left( D^{\frac 1 2}\left( \varepsilon\sqrt{\varphi_y}\right)\right)^2 -\frac 12 \int \varepsilon^2 \varphi_y + C |s|^{-3+\theta}. \end{equation} \noindent \emph{Estimate for $f_2$.} Since $ \mathcal H(y \varepsilon_y) = y \mathcal H( \varepsilon_y)$ (using $\int \varepsilon_y=0$), we have \[\int y \varepsilon_y (D^1 \varepsilon) = \int y \varepsilon_y \mathcal H(\varepsilon_y) = - \int \mathcal H(y \varepsilon_y)(\varepsilon_y) = - \int y \mathcal H(\varepsilon_y) (\varepsilon_y) =0,\] and so \[ 2 \int (\Lambda \varepsilon) (D^1 \varepsilon)= \int |D^{\frac 12} \varepsilon|^2 . \] By integration by parts, \[ 2 \int (\Lambda \varepsilon) \varepsilon\varphi= -\int \varepsilon^2 y \varphi_y.
\] Moreover, \[ - 2 \int \Lambda \varepsilon \left((Q_b+\varepsilon)^3-Q_b^3\right) =- \int \varepsilon \left((Q_b+\varepsilon)^3-Q_b^3\right) - 2 \int y\varepsilon_y \left((Q_b+\varepsilon)^3-Q_b^3\right)\] where, integrating by parts, \begin{align*} &- 2 \int y\varepsilon_y \left((Q_b+\varepsilon)^3-Q_b^3\right)\\ & =-\frac 12 \int y \left(\left( Q_{b}+\varepsilon\right)^4-Q_{b}^4- 4 Q_{b}^3 \varepsilon\right)_y +2 \int y (Q_b)_y \left(\left( Q_{b}+\varepsilon\right)^3-Q_{b}^3- 3 Q_{b}^2 \varepsilon \right)\\ & =\frac 12 \int \left(\left( Q_{b}+\varepsilon\right)^4-Q_{b}^4- 4 Q_{b}^3 \varepsilon\right) +2 \int y (Q_b)_y \left(\left( Q_{b}+\varepsilon\right)^3-Q_{b}^3- 3 Q_{b}^2 \varepsilon \right). \end{align*} Note that \begin{align*} &- \int \varepsilon \left((Q_b+\varepsilon)^3-Q_b^3\right)+ \int \left(\left( Q_{b}+\varepsilon\right)^4-Q_{b}^4- 4 Q_{b}^3 \varepsilon\right) +2 \int y (Q_b)_y \left(\left( Q_{b}+\varepsilon\right)^3-Q_{b}^3- 3 Q_{b}^2 \varepsilon \right)\\ &= 6 \int \varepsilon^2 Q_b \Lambda Q_b + 2 \int \varepsilon^3 \Lambda Q_b . 
\end{align*} Therefore, for the first part of $f_2$, we obtain \begin{align*} 2 \frac{\lambda_s}{\lambda} \int (\Lambda \varepsilon) V_\varphi - \frac {\lambda_s} \lambda F &= \frac{\lambda_s}{\lambda} \left( -\int \varepsilon^2 y \varphi_y - \int \varepsilon^2 \varphi +6 \int \varepsilon^2 Q_b \Lambda Q_b + 2 \int \varepsilon^3 \Lambda Q_b \right)\\ &= - \frac{\lambda_s}{\lambda} \int \varepsilon^2 y \varphi_y \\ & + (\frac {\lambda_s}{\lambda} + b) \left( - \int \varepsilon^2 \varphi +6 \int \varepsilon^2 Q_b \Lambda Q_b + 2 \int \varepsilon^3 \Lambda Q_b \right) \\& -b \left( - \int \varepsilon^2 \varphi +6 \int \varepsilon^2 Q_b \Lambda Q_b + 2 \int \varepsilon^3 \Lambda Q_b \right). \end{align*} Then, we observe that \begin{align*} 2 \int (\Lambda Q_b) V_\varphi &= 2 \int (\Lambda Q_b) (D^1 \varepsilon + \varepsilon \varphi - 3 Q_b^2 \varepsilon - 3 Q_b \varepsilon^2 - \varepsilon^3)\\ &= 2 \int \varepsilon (D^1 (\Lambda Q_b) + \Lambda Q_b \varphi - 3 Q_b^2 \Lambda Q_b) - 6 \int \varepsilon^2 Q_b \Lambda Q_b - 2 \int \varepsilon^3 \Lambda Q_b. \end{align*} Thus, summing these two expressions, we obtain \begin{align*} f_2 &= - \frac{\lambda_s}{\lambda} \int \varepsilon^2 y \varphi_y \\ & +2 (\frac {\lambda_s}{\lambda} + b) \left[\int (D^1 (\Lambda Q_b) + \Lambda Q_b \varphi - 3 Q_b^2 \Lambda Q_b) \varepsilon -\frac 12 \int \varepsilon^2 \varphi\right]\\ & - 2 (b_s+b^2) \int \frac{\partial Q_b}{\partial b} V_\varphi \\& -b \left( - \int \varepsilon^2 \varphi +6 \int \varepsilon^2 Q_b \Lambda Q_b + 2 \int \varepsilon^3 \Lambda Q_b \right)\\ &= f_{2,1}+f_{2,2}+f_{2,3}+f_{2,4}. \end{align*} First, we split $f_{2,1}$ into two parts: \begin{equation}\label{e.f21} f_{2,1} =- \left(\frac{\lambda_s}{\lambda}+b\right) \int \varepsilon^2 y \varphi_y + b \int \varepsilon^2 y \varphi_y.
\end{equation} We split the first term in the right-hand side of \eqref{e.f21} using $Z_1=\int \varepsilon^2 y \varphi_y \chi_1$, where $\chi_1$ is defined in \eqref{chi1chi2}: \[ \int \varepsilon^2 y \varphi_y = Z_1 + \int \varepsilon^2 y \varphi_y \chi( y |s|^{-2/3}). \] Note that \begin{align*} \left|\int \varepsilon^2 y \varphi_y \chi( y |s|^{-2/3})\right| &\lesssim \int_{|y|<2|s|^{2/3}} |y| \varphi_y \varepsilon^2+ \int_{y>2|s|^{2/3}} |y| \varphi_y \varepsilon^2\\ &\lesssim |s|^{\frac 23} \int \varepsilon^2\varphi_y + |s|^{-2/3} \mathcal N(\varepsilon)^2, \end{align*} and thus by \eqref{BS.2}, \begin{align*} \left|\frac{\lambda_s}{\lambda}+b\right|\left|\int \varepsilon^2 y \varphi_y \chi( y |s|^{-2/3})\right| &\lesssim C^\star |s|^{-\frac 13+\frac \theta 2} \int \varepsilon^2\varphi_y + (C^\star)^3 |s|^{-\frac {11} 3+\frac 32 \theta}\\ &\lesssim o(1) \int \varepsilon^2\varphi_y + |s|^{-3+\theta}. \end{align*} For the second term on the right-hand side of \eqref{e.f21}, we see that \begin{align*} |b| \int \varepsilon^2 |y| \varphi_y &\lesssim |s|^{-\frac 12 (1-\theta)} \int_{|y|< |s|^{\frac 12(1+\theta)}}\varepsilon^2 \varphi_y +|s|^{-1} \int_{|y|> |s|^{\frac 12(1+\theta)}} \varepsilon^2 |y \varphi_y|\\ & \lesssim |s|^{-\frac 12 (1-\theta)} \int \varepsilon^2 \varphi_y +|s|^{-1}\int_{|y|> |s|^{\frac 12(1+\theta)}} \frac{\varepsilon^2}{|y|} \lesssim |s|^{-\frac 12 (1-\theta)} \int \varepsilon^2 \varphi_y +|s|^{-\frac 52 - \frac\theta2}.\end{align*} Thus, \[ f_{2,1} = - \left(\frac{\lambda_s}{\lambda}+b\right) Z_1+ o(1) \int \varepsilon^2 \varphi_y +\mathcal O(|s|^{-3+\theta})\quad \hbox{where}\quad Z_1 = \int \varepsilon^2 y \varphi_y \chi_1. \] Second, we compute $D^1 (\Lambda Q_b) + \Lambda Q_b \varphi - 3 Q_b^2 \Lambda Q_b$ to simplify the expression of $f_{2,2}$. First, we claim that for any function $f$, \begin{equation}\label{calculs.L} \mathcal L(f')=(\mathcal Lf)'+6QQ'f,\quad \mathcal L(\Lambda f)=\frac 32 \mathcal Lf + y (\mathcal Lf)' +6yQQ'f-f + 3Q^2 f.
\end{equation} The first identity follows directly from the definition of $\mathcal L$. For the second one, we proceed as follows \begin{align*} \mathcal L(\Lambda f)& = \frac 12 \mathcal Lf + \mathcal H (yf')'+yf'-3Q^2 yf' =\frac 32 \mathcal Lf + \mathcal H (yf'') -f + 3Q^2 f +yf'-3Q^2 yf'\\ & =\frac 32 \mathcal Lf + y (\mathcal H (f'))' -f + 3Q^2 f +yf'-3Q^2 yf'\\ & =\frac 32 \mathcal Lf + y (\mathcal Lf)' -f + 3Q^2 f+6yQQ'f. \end{align*} Since $\mathcal L\Lambda Q = -Q$ and $Q_b = Q+bP_b$, we have \begin{align*} &D^1 (\Lambda Q_b) + \Lambda Q_b \varphi - 3 Q^2 \Lambda Q_b= \mathcal L(\Lambda Q) + b \mathcal L(\Lambda P_b) + (\Lambda Q+b\Lambda P_b)(\varphi-1)\\ & = - Q + \frac 32 b \mathcal L P_b + by(\mathcal LP_b)_y - bP_b +3bQ^2 P_b + 6 by QQ'P_b + (\Lambda Q+b\Lambda P_b)(\varphi-1). \end{align*} We use the notation from the proof of Lemma \ref{lprofile}, $\Psi_1 = (\mathcal LP_b - \mathcal LP)_y = (\mathcal LP_b)_y - \Lambda Q.$ Thus, \begin{align*} &D^1 (\Lambda Q_b) + \Lambda Q_b \varphi - 3 Q^2 \Lambda Q_b\\ & = - Q + by\Lambda Q +by\Psi_1 +\frac 32 b \mathcal L P_b - bP_b +3bQ^2 P_b + 6b y QQ'P_b + (\Lambda Q+b\Lambda P_b)(\varphi-1). \end{align*} Then, \begin{align*} &D^1 (\Lambda Q_b) + \Lambda Q_b \varphi - 3 Q^2 \Lambda Q_b\\ & = - Q + by\Lambda Q +by\Psi_1 +\frac 32 b \mathcal L_\varphi P_b - b\varphi P_b +3bQ^2 P_b + 6b y QQ'P_b + (\Lambda Q+by(P_b)_y)(\varphi-1). \end{align*} Next, we see that \begin{align*} - 3 (Q_b^2-Q^2) \Lambda Q_b = -6 b Q \Lambda Q P_b -b^2 \left(\frac 92 QP_b^2 + 6yQP_b(P_b)_y + 3yQ'P_b^2\right) -3 b^3 P_b^2 \Lambda P_b. \end{align*} Combining these computations, we obtain \begin{align*} &D^1 (\Lambda Q_b) + \Lambda Q_b \varphi - 3 Q_b^2 \Lambda Q_b\\ & = - Q + b \left(y\Lambda Q +\frac 32 \mathcal L_\varphi P_b +y P\chi_b' \varphi - P_b \varphi\right)+by(\Psi_1-P\chi_b') + (\Lambda Q+byP'\chi_b)(\varphi-1)\\ &-b^2 \left(\frac 92 QP_b^2 + 6yQP_b(P_b)_y + 3yQ'P_b^2\right) -3 b^3 P_b^2 \Lambda P_b . 
\end{align*} We claim the following estimates: \begin{align} & \| \mathcal L_\varphi \frac {\partial Q_b}{\partial b} - (\mathcal L_\varphi P_b + y P \chi_b'\varphi)\|_{L^2} \lesssim |s|^{-\frac 12}, \label{cl.1}\\ & \mathcal L_\varphi P_b - P_b \varphi = R_b,\label{cl.2}\\ & \|b y (\Psi_1 - P \chi_b') \|_{L^2}\lesssim |b|^{\frac32} ,\label{cl.3}\\ & \|QP_b^2\|_{L^2} + \|yQP_b(P_b)_y \|_{L^2}+ \|yQ'P_b^2\|_{L^2}+ |b|^\frac 12 \|P_b^2 \Lambda P_b\|_{L^2} \lesssim1.\label{cl.4} \end{align} Indeed, by using \eqref{def.DQb}, \[ \mathcal L_\varphi \frac {\partial Q_b}{\partial b} - (\mathcal L_\varphi P_b + y P \chi_b'\varphi) = D^1 (yP\chi_b')-3 Q^2 yP \chi_b' \] and $\|D^1 (yP\chi_b')\|_{L^2} \lesssim \|P\chi_b'\|_{L^2}+\|yP'\chi_b'\|_{L^2}+\|yP\chi_b''\|_{L^2}\lesssim |b|^{\frac 12},$ $\|Q^2 yP \chi_b'\|_{L^2}\lesssim |b|^{\frac 52}$, which proves \eqref{cl.1}. Next, \eqref{cl.2} is a simple consequence of the definitions of $\mathcal L_\varphi$ and $R_b$ in \eqref{defK} and \eqref{Rb}. Then, estimate \eqref{cl.3} is proved as in \eqref{lprofile.205}. Finally, \eqref{cl.4} is straightforward. Inserting \eqref{cl.1}-\eqref{cl.4}, we obtain \begin{align*} &D^1 (\Lambda Q_b) + \Lambda Q_b \varphi - 3 Q_b^2 \Lambda Q_b\\ & = - Q +b R_b + b y\Lambda Q + b \mathcal L_\varphi \frac {\partial Q_b} {\partial b} -\frac b2 \mathcal L_{\varphi} P_b + (\Lambda Q+byP'\chi_b)(\varphi-1) +\mathcal O_{L^2} (|s|^{-\frac 32}), \end{align*} and thus, using \eqref{modulation.6} and $\|\varepsilon\|_{L^2} \lesssim |s|^{-\frac 12}$, \begin{align*} &\int (D^1 (\Lambda Q_b) + \Lambda Q_b \varphi - 3 Q_b^2 \Lambda Q_b) \varepsilon -\frac 12 \int \varepsilon^2 \varphi \\ &\quad = -\frac 12 F -\frac b2 K + b \int \varepsilon \mathcal L_\varphi \frac {\partial Q_b} {\partial b} +b \int \varepsilon y \Lambda Q+ \int (\Lambda Q+byP'\chi_b)(\varphi-1)\varepsilon + \mathcal O(|s|^{-2}\ln |s|).
\end{align*} Concerning $f_{2,3}$, by the definition of $V_\varphi$, one has \begin{align*} \int \frac{\partial Q_b}{\partial b} V_\varphi = \int \varepsilon \mathcal L_\varphi \frac{\partial Q_b}{\partial b} - \int \frac{\partial Q_b}{\partial b} \left((6bQP_b+3b^2P_b^2) \varepsilon+3 Q_b \varepsilon^2+ \varepsilon^3\right) \end{align*} One sees easily by using \eqref{def.DQb} that \[ \left| \int \frac{\partial Q_b}{\partial b} \left((6bQP_b+3b^2P_b^2) \varepsilon+3 Q_b \varepsilon^2+ \varepsilon^3\right) \right| \lesssim |s|^{-1} \mathcal N(\varepsilon) + \mathcal N(\varepsilon)^2 \lesssim (C^\star)^2 |s|^{-2+\theta}, \] and thus by \eqref{BS.2}, since $b=-\lambda$, \[ f_{2,3} =- 2 b \left(\frac {\lambda_s}{\lambda}+b\right)\int \varepsilon\mathcal L_\varphi \frac{\partial Q_b}{\partial b} + \mathcal O( (C^\star)^3 |s|^{-4+\frac 32\theta}). \] For $f_{2,2}$ and $f_{2,3}$, we thus obtain \begin{align*} f_{2,2}+f_{2,3} &= -\left(\frac{\lambda_s}{\lambda} + b\right) (F + b K ) +2 b \left(\frac{\lambda_s}{\lambda} + b\right) \int \varepsilon y \Lambda Q\\ &+2 \left(\frac{\lambda_s}{\lambda} + b\right) \int (\Lambda Q+byP'\chi_b)(\varphi-1)\varepsilon + \mathcal O( |s|^{-3+\theta}). \end{align*} We split the third term in the right-hand side as follows \begin{align*} &2 \left(\frac{\lambda_s}{\lambda} + b\right) \int (\Lambda Q+byP'\chi_b)(\varphi-1)\varepsilon \\&= - \left(\frac{\lambda_s}{\lambda} + b\right)Z_2 + 2 \left(\frac{\lambda_s}{\lambda} + b\right) \int \left(\Lambda Q\chi\left( y \frac4B |s|^{-\theta}\right)+ byP'\chi_b\right)(\varphi-1)\varepsilon \end{align*} where $Z_2 = - 2 \int \Lambda Q(\varphi-1)\varepsilon \chi_2$. It follows from \eqref{phi_B} and $|\Lambda Q|\lesssim (1+|y|)^{-2}$ that \[ \left| \int \Lambda Q\chi\left( y \frac4B |s|^{-\theta}\right)(\varphi-1)\varepsilon \right| \lesssim |s|^{-2\theta}\int_{y>- \frac B2|s|^{\theta}} \frac {|\varepsilon|}{1+|y|} \lesssim |s|^{-2\theta} \|\varepsilon\|_{L^2} \lesssim |s|^{-2\theta-\frac 12}. 
\] Thus, by \eqref{BS.2}, since $\theta>3/5$, for $|S_0|$ large enough, \[ \left|\left(\frac{\lambda_s}{\lambda} + b\right) \int \Lambda Q\chi\left( y \frac4B |s|^{-\theta}\right)(\varphi-1)\varepsilon \right| \lesssim C^\star |s|^{-\frac 32 -\frac 32\theta} \lesssim |s|^{-3+\theta}. \] Next, since $P'\in \mathcal Y_2$, using again \eqref{phi_B}, \begin{align*} \left| \int byP'\chi_b (\varphi-1)\varepsilon\right|&\lesssim |s|^{-1-2\theta} \int_{|y|< \frac B2|s|^{\theta}} |\varepsilon| +|s|^{-1} \int_{|y|> \frac B2|s|^{\theta}} \frac{|\varepsilon|}{|y|} \\ &\lesssim |s|^{-1-\frac 32 \theta} \|\varepsilon\|_{L^2} + |s|^{-1-\frac \theta2} \|\varepsilon\|_{L^2} \lesssim |s|^{-\frac 32-\frac \theta2}. \end{align*} Thus, by \eqref{BS.2}, since $\theta>\frac 12$, we obtain \[ \left|\left(\frac{\lambda_s}{\lambda} + b\right) \int byP'\chi_b (\varphi-1)\varepsilon\right| \lesssim C^\star |s|^{-\frac 52}\lesssim |s|^{-3+\theta}. \] In conclusion for $f_{2,2}$ and $f_{2,3}$, we obtain \begin{align*} f_{2,2}+f_{2,3} = -\left(\frac{\lambda_s}{\lambda} + b\right) (F + b K +Z_2 ) +2 b \left(\frac{\lambda_s}{\lambda} + b\right) \int \varepsilon y \Lambda Q + \mathcal O( |s|^{-3+\theta}). \end{align*} where $Z_2 = - 2 \int \Lambda Q(\varphi-1)\varepsilon \chi_2$. For $f_{2,4}$, we claim \begin{equation}\label{cl.30} f_{2,4} = b \left( \int \varepsilon^2 \varphi -6 \int \varepsilon^2 Q \Lambda Q \right) +\mathcal O(|s|^{-3+\theta}). \end{equation} Indeed, we check that \[ |b| \int \varepsilon^2 |Q_b \Lambda Q_b-Q\Lambda Q| \lesssim |b|^2 \int |\varepsilon|^2 \lesssim |s|^{-3}, \] and \[ |b| \left|\int \varepsilon^3 \Lambda Q_b \right|\lesssim |b|\int |\varepsilon|^3\lesssim |b| \left(\int |\varepsilon|^2 \right)\left(\int |D^{\frac 12} \varepsilon|^2\right)^{\frac 12} \lesssim C^\star |s|^{-3 + \frac \theta 2}. 
\] In conclusion for $f_2$, we obtain \begin{align} \label{eq:f2} f_2 &= -\left(\frac{\lambda_s}{\lambda} + b\right) (F + b K +Z )\nonumber \\ & +b \left( \int \varepsilon^2 \varphi -6 \int \varepsilon^2 Q \Lambda Q + 2 \left(\frac{\lambda_s}{\lambda} + b\right) \int \varepsilon y \Lambda Q\right)\\ &+ o(1) \int \varepsilon^2 \varphi_y+\mathcal O(|s|^{-3+\theta}). \nonumber \end{align} \noindent \emph{Estimate for $f_3$.} By integration by parts, we have \begin{align*} 2 \int (Q_b+\varepsilon)_y V_\varphi & = 2 \int (Q_b+\varepsilon)_y (D^1 \varepsilon + \varepsilon \varphi - 3 Q_b^2 \varepsilon - 3 Q_b \varepsilon^2 - \varepsilon^3)\\ & = 2 \int \varepsilon \left(D^1 (Q_b)_y + (Q_b)_y \varphi - 3 Q_b^2 (Q_b)_y \right) - \int \varepsilon^2 \varphi_y . \end{align*} As before, using \eqref{calculs.L}, since $\mathcal LQ'=0$, \begin{align*} D^1 (Q_b)_y + (Q_b)_y \varphi - 3 Q^2 (Q_b)_y &= \mathcal L Q' + b \mathcal L (P_b)_y + (Q+bP_b)_y (\varphi-1)\\ & = b (\mathcal LP_b)_y + 6bQQ'P_b + (Q+bP_b)_y (\varphi-1)\\ &= b \Lambda Q + b \Psi_1 + 6bQQ'P_b + (Q+bP_b)_y (\varphi-1), \end{align*} and \begin{align*} - 3 (Q_b^2-Q^2) (Q_b)_y = - 6 bQ Q' P_b - b^2 (3Q' P_b^2+6QP_b(P_b)_y) - 3b^3 P_b^2 (P_b)_y. \end{align*} Thus, \begin{align*} D^1 (Q_b)_y + (Q_b)_y \varphi - 3 Q_b^2 (Q_b)_y &= b \Lambda Q + b \Psi_1 + (Q+bP_b)_y (\varphi-1)\\ &\quad- b^2 (3Q' P_b^2+6QP_b(P_b)_y) - 3b^3 P_b^2 (P_b)_y. \end{align*} We claim \begin{align} &\|b \Psi_1\|_{L^2} \lesssim |s|^{-\frac 32},\label{cl.10}\\ &\|Q'P_b^2\|_{L^2} + \|P_b^2 (P_b)_y\|_{L^2}+ \|QP_b(P_b)_y\|_{L^2} \lesssim 1, \label{cl.11}\\ & \int \varepsilon \Lambda Q = \int \varepsilon y Q' + \mathcal O\big((C^\star)^2|s|^{-2+\theta}\big),\label{cl.12}\\ & \|Q' (\varphi-1)\|_{L^2} + |b| \|(P_b)_y (\varphi-1)\|_{L^2} \lesssim |s|^{-2 \theta}. 
\label{cl.13} \end{align} Indeed, \eqref{cl.10} is a consequence of \eqref{lprofile.205} and \eqref{lprofile.206}, \eqref{cl.11} is a direct consequence of the definition of $P_b$ and \eqref{cl.12} follows from $\Lambda Q = \frac 12 Q+ yQ'$ and \eqref{BS.4}. To prove \eqref{cl.13}, first we apply \eqref{phi_B}, so that \[ \|Q' (\varphi-1)\|_{L^2}^2\lesssim \int \frac {|\varphi-1|^2}{1+y^6} dy\lesssim |s|^{- 4 \theta}\int_{|y| < \frac B2 |s|^\theta} \frac {dy}{1+y^4} + \int_{|y| > \frac B2 |s|^\theta} \frac {dy}{1+y^6}\lesssim |s|^{-4 \theta}. \] Second, since $P\in \mathcal Z$, \begin{align*} \||b|(P_b)_y (\varphi-1)\|_{L^2}^2&\lesssim |b|^2\int \frac {|\varphi-1|^2}{1+y^4} dy+ |b|^2\int |\chi_b'|^2 \\ &\lesssim |s|^{-2- 4 \theta}\int_{|y| < \frac B2 |s|^\theta} \frac {dy}{1+y^2} + |s|^{-2} \int_{|y| > \frac B2 |s|^\theta} \frac {dy}{1+y^4}+ |s|^{-3} \lesssim |s|^{-3}. \end{align*} Thus, using also \eqref{BS.2}, we obtain, for $|S_0|$ large (possibly depending on $C^\star$) and using $\theta>\frac 35$, \begin{align} \label{eq:f3} f_3 = 2 b\left(\frac{x_s}{\lambda}-1\right) \int \varepsilon y Q' +o(1) \int \varepsilon^2 \varphi_y +\mathcal O(|s|^{-3+ \theta}). \end{align} \noindent \emph{Estimate for $f_4$.} Recall that \begin{align*} f_4 & = 2 \int \Psi_b V_\varphi =2 \int \Psi_b \left(D^1 \varepsilon + \varepsilon \varphi - \left((Q_b+\varepsilon)^3 - Q_b^3\right)\right). \end{align*} First, using \eqref{lprofile.3}, \begin{align*} \left| \int \Psi_b D^1 \varepsilon \right| & \lesssim \left| \int (D^{\frac 12}\Psi_b) (D^{\frac 12} \varepsilon) \right|\lesssim \|D^{\frac 12}\Psi_b\|_{L^2} \|D^{\frac 12}\varepsilon\|_{L^2}\lesssim |s|^{-2}|\ln |s||^{\frac12} \mathcal N(\varepsilon).
\end{align*} Second, by \eqref{lprofile.2}, we have \begin{equation} \label{PsibVarphi} \left(\int \Psi_b^2 \varphi \right)^{\frac 12}\lesssim |b|^2 + |b| \left( \int P^2 (\chi_b')^2 \varphi \right)^{\frac 12} + |b|^2 \left( \int P_b^2\varphi \right)^{\frac 12} \lesssim |b|^{2-\frac \theta 2}. \end{equation} Thus, \begin{align*} \left| \int \Psi_b \varepsilon \varphi \right| & \lesssim \left(\int \Psi_b^2 \varphi\right)^{\frac 12}\left(\int \varepsilon^2 \varphi\right)^{\frac 12} \lesssim |s|^{-2+\frac \theta 2}\mathcal N(\varepsilon)\lesssim C^\star |s|^{-3+\theta}. \end{align*} Using \eqref{gnp} (with $p=6$), \eqref{lprofile.2} and \eqref{BS.1}, we also have \begin{displaymath} \left| \int \Psi_b \varepsilon^3 \right| \le \|\Psi_b\|_{L^2}\|\varepsilon\|_{L^6}^3 \lesssim |s|^{-\frac32}\|\varepsilon\|_{L^2} \mathcal{N}(\varepsilon)^2 \lesssim (C^{\star})^2|s|^{-4+\theta} . \end{displaymath} Estimating the other terms $\int \Psi_b Q_b^2 \varepsilon$ and $\int \Psi_b Q_b \varepsilon^2$ similarly, we obtain \begin{equation} \label{eq:f4} |f_4|\lesssim |s|^{-2+\frac \theta 2}\mathcal N(\varepsilon)\lesssim C^\star |s|^{-3+\theta}. \end{equation} \noindent \emph{Estimate for $f_5$.} \begin{align*} f_5 & = -2 \int (Q_b)_s \left( (Q_b+\varepsilon)^3 - Q_b^3 - 3 Q_b^2 \varepsilon\right) + \int \varepsilon^2 \partial_s \varphi = f_{5,1}+f_{5,2}. \end{align*} First, by \eqref{def.DQb} and \eqref{BS.3}, \[ |(Q_b)_s| = |b_s| \left| \frac{\partial Q_b}{\partial b}\right| \lesssim (|s|^{-2} + |s|^{-1}\mathcal N(\varepsilon)) \lesssim C^\star |s|^{-2 +\frac \theta 2}. \] Thus, \begin{align*} |f_{5,1}|&\lesssim C^\star |s|^{-2 +\frac \theta 2} \int \left(|\varepsilon|^2 (Q+|b|) + |\varepsilon|^3\right) \lesssim C^\star |s|^{-2 +\frac \theta 2} \left( \mathcal N(\varepsilon)^2 + |s|^{-\frac 32}\right)\\ &\lesssim (C^\star)^3 |s|^{-4 +\frac 32\theta }\lesssim |s|^{-3+\theta}.
\end{align*} Second, we see from the definition of $\varphi$ in \eqref{def.varphi} that \begin{align} \label{varphi_s} \partial_s \varphi & = \theta s^{-1} |s|^\theta \frac {\phi'\left(\frac y B + |s|^\theta\right)}{\phi(|s|^\theta)} - \theta s^{-1} |s|^\theta \frac {\phi\left(\frac y B + |s|^\theta\right)\phi'(|s|^\theta)}{\phi^2(|s|^\theta)}. \end{align} We also have that \[ |s|^{-1+\theta} \int \varepsilon^2 \frac {\phi'\left(\frac y B + |s|^\theta\right)}{\phi(|s|^\theta)} \lesssim |s|^{-1+\theta} \int \varepsilon^2 \varphi_y, \] and \[ |s|^{-1+\theta} \int \varepsilon^2 \frac {\phi\left(\frac y B + |s|^\theta\right)\phi'(|s|^\theta)}{\phi^2(|s|^\theta)} \lesssim |s|^{-1-\theta} \int \varepsilon^2 \varphi \lesssim |s|^{-1-\theta} (\mathcal N(\varepsilon))^2.\] Thus, \begin{equation} \label{eq:f5} f_5 = o(1) \int \varepsilon^2 \varphi_y + \mathcal O(|s|^{-3+\theta}) . \end{equation} Therefore, we conclude the proof of \eqref{eq:F} by gathering \eqref{eq:f1}, \eqref{eq:f2}, \eqref{eq:f3}, \eqref{eq:f4} and \eqref{eq:f5}. \end{proof} \subsection{Virial-type estimate} Now, in order to extend the technique developed in \cite{RaSz} to the cubic Benjamin-Ono equation, we prove the following suitably localized Virial-type identity.
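To orient the reader before the statement, we recall the shape of the functional $G$ and its weight $\psi$ from \eqref{defG}, as it can be read off from the identity $\frac{dG}{ds}=2\int \varepsilon_s \varepsilon \psi + \int \varepsilon^2 \psi_s$ and from \eqref{e.001} used in the proof below:
\[
G(s) = \int \varepsilon^2(s,y)\, \psi(s,y)\, dy, \qquad \psi(s,y) = \pi |s|^{\frac 12} \Big(\phi\big(y|s|^{-\frac 12}\big)-\frac 12\Big)\, \chi^2\big(y|s|^{-\frac 35}\big) .
\]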
\begin{lemma}\label{le:GG} For $|S_0|$ large enough possibly depending on $C^{\star}$, for all $s\in \mathcal I^\star$, \begin{equation}\label{bg:G} |G|\lesssim |s|^{-\frac 12} \end{equation} and \begin{equation}\label{eq:G}\begin{aligned} \frac {d G}{ds} &=-2 \int \big|D^{\frac 12} (\varepsilon \rho)\big|^{2} -\int (\varepsilon \rho)^2 + 6\int Q^2 (\varepsilon \rho)^2 -6 \int Q\Lambda Q \varepsilon^2 \\ &\quad +2 \left(\frac{\lambda_s}{\lambda} + b\right) \int \varepsilon y \Lambda Q+2 \left(\frac{x_s}{\lambda} -1\right) \int \varepsilon y Q' \\ & \quad+ \mathcal O(|s|^{-2+\theta}) + \mathcal O(|s|^{-\frac 1{20}} \|D^{\frac 12}(\varepsilon \rho)\|_{L^2}^2)\, , \end{aligned} \end{equation} where \begin{equation}\label{def.rho} \rho(s,y) = \frac {\chi(y|s|^{-\frac 35})}{\left(1+y^2 |s|^{-1}\right)^{\frac 12}}. \end{equation} \end{lemma} \begin{proof} The bound \eqref{bg:G} follows from $\|\varepsilon\|_{L^2}^2\lesssim |s|^{-1}$ (see \eqref{BS.1}) and the bound $|\psi|\lesssim |s|^{\frac 12}$, which comes directly from the definition of $\psi$ in \eqref{defG}. Now, we prove \eqref{eq:G}. Using \eqref{modulation.4}, we compute \[ \frac { dG}{ds} = 2 \int \varepsilon_s \varepsilon \psi+ \int \varepsilon^2 \psi_s= g_1 + g_2 +g_3+g_4+g_5+g_6+g_7, \] where \begin{align*} &g_1 = 2 \int V_y \varepsilon \psi\,,& g_2 &= 2\frac{\lambda_s}{\lambda}\int (\Lambda \varepsilon) \varepsilon \psi\,, \\ &g_3 = 2\left(\frac{\lambda_s}{\lambda}+b\right) \int \Lambda Q_b \varepsilon \psi\,, & g_4 &= 2 \left(\frac {x_s}\lambda -1\right) \int (Q_b+\varepsilon)_y \varepsilon\psi\,,\\ &g_5 = -2 (b_s+b^2) \int \frac{\partial Q_b}{\partial b} \varepsilon \psi\, , & g_6& = 2\int \Psi_b \varepsilon\psi ,\\& g_7= \int \varepsilon^2 \psi_s . \end{align*} We claim the following technical facts on $\psi$ and $\rho$. \begin{lemma}\label{le:psi} The following hold. \noindent{\rm (i)} Pointwise estimates.
\begin{equation}\label{bd.psi} |\psi(y)-y|\lesssim |s|^{-\frac 12} |y|^2,\quad |\psi|\lesssim |s|^{\frac 12}, \end{equation} \begin{equation}\label{e.g10} |y \psi_y|\lesssim |s|^{\frac 12} \mathbf{1}_{y>-2 |s|^{\frac 35}}, \end{equation} \begin{equation}\label{e.g14} \psi_y = \frac {\chi^2(y|s|^{-\frac 35})}{1+y^2 |s|^{-1}} + \mathbf{1}_{-2|s|^{\frac 35} <y<-|s|^{\frac 35}}\mathcal O( |s|^{-\frac 1{10}}), \end{equation} \begin{equation}\label{e.g15} |\psi_y-1|+|\rho-1|\lesssim \frac {y^2 |s|^{-1}}{1+y^2 |s|^{-1}} ,\quad |\psi_y|\lesssim 1 ,\quad |\psi_{yy}|\lesssim |s|^{-\frac 12}, \end{equation} \begin{equation}\label{e.g17} |\psi_{yyy}|\lesssim \frac {|s|^{-1}}{(1+y^2 |s|^{-1})^2}+ |s|^{-\frac {11}{10}} \mathbf{1}_{-2|s|^{ \frac 35}<y<-|s|^{\frac 35}} \lesssim |s|^{-1} . \end{equation} {\rm (ii)} Norm estimates. \begin{equation}\label{e.g16} \|\rho\|_{L^2}+\|\psi_y\|_{L^2} \lesssim |s|^{\frac 14},\quad \|\rho_y\|_{L^2}+\|\psi_{yy}\|_{L^2} \lesssim |s|^{-\frac 14}. \end{equation} \end{lemma} \begin{proof} By Taylor expansion, since $\phi(0)=\frac 12$, $\phi'(0)=\frac 1{\pi}$ and $\sup_{\mathbb{R}} |\phi''|<+\infty$, one finds for $|y|<|s|^{\frac 12}$, \[ \Big|\phi( |s|^{-\frac 12} y) - \frac 12 - \frac 1{\pi} |s|^{-\frac 12} y\Big| \lesssim |s|^{-1}|y|^2. \] The definition of $\psi$ in \eqref{defG} then implies \eqref{bd.psi} for $|y|<|s|^{\frac 12}$. For $|y|>|s|^{\frac 12}$, \eqref{bd.psi} is a consequence of $|\psi|\lesssim |s|^{\frac 12}$. Next, note that by the definition of $\psi$ in \eqref{defG}, \begin{equation}\label{e.001}\begin{aligned} \psi_y &= \pi \phi'(y|s|^{-\frac 12}) \chi^2(y|s|^{-\frac 35}) + |s|^{-\frac 1{10}} \pi \Big(\phi(y|s|^{-\frac 12})-\frac 12\Big) (\chi^2)'(y |s|^{-\frac 35})\\ &= \frac {\chi^2(y|s|^{-\frac 35})}{1+y^2 |s|^{-1}} + |s|^{-\frac 1{10}} \pi \Big(\phi(y|s|^{-\frac 12})-\frac 12\Big) (\chi^2)'(y |s|^{-\frac 35}), \end{aligned}\end{equation} which implies directly \eqref{e.g10} and \eqref{e.g14}. 
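For completeness, the error term appearing in \eqref{e.g14} is controlled as follows: since $\phi$ is bounded and (as the indicator sets above suggest) $(\chi^2)'$ is bounded and supported where $-2|s|^{\frac 35}<y<-|s|^{\frac 35}$, the second term in \eqref{e.001} satisfies
\[
\Big| |s|^{-\frac 1{10}}\, \pi \Big(\phi\big(y|s|^{-\frac 12}\big)-\frac 12\Big) (\chi^2)'\big(y |s|^{-\frac 35}\big) \Big| \lesssim |s|^{-\frac 1{10}}\, \mathbf{1}_{-2|s|^{\frac 35}<y<-|s|^{\frac 35}} .
\]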
Moreover, \[ |\psi_y-1|\lesssim \frac {y^2 |s|^{-1}}{1+y^2 |s|^{-1}} \mathbf{1}_{y>-|s|^{\frac 35}} + |s|^{-\frac 1{10}} \mathbf{1}_{y<-|s|^{\frac 35}} . \] Differentiating \eqref{e.001}, we have \begin{align*} \psi_{yy} &= |s|^{-\frac 12} \pi \phi''\left(y|s|^{-\frac 12}\right) \chi^2(y|s|^{-\frac 35}) + 2 |s|^{-\frac 35} \pi \phi'\left(y|s|^{-\frac 12}\right) (\chi^2)'(y|s|^{-\frac 35})\\ & + |s|^{-\frac 7{10}} \pi \left(\phi\left(y|s|^{-\frac 12}\right) -\frac 12\right)(\chi^2)''(y|s|^{-\frac 35}),\end{align*} and thus $|\psi_{yy}|\lesssim |s|^{-\frac 12}$. This proves \eqref{e.g15}. Now, we estimate $\psi_{yyy}$. By direct computations, we have \begin{align*} \psi_{yyy} & = |s|^{-1} \pi \phi'''\left(y|s|^{-\frac 12}\right) \chi^2(y|s|^{-\frac 35}) + 3 |s|^{-\frac 12 - \frac 35} \pi \phi''\left(y|s|^{-\frac 12}\right) (\chi^2)'(y|s|^{-\frac 35})\\ & + 3 |s|^{-\frac 65} \pi \phi'\left(y|s|^{-\frac 12}\right) (\chi^2)''(y|s|^{-\frac 35}) + |s|^{-\frac 95 +\frac 12} \pi\left( \phi\left(y|s|^{-\frac 12}\right)-\frac 12\right) (\chi^2)'''(y|s|^{-\frac 35}). \end{align*} Thus, \[ |\psi_{yyy}|\lesssim \frac {|s|^{-1}}{(1+y^2 |s|^{-1})^2}+ |s|^{-\frac {11}{10}} \mathbf{1}_{-2|s|^{\frac 35}<y<-|s|^{\frac 35}} \lesssim |s|^{-1}. \] Finally, we prove \eqref{e.g16}. First, \[ \int \rho^2 \lesssim \int \frac{dy}{1+y^2 |s|^{-1}} \lesssim |s|^{\frac 12}. \] Note that by direct computation \[ \rho_y = |s|^{-\frac 35} \frac{\chi'(y|s|^{-\frac 35})}{(1+y^2 |s|^{-1})^{\frac 12}} - |s|^{-\frac 12} \frac {y |s|^{-\frac 12} \chi(y|s|^{-\frac 35})}{(1+y^2 |s|^{-1})^{\frac 32}} \] and thus, \[ \int (\rho_y)^2 \lesssim |s|^{-\frac 65} \int \frac{dy}{1+y^2 |s|^{-1}} + |s|^{-1} \int \frac{y^2 |s|^{-1} dy}{(1+y^2 |s|^{-1})^{3}} \lesssim |s|^{-\frac 12}.
\] By \eqref{e.001}, we have $\int (\psi_y-\rho)^2 \lesssim |s|^{-\frac 15} \int_{-2|s|^{\frac 35} <y<-|s|^{\frac 35}} dy\lesssim |s|^{\frac 25}$ and \[ |(\psi_y-\rho)_y|\lesssim |s|^{-\frac 35 }\mathbf{1}_{-2|s|^{\frac 35}<y<-|s|^{\frac 35}}\quad \hbox{and so}\quad \int |(\psi_y-\rho)_y|^2 \lesssim |s|^{-\frac 35 }, \] which finishes the proof of \eqref{e.g16}. \end{proof} \noindent \emph{Estimate for $g_1$.} We claim \begin{equation}\label{e.g1} \begin{split} g_1&= -2 \int \big|D^{\frac 12} (\varepsilon \rho)\big|^{2} -\int (\varepsilon \rho)^2+ {6\int Q^2 (\varepsilon \rho)^2 -6 \int Q\Lambda Q \varepsilon^2}\\ & \quad+\mathcal O(|s|^{-2+\theta}) + \mathcal O( |s|^{-\frac 1{20}} \|D^{\frac 12}(\varepsilon \rho)\|_{L^2}^2). \end{split} \end{equation} By using the definition of $V$ in \eqref{defV} and integrations by parts, we decompose $g_1$ as \[ g_1=g_{1,1}+g_{1,2}+g_{1,3} \, , \] where \begin{align*} g_{1,1} & = 2 \int (D^1 \varepsilon)_y \varepsilon \psi ,\\ g_{1,2} &=- \int \varepsilon^2 \psi_y ,\\ g_{1,3}&= -\frac 12 \int \big((Q_b+\varepsilon)^4 - Q_b^4 - 4Q_b^3 \varepsilon\big) \psi_y +{ 2} \int \big((Q_b+\varepsilon)^3 - Q_b^3 \big) \varepsilon \psi_y \\ & \quad -2 \int \big((Q_b+\varepsilon)^3-Q_b^3 - 3 Q_b^2 \varepsilon\big) (Q_b)_y \psi . \end{align*} First, concerning $g_{1,1}$, we claim the following two estimates \begin{equation}\label{e:g02} \left| \int (D^1 \varepsilon) \varepsilon_y \psi \right|\lesssim |s|^{-2+\theta}, \end{equation} \begin{equation}\label{e:g03} \left| \int (D^1 \varepsilon) \varepsilon \psi_y -\int \big|D^{\frac 12} (\varepsilon\rho)\big|^2 \right|\lesssim |s|^{-2+\theta} + |s|^{-\frac 1{20}} \|D^{\frac 12}(\varepsilon \rho)\|_{L^2}^2. 
\end{equation} With these estimates in hand, we obtain after integration by parts that \begin{equation} \label{e.g11} \begin{split} g_{1,1}&=-2\int (D^1\varepsilon) \varepsilon_y \psi-2\int (D^1\varepsilon) \varepsilon \psi_y \\ & =-2\int \big|D^{\frac 12} (\varepsilon\rho)\big|^2+\mathcal{O}(|s|^{-2+\theta}) + \mathcal O( |s|^{-\frac 1{20}} \|D^{\frac 12}(\varepsilon \rho)\|_{L^2}^2)\, . \end{split} \end{equation} \noindent \textit{Proof of \eqref{e:g02}}. Using \eqref{m.1} with $a=\psi$, since $\|\psi_{yy}\|_{L^\infty} \lesssim |s|^{-\frac 12}$ (see \eqref{e.g15}), we obtain from \eqref{BS.1}, \[ \left| \int (D^1 \varepsilon) \varepsilon_y \psi \right|\lesssim \| \varepsilon\|_{L^2}^2 \|\psi_{yy}\|_{L^\infty} \lesssim |s|^{-\frac 32} \lesssim |s|^{-2+\theta}. \] \noindent \textit{Proof of \eqref{e:g03}}. Using \eqref{m.2} with $a=\psi$, since $\|a''\|_{L^2} \lesssim |s|^{-\frac 14}$ and $\|a'\|_{L^2} \lesssim |s|^{\frac 14}$, we have \begin{align*} \left| \int (D^1 \varepsilon) \varepsilon \psi_y - \int |D^{\frac 12} \varepsilon|^2\psi_y \right| &\lesssim \|\varepsilon\|_{\dot H^{\frac 12}}^{\frac 32} \|\varepsilon\|_{L^2}^{\frac 12} \| \psi_{yy}\|_{L^2}^{\frac 34} \|\psi_y\|_{L^2}^{\frac 14}\\ &\lesssim (C^\star)^{\frac 32} |s|^{-\frac {15}8 + \frac 34 \theta}\lesssim |s|^{-2+\theta}, \end{align*} since $\theta>\frac 12$. By \eqref{e.001} and the definition of $\rho= \frac {\chi(y|s|^{-\frac 35})}{(1+y^2 |s|^{-1})^{\frac12}}$, we see that $|\psi_y - \rho^2 |\lesssim |s|^{-\frac 1{10}} \mathbf{1}_{-2|s|^{\frac 35}<y<-|s|^{\frac 35}} $, and thus \begin{align*} \left| \int |D^{\frac 12} \varepsilon|^2\psi_y - \int |D^{\frac 12} \varepsilon|^2\rho^2\right| &\lesssim |s|^{-\frac 1{10}} \|D^{\frac 12} \varepsilon\|_{L^2}^2\lesssim |s|^{-\frac 1{10}} (\mathcal N(\varepsilon))^2 \lesssim |s|^{-2+\theta}.
\end{align*} Now, we claim \begin{equation} \label{e:g003} \left| \int | D^{\frac 12} \varepsilon|^2\rho^2 -\int |D^{\frac 12} (\varepsilon\rho)|^2 \right|\lesssim |s|^{-2+\theta}+ |s|^{-\frac 1{20}} \|D^{\frac 12}(\varepsilon \rho)\|_{L^2}^2, \end{equation} which is sufficient to finish the proof of \eqref{e:g03}. Indeed, using \eqref{COMM1} and \eqref{gnp}, \begin{align*} \left|\int |D^{\frac 12} \varepsilon|^2\rho^2 -\int |D^{\frac 12} (\varepsilon\rho)|^2\right| & = \left|\int \left( (D^{\frac 12} \varepsilon)\rho + D^{\frac 12} (\varepsilon\rho)\right) \left( (D^{\frac 12} \varepsilon)\rho - D^{\frac 12} (\varepsilon\rho)\right)\right| \\ & \lesssim \left(\|(D^{\frac 12} \varepsilon) \rho\|_{L^2} + \|D^{\frac 12}(\varepsilon \rho)\|_{L^2} \right) \|[D^{\frac 12},\rho] \varepsilon \|_{L^2}\\ & \lesssim \left(\|D^{\frac 12} \varepsilon \|_{L^2} + \|D^{\frac 12}(\varepsilon \rho)\|_{L^2} \right) \|\varepsilon\|_{\dot H^{\frac 12}}^{\frac 12} \|\varepsilon\|_{L^2}^{\frac 12} \|\rho_y\|_{L^2}^{\frac 34} \|\rho\|_{L^2}^{\frac 14} \\& \lesssim |s|^{-2+\theta} + (C^{\star})^{\frac12}|s|^{-\frac78+\frac \theta 4} \|D^{\frac 12}(\varepsilon \rho)\|_{L^2} \\& \lesssim |s|^{-2+\theta} + |s|^{-\frac 1{20}} \|D^{\frac 12}(\varepsilon \rho)\|_{L^2}^2 . \end{align*} Second, we see from \eqref{e.g14}, \begin{align} - g_{1,2} &= \int \varepsilon^2 \psi_y = \int \varepsilon^2 \frac {\chi^2(y|s|^{-\frac 35})}{1+y^2 |s|^{-1}} +\mathcal O( |s|^{-\frac 1{10}})\int_{-2|s|^{\frac 35} <y<-|s|^{\frac 35}} \varepsilon^2 \nonumber\\ &= \int \varepsilon^2 \frac {\chi^2(y|s|^{-\frac 35})}{1+y^2 |s|^{-1}} +\mathcal O((C^\star)^2 |s|^{-\frac {21}{10}+\theta})\nonumber\\&= \int {\varepsilon^2} \rho^2 +\mathcal O( |s|^{- 2+\theta}), \label{e.g12} \end{align} since (by $\theta>\frac 35$), $\int_{-2|s|^{\frac 35} <y<-|s|^{\frac 35}} \varepsilon^2\lesssim \mathcal N(\varepsilon)^2\lesssim (C^\star)^2 |s|^{-2+\theta}$. 
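Note that the last equality in \eqref{e.g12} is exact for the main term: squaring the definition of $\rho$ in \eqref{def.rho} gives
\[
\rho^2(s,y) = \frac{\chi^2\big(y|s|^{-\frac 35}\big)}{1+y^2|s|^{-1}}, \qquad \text{so that} \qquad \int \varepsilon^2\, \frac {\chi^2(y|s|^{-\frac 35})}{1+y^2 |s|^{-1}} = \int \varepsilon^2 \rho^2 .
\]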
Next, we claim \begin{equation} g_{1,3} = {6} \int Q^2 (\varepsilon\rho)^2 -6 \int Q \Lambda Q \varepsilon^2 + \mathcal O(|s|^{-2+\theta})\label{e.g31}. \end{equation} We start with rough bounds, using \eqref{bd.psi} (in particular $|\psi Q'|\lesssim 1$), and \eqref{e.g15} , and then \eqref{gnp}, \begin{align*} &\left| g_{1,3} - {3}\int \psi_y Q^2 \varepsilon^2 + 6 \int \psi QQ' \varepsilon^2\right|\\ & \lesssim \int |\psi_y| \left( |b| |\varepsilon|^2 +|\varepsilon|^3 + |\varepsilon|^4\right) +\int |\psi| \left( |b||\varepsilon|^2 + (|Q'|+|b|) |\varepsilon|^3 \right)\\ & \lesssim |s|^{-\frac 12} \int \varepsilon^2 + \int |\varepsilon|^3 + \int |\varepsilon|^4 \lesssim |s|^{-\frac 32}\lesssim |s|^{-2+\theta}. \end{align*} Now, using \eqref{e.g15}, \[ \left|\int \psi_y Q^2 \varepsilon^2 - \int Q^2 \varepsilon^2 \right| \lesssim |s|^{-1} \int \varepsilon^2 \lesssim |s|^{-2} \lesssim |s|^{-2 +\theta}, \] and, using \eqref{bd.psi}, \[ \left|\int \psi QQ' \varepsilon^2-\int y QQ' \varepsilon^2 \right| \lesssim |s|^{-\frac 12} \int \varepsilon^2 \lesssim |s|^{-\frac 32} \lesssim |s|^{-2 +\theta}. \] Thus, \begin{align*} g_{1,3}& = {3}\int Q^2 \varepsilon^2 -6 \int y QQ' \varepsilon^2+\mathcal O(|s|^{-2 +\theta}) \\ &={6} \int Q^2 \varepsilon^2-6 \int Q\Lambda Q \varepsilon^2 +\mathcal O(|s|^{-2 +\theta}). \end{align*} Hence, to finish the proof of \eqref{e.g31}, we only have to prove \begin{equation}\label{bof2} \int Q^2 \varepsilon^2 |1-\rho^2| \lesssim |s|^{-2 +\theta}, \end{equation} but similarly as before, this follows from $|\rho^2-1|\lesssim y^2 |s|^{-1}$. Therefore, we conclude the proof of \eqref{e.g1} gathering \eqref{e.g11}, \eqref{e.g12} and \eqref{e.g31}. \noindent \emph{Estimate for $g_2$.} We claim \begin{equation}\label{e.g2} g_2=\mathcal O(|s|^{-2+\theta}). 
\end{equation} Indeed, integrating by parts, \[ g_2 = - \frac {\lambda_s}{\lambda} \int \varepsilon^2 y \psi_y \] Note that since $\theta>\frac 35$, $\varphi \gtrsim 1$ for $y>-2|s|^{\frac 35}$, and thus $\int_{y>-2 |s|^{\frac 35}} |\varepsilon|^2 \lesssim \mathcal N(\varepsilon)^2 \lesssim (C^\star)^2 |s|^{-2+\theta}$. Combining this, \eqref{e.g10} and \eqref{BS.2}, we find $ |g_2| \lesssim (C^\star)^3 |s|^{-\frac 52+\frac 32 \theta} $, which implies \eqref{e.g2} since $\theta<1$. \noindent \emph{Estimate for $g_3$.} We claim \begin{equation}\label{e.g3} g_3= 2 \left(\frac{\lambda_s}{\lambda} + b\right) \int \varepsilon y \Lambda Q+\mathcal O( |s|^{-2+\theta}). \end{equation} First, by \eqref{bd.psi} \begin{align*} \left|\int \Lambda Q \varepsilon (\psi-y)\right|& \lesssim \int_{|y|>|s|^{\frac 12}} |\varepsilon| (|\psi|+|y|)\frac {dy}{y^2} + |s|^{-\frac 12} \int_{|y|<|s|^{\frac 12}} |\varepsilon| \\ & \lesssim \|\varepsilon\|_{L^2} \left(\int_{|y|>|s|^{\frac 12}} dy/y^2\right)^{\frac 12}+ |s|^{-\frac 14} \|\varepsilon\|_{L^2} \lesssim |s|^{-\frac 34}. \end{align*} Second, since $\psi\equiv 0$ for $y<-2|s|^{\frac 35}$, $|\psi|\lesssim |s|^{\frac 12}$, and $|\Lambda P_b |\lesssim (1+y_+)^{-1}$ (here $y_+=\max(0,y)$), \begin{align*} \left|b \int \Lambda P_b \varepsilon \psi \right| \lesssim |s|^{-\frac 12} \int_{y>-2|s|^{\frac 35 }} |\varepsilon| (1+y_+)^{-1} \lesssim |s|^{- \frac 15} \|\varepsilon\|_{L^2} \lesssim |s|^{-\frac 7{10}}. \end{align*} Thus, \eqref{e.g3} follows from \eqref{BS.2} and $\theta>\frac 35$. \noindent \emph{Estimate for $g_4$.} The proof of the following estimate is similar and easier (due to the stronger decay of $Q'$ with respect to $\Lambda Q$) \begin{equation}\label{e.g4} g_4=2 \left(\frac{x_s}{\lambda} -1\right) \int \varepsilon y Q' +\mathcal O( |s|^{-2+\theta}). 
\end{equation} Note that to deal with the term $2\left(\frac{x_s}{\lambda} -1\right) \int \varepsilon_y\varepsilon \psi$, we integrate by parts and use \eqref{e.g15} so that \[ \left|2\left(\frac{x_s}{\lambda} -1\right) \int \varepsilon_y\varepsilon \psi \right|= \left|\left(\frac{x_s}{\lambda} -1\right) \int \varepsilon^2 \psi_y \right| \lesssim \mathcal{N}(\varepsilon) \int \varepsilon^2 \lesssim C^{\star} |s|^{-2+\frac{\theta}2} \, . \] \noindent \emph{Estimate for $g_5$.} We claim \begin{equation}\label{e.g5} g_5=\mathcal O(|s|^{-2+\theta}). \end{equation} From \eqref{def.DQb}, one has $\Big|\frac{\partial Q_b}{\partial b}\Big|\lesssim (1+y_+)^{-1} $ and thus, using \eqref{BS.3} and \eqref{bd.psi}, \begin{align*} |g_5|&\lesssim C^\star |s|^{-2+\frac \theta 2} \|\psi\|_{L^\infty} \int_{y>-2 |s|^{\frac 35}} |\varepsilon|(1+y_+)^{-1} \\ &\lesssim C^\star |s|^{-\frac {17}{10} + \frac \theta 2} |s|^{\frac12}\|\varepsilon\|_{L^2} \lesssim C^\star |s|^{-\frac {17}{10} + \frac \theta 2} \lesssim |s|^{-2+\theta}, \end{align*} since $\theta>\frac35$. \noindent \emph{Estimate for $g_6$.} We claim \begin{equation}\label{e.g6} g_6=\mathcal O(|s|^{-2+\theta}). \end{equation} Indeed, by $\|\psi\|_{L^\infty} \lesssim |s|^{\frac 12}$, the Cauchy-Schwarz inequality and \eqref{lprofile.2}, we have (using $\theta>\frac 12$) \[ |g_6|\lesssim |s|^{-1} \|\varepsilon\|_{L^2} \lesssim |s|^{-\frac 32} \lesssim |s|^{-2+\theta}. \] \noindent \emph{Estimate for $g_7$.} We claim \begin{equation}\label{e.g7} g_7=\mathcal O(|s|^{-2+\theta}). \end{equation} By the definition of $\psi$ in \eqref{defG}, \[ \partial_s \psi = - \frac 12 |s|^{-1} \psi + \frac{\pi}2 |s|^{-1} y \phi'(y|s|^{-\frac 12}) \chi^2(y|s|^{-\frac 35}) + \frac 35 |s|^{-\frac{11}{10}} \pi \Big(\phi(y|s|^{-\frac 12})-\frac 12\Big) y (\chi^2)'(y |s|^{-\frac 35}).
\] Thus, $|\partial_s \psi|\lesssim |s|^{-\frac 12} \mathbf{1}_{y>-2 |s|^{\frac 35}}$ and \[ |g_7|\lesssim |s|^{-\frac 12} \mathcal N(\varepsilon)^2 \lesssim (C^\star)^2 |s|^{-\frac 52 + \theta} \lesssim |s|^{-2+\theta} . \] Therefore, combining \eqref{e.g1}, \eqref{e.g2}, \eqref{e.g3}, \eqref{e.g4}, \eqref{e.g5}, \eqref{e.g6} and \eqref{e.g7}, we finish the proof of \eqref{eq:G}. \end{proof} \subsection{Estimates on the functional $K$} Recall that $K=\int \varepsilon \mathcal{L}_{\varphi} P_b$ is defined in \eqref{bootstrap.4}. \begin{lemma}\label{le:KK} For $|S_0|$ large enough possibly depending on $C^{\star}$, for all $s\in \mathcal I^\star$, \begin{equation} \label{bg:K} |K| \lesssim \mathcal{N}(\varepsilon)|s|^{\frac{\theta}2} \lesssim C^{\star}|s|^{-1+\theta} \, , \end{equation} and \begin{equation}\label{eq:K} \Big|\frac {d K}{ds}+\big(\frac{\lambda_s}{\lambda}+b\big)p_0 \Big| \lesssim \left( \int \varepsilon^2 \varphi_y \right)^{\frac12} + (C^\star)^2 |s|^{ -2+\frac{3\theta}2} \, . \end{equation} \end{lemma} \begin{proof} We begin with the proof of \eqref{bg:K}. By using the definition of $\mathcal{L}_{\varphi}$ in \eqref{defK}, we decompose $K$ as \begin{displaymath} K=\int \varepsilon D^1P_b +\int \varepsilon P_b \varphi-3\int \varepsilon Q^2P_b \, . \end{displaymath} First, we deduce from \eqref{Pb}, \eqref{def.varphi} and \eqref{defN} that \begin{displaymath} \begin{split} \Big| \int \varepsilon P_b \varphi \Big| &\lesssim \int_{y>0} \frac{|\varepsilon|}{1+y}+\int_{-2B|s|^{\theta}<y<0}|\varepsilon| \varphi+\int_{-2|s|<y<-2B|s|^{\theta}}|\varepsilon| \varphi \\ & \lesssim \mathcal{N}(\varepsilon)+\mathcal{N}(\varepsilon)|s|^{\frac{\theta}2}+\mathcal{N}(\varepsilon)\left( \int_{-2|s|<y<-2B|s|^{\theta}}\frac1{1+|y|}\right)^{\frac12} \\ & \lesssim \mathcal{N}(\varepsilon)|s|^{\frac{\theta}2} \, .
\end{split} \end{displaymath} Second, we deduce from \eqref{bd.PbRb} and \eqref{defN} that \begin{displaymath} \Big| \int \varepsilon D^1P_b \Big| = \Big| \int D^{\frac12}\varepsilon D^{\frac12}P_b \Big| \lesssim \|D^{\frac12}\varepsilon\|_{L^2} \|D^{\frac12}P_b\|_{L^2} \lesssim \mathcal{N}(\varepsilon) \ln|s| \, . \end{displaymath} Moreover, we get easily that \begin{displaymath} \Big|\int \varepsilon Q^2P_b \Big| \lesssim \mathcal{N}(\varepsilon) \, . \end{displaymath} Those estimates together with the bootstrap hypothesis \eqref{bootstrap.4} conclude the proof of \eqref{bg:K}. Next, we turn to the proof of \eqref{eq:K}. By using the equation of $\varepsilon_s$ in \eqref{modulation.4}, we compute \begin{equation} \label{eq:K.1} \begin{split} \frac{dK}{ds}&=\int \varepsilon_s \mathcal{L}_{\varphi}P_b+\int \varepsilon \varphi_s P_b+b_s \int \varepsilon \mathcal{L}_{\varphi}\left(\frac{\partial P_b}{\partial b}\right)\\ &=k_1+k_2+k_3+k_4+k_5+k_6+k_7 \, , \end{split} \end{equation} where \begin{align*} & k_1 = \int V_y \mathcal{L}_{\varphi}P_b \, , & k_2& = \frac{\lambda_s}{\lambda} \int (\Lambda \varepsilon) \mathcal{L}_{\varphi}P_b \, , \\ &k_3=(\frac {\lambda_s}{\lambda} + b) \int (\Lambda Q_b) \mathcal{L}_{\varphi}P_b \, , & k_4& = (\frac {x_s}\lambda-1) \int (Q_b+\varepsilon)_y \mathcal{L}_{\varphi}P_b\, , \\ &k_5= -(b_s+b^2) \int \frac{\partial Q_b}{\partial b} \mathcal{L}_{\varphi}P_b\, , & k_6& = \int \Psi_b \mathcal{L}_{\varphi}P_b\, , \\ &k_7 = \int \varepsilon \varphi_s P_b+b_s \int \varepsilon \mathcal{L}_{\varphi}\left(\frac{\partial P_b}{\partial b}\right)\, . \end{align*} In the rest of the proof, we estimate $k_1, \cdots, k_7$ separately, taking $S_0$ large enough, possibly depending on $C^{\star}$. 
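For the reader's convenience, we record the form of the equation \eqref{modulation.4} for $\varepsilon_s$ that generates the decomposition above (the same structure produced the terms $g_1,\dots,g_7$ in the proof of Lemma \ref{le:GG}; each of $k_1,\dots,k_6$ comes from pairing the corresponding term with $\mathcal{L}_{\varphi}P_b$, while $k_7$ collects the contributions of $\varphi_s$ and $b_s$):
\[
\varepsilon_s = V_y + \frac{\lambda_s}{\lambda}\, \Lambda \varepsilon + \Big(\frac{\lambda_s}{\lambda}+b\Big) \Lambda Q_b + \Big(\frac{x_s}{\lambda}-1\Big) (Q_b+\varepsilon)_y - (b_s+b^2)\, \frac{\partial Q_b}{\partial b} + \Psi_b .
\]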
\noindent \emph{Estimate for $k_1$.} In order to estimate $k_1$, we rewrite $\mathcal{L}_{\varphi}P_b$ as \begin{equation} \label{LvarphiPb} \begin{split} \mathcal{L}_{\varphi}P_b&=D^1P_b+P_b\varphi-3Q^2P_b \\ &=\mathcal{L}P+D^1(P(\chi_b-1))+P(\chi_b\varphi-1)-3Q^2P(\chi_b-1) \, . \end{split} \end{equation} Moreover, integrating by parts and using the definition of $V$ in \eqref{defV}, \begin{displaymath} \begin{split} k_1&=-\int \mathcal{L}\varepsilon \big(\mathcal{L}_{\varphi}P_b \big)_y+3\int (Q_b^2-Q^2) \varepsilon \big(\mathcal{L}_{\varphi}P_b \big)_y+\int \big(3Q_b\varepsilon^2+\varepsilon^3\big)\big(\mathcal{L}_{\varphi}P_b \big)_y \\ &=k_{1,1}+k_{1,2}+k_{1,3} \, . \end{split} \end{displaymath} First, we deal with $k_{1,1}$. Recalling that $\mathcal{L}(\Lambda Q)=-Q$, the equation in \eqref{nlprofile.1} and \eqref{LvarphiPb}, we have \begin{displaymath} \begin{split} k_{1,1}&=\int \varepsilon Q-\int \mathcal{L}\varepsilon \big(D^1(P(\chi_b-1))\big)_y-\int \mathcal{L}\varepsilon \big(P(\chi_b\varphi-1)\big)_y+3\int \mathcal{L}\varepsilon \big(Q^2P(\chi_b-1)\big)_y\\ &=\int \varepsilon Q+k_{1,1,1}+k_{1,1,2}+k_{1,1,3} \, . \end{split} \end{displaymath} From the definition of $\mathcal{L}$, \eqref{BS.1}, the definition of $\chi_b$ in \eqref{Pb} and $P\in \mathcal Z$, we get that \begin{equation} \label{eq:k111} |k_{1,1,1}| \lesssim \|\varepsilon\|_{L^2}\big(\|(P(\chi_b-1))'''\|_{L^2}+\|(P(\chi_b-1))''\|_{L^2}\big) \lesssim |s|^{-2} \, , \end{equation} and \begin{equation} \label{eq:k113} |k_{1,1,3}| \lesssim \|\varepsilon\|_{L^2}\big(\|(Q^2P(\chi_b-1))'\|_{L^2}+\|(Q^2P(\chi_b-1))''\|_{L^2}\big) \lesssim |s|^{-\frac92} \, . \end{equation} We rewrite $k_{1,1,2}$ as \begin{displaymath} k_{1,1,2}=\int D^1\varepsilon \big(P(\chi_b\varphi-1)\big)_y+\int(1-3Q^2) \varepsilon \big(P(\chi_b\varphi-1)\big)_y \, . 
\end{displaymath} To treat the first term above, we use the decomposition $$\big(P(\chi_b\varphi-1)\big)_y=P'(\chi_b\varphi-1)+P\chi_b'\varphi+P_b\varphi_y \, .$$ On the one hand, observe that \begin{displaymath} \int D^1\varepsilon \big(P'(\chi_b\varphi-1)+P\chi_b'\varphi\big) =-\int \mathcal{H}\varepsilon\Big(P''(\chi_b\varphi-1)+2P'\chi_b'\varphi +P\chi_b''\varphi+P\chi_b'\varphi_y\Big) , \end{displaymath} \[ \|P''(1-\chi_b\varphi)\|_{L^2}\lesssim \left(\int_{y<-\frac{B}2|s|^{\theta}}\frac1{|y|^6} \right)^{\frac12} +|s|^{-2\theta}\|yP''\|_{L^2}\lesssim |s|^{-2\theta} \, , \] thanks to \eqref{phi_B}, using $P'\in \mathcal Y_2$ and the definition of $\varphi$ in \eqref{def.varphi}, \[ \|P'\chi_b'\varphi\|_{L^2}\lesssim |b|\left(\int_{-2|s|<y<-|s|}\frac1{|y|^6} \right)^{\frac12} \lesssim |s|^{-\frac72} \, , \] \[ \|P\chi_b''\varphi\|_{L^2}\lesssim |b|^2\left(\int_{-2|s|<y<-|s|}\frac1{|y|^2} \right)^{\frac12} \lesssim |s|^{-\frac52} \, , \] and \[ \|P\chi_b'\varphi_y\|_{L^2}\lesssim |b|\left(\int_{-2|s|<y<-|s|}\frac1{|y|^4} \right)^{\frac12} \lesssim |s|^{-\frac52} \, . \] Then, we deduce from \eqref{BS.1} that \begin{displaymath} \left|\int D^1\varepsilon \big(P'(\chi_b\varphi-1)+P\chi_b'\varphi\big)\right|\lesssim |s|^{-\frac12-2\theta} \, . \end{displaymath} On the other hand, Lemma \ref{pw.HY} and the translation invariance of $\mathcal{H}$ together with \eqref{def.Dvarphi} imply that \[ |\mathcal{H}\partial_y\big(P_b \varphi_y\big)| \lesssim \varphi_y, \] and thus \begin{displaymath} \left|\int D^1\varepsilon P_b \varphi_y \right|= \left|\int \varepsilon \mathcal{H}\partial_y\big(P_b \varphi_y\big) \right| \lesssim \int |\varepsilon|\varphi_y \lesssim \left(\int \varepsilon^2 \varphi_y \right)^{\frac12} \, . \end{displaymath} Hence, we conclude that \begin{equation} \label{eq:k112.1} \left|\int D^1\varepsilon \big(P(\chi_b\varphi-1)\big)_y\right| \lesssim |s|^{-\frac12-2\theta}+\left(\int \varepsilon^2 \varphi_y \right)^{\frac12} \, . 
\end{equation} We write the second term in the decomposition of $k_{1,1,2}$ as \begin{displaymath} \begin{split} \int(1&-3Q^2) \varepsilon \big(P(\chi_b\varphi-1)\big)_y \\&=\int(1-3Q^2) \varepsilon P'(\chi_b\varphi-1)+\int(1-3Q^2) \varepsilon P\chi_b'\varphi+\int(1-3Q^2) \varepsilon P_b\varphi_y \, . \end{split} \end{displaymath} By using \eqref{BS.1}, $P'\in \mathcal Y_2$, and \eqref{phi_B}, we have \begin{displaymath} \begin{split} \left|\int(1-3Q^2) \varepsilon P'(\chi_b\varphi-1) \right| &\lesssim \|\varepsilon\|_{L^2}\left(\int_{y<-\frac{B}2|s|^{\theta}}\frac1{|y|^4}\right)^{\frac12}+|s|^{-2\theta}\int_{y>-\frac{B}2|s|^{\theta}} |\varepsilon|\frac{|y|}{(1+|y|)^2} \\ & \lesssim |s|^{-\frac12-\frac{3\theta}2}+|s|^{-2\theta}\mathcal{N}(\varepsilon) \, , \end{split} \end{displaymath} \[ \left|\int(1-3Q^2) \varepsilon P\chi_b'\varphi \right| \lesssim |b|\left(\int\varepsilon^2\varphi \right)^{\frac12}\left(\int_{-2|s|<y<-|s|}\frac1{|y|} \right)^{\frac12} \lesssim |s|^{-1}\mathcal{N}(\varepsilon) \, , \] and \[\left|\int(1-3Q^2) \varepsilon P_b\varphi_y \right| \lesssim \left( \int \varepsilon^2 \varphi_y \right)^{\frac12} \|\varphi_y\|_{L^1}^{\frac 12} \lesssim \left( \int \varepsilon^2 \varphi_y \right)^{\frac12} \, . \] Hence, we deduce gathering those estimates and using \eqref{bootstrap.4} that \begin{equation} \label{eq:k112.2} \left|\int(1-3Q^2) \varepsilon \big(P(\chi_b\varphi-1)\big)_y\right| \lesssim |s|^{-\frac12-\frac{3\theta}2}+\left(\int \varepsilon^2 \varphi_y \right)^{\frac12} \, , \end{equation} since $2-\frac{\theta}2>\frac12+\frac{3\theta}2$. Therefore, we conclude from \eqref{eq:k112.1} and \eqref{eq:k112.2} that \begin{equation} \label{eq:k112} |k_{1,1,2}| \lesssim |s|^{-\frac12-\frac{3\theta}2}+\left(\int \varepsilon^2 \varphi_y \right)^{\frac12} \, . \end{equation} Next, we derive an estimate for $k_{1,2}$. 
Since \[k_{1,2}=3\int (2bQP_b+b^2P_b^2) \varepsilon \big(D^1P_b+P_b\varphi-3Q^2P_b \big)_y \, , \] we deduce easily from \eqref{BS.1} that \begin{equation} \label{eq:k12} |k_{1,2}| \lesssim |b|\|\varepsilon\|_{L^2} \lesssim |s|^{-\frac32} \, . \end{equation} Finally, since \[ k_{1,3}=\int \big(3(Q+bP_b)\varepsilon^2+\varepsilon^3\big)\big(D^1P_b+P_b\varphi-3Q^2P_b \big)_y \, ,\] we deduce by using the Cauchy-Schwarz inequality, the Gagliardo-Nirenberg inequalities \eqref{gnp} with $p=4$ and $p=6$, \eqref{bootstrap.4} and \eqref{BS.1} that \begin{equation} \label{eq:k13} |k_{1,3}| \lesssim \left( \int \varepsilon^4\right)^{\frac12}+\left( \int \varepsilon^6\right)^{\frac12} \lesssim \|\varepsilon\|_{L^2} \, \big(\mathcal{N}(\varepsilon)+\mathcal{N}(\varepsilon)^2\big) \lesssim C^{\star}|s|^{-\frac32+\frac{\theta}2} \, . \end{equation} Therefore, we conclude gathering \eqref{BS.4} (to control $( \varepsilon,Q)$), \eqref{eq:k111}, \eqref{eq:k113}, \eqref{eq:k112}, \eqref{eq:k12} and \eqref{eq:k13} that \begin{equation} \label{eq:k1} |k_1| \lesssim C^{\star}|s|^{-\frac32+\frac{\theta}2} +\left(\int \varepsilon^2 \varphi_y \right)^{\frac12}\, . \end{equation} \noindent \emph{Estimate for $k_2$.} We decompose $k_2$ as \begin{displaymath} \begin{split} k_2&=\frac {\lambda_s}\lambda \int \Lambda \epsilon\big( D^1P_b+P_b\varphi-3Q^2P_b \big) \\ &=k_{2,1}+k_{2,2}+k_{2,3} \, , \end{split} \end{displaymath} and estimate each term separately. By the definition of $\Lambda \varepsilon$, we have \[ \int \Lambda \varepsilon D^1P_b=\frac12\int \varepsilon D^1P_b+\int y\varepsilon_y D^1P_b \, . \] On the one hand, we see from \eqref{bd.PbRb} and \eqref{BS.1} that \[ \left|\int \varepsilon D^1P_b\right|\lesssim \|D^{\frac12}\varepsilon\|_{L^2} \|D^{\frac12}P_b\|_{L^2} \lesssim \mathcal{N}(\varepsilon) |\ln |s||^{\frac12} \, . 
\] On the other hand, we have from the properties of $\mathcal{H}$ \[ \int y\varepsilon_y D^1P_b=-\int \mathcal{H}(y\varepsilon_y)P_b'=-\int y (\mathcal{H}\varepsilon_y) P_b'=-\int yD^1\varepsilon P_b' \, , \] so that by \eqref{yPbprime} \[ \left|\int y\varepsilon_y D^1P_b \right| \lesssim \|D^{\frac12}\varepsilon\|_{L^2} \|D^{\frac12}(yP_b')\|_{L^2} \lesssim \mathcal{N}(\varepsilon) \, . \] Hence, it follows from \eqref{bootstrap.4}, \eqref{BS.1} and \eqref{BS.2} that \[ |k_{2,1}| \lesssim (C^{\star})^2|s|^{-2+\theta}|\ln |s||^{\frac12} \, . \] After integrating by parts, we have \[ \int \Lambda \varepsilon P_b\varphi=-\frac12 \int \varepsilon P_b\varphi- \int y\varepsilon P_b'\varphi -\int y\varepsilon P_b\varphi_y\, . \] Moreover, \[ \left| \int \varepsilon P_b\varphi \right|+\left| \int y\varepsilon P_b'\varphi \right| \lesssim \left(\int \varepsilon^2 \varphi\right)^{\frac12} \left(\left(\int P_b^2\varphi\right)^{\frac12} +\left(\int (yP_b')^2\varphi\right)^{\frac12}\right) \lesssim \mathcal{N}(\varepsilon) |s|^{\frac{\theta}2} \, , \] \[ \left| \int y\varepsilon P_b\varphi_y \right| \lesssim \left(\int (yP_b)^2 \varphi_y\right)^{\frac12}\left(\int \varepsilon^2 \varphi_y\right)^{\frac12} \] and \[ \int (yP_b)^2 \varphi_y \lesssim \int_{y>0}\varphi_y+\int_{-2B|s|^{\theta}<y<0}|y|^2\varphi_y+\int_{-2|s|<y<-2B|s|^{\theta}}B\frac{y^2}{1+y^2}\lesssim |s|^{2\theta} \, . \] Hence, we deduce from \eqref{bootstrap.4} and \eqref{BS.1} and \eqref{BS.2} that \[ |k_{2,2}| \lesssim (C^{\star})^2|s|^{-2+\frac{3\theta}2} +C^{\star}|s|^{-1+\frac{3\theta}2}\left(\int \varepsilon^2 \varphi_y\right)^{\frac12} \, . \] Finally, we get easily \[ \left|\int \Lambda \varepsilon Q^2P_b \right| \lesssim \|\varepsilon\|_{L^2} \|\Lambda (Q^2P_b)\|_{L^2} \lesssim \|\varepsilon\|_{L^2} \, , \] so that, by using \eqref{BS.1}, \[ |k_{2,3}| \lesssim C^{\star}|s|^{-\frac32+\frac{\theta}2} \, . 
\] Therefore, we conclude gathering those estimates that \begin{equation} \label{eq:k2} |k_2| \lesssim (C^{\star})^2|s|^{-2+\frac{3\theta}2} +C^{\star}|s|^{-1+\frac{3\theta}2}\left(\int \varepsilon^2 \varphi_y\right)^{\frac12} \, . \end{equation} \noindent \emph{Estimate for $k_3$.} We split $k_3$ as \begin{displaymath} \begin{split} k_3&=(\frac {\lambda_s}\lambda+b) \int \Lambda Q \mathcal{L}_{\varphi}P_b +(\frac {\lambda_s}\lambda+b)b \int \Lambda P_b \mathcal{L}_{\varphi}P_b\\ &=k_{3,1}+k_{3,2} \, , \end{split} \end{displaymath} and estimate each term separately. By using \eqref{LvarphiPb} and the identity $\mathcal{L}(\Lambda Q)=-Q$, we have (see the definition of $p_0=(P,Q)$ in \eqref{nlprofile.2}) \[k_{3,1}=(\frac {\lambda_s}\lambda+b)\left(-p_0+\int \Lambda Q\big(D^1(P(\chi_b-1))+P(\chi_b\varphi-1)-3Q^2P(\chi_b-1)\big)\right) \, . \] Moreover, since $\Lambda Q \in \mathcal{Y}_2$, Lemma \ref{HY} implies that $\mathcal{H}(\Lambda Q) \in \mathcal{Y}_1$, and thus \[ \left|\int \Lambda Q D^1(P(\chi_b-1))\right| = \left|\int P(\chi_b-1) \partial_y\mathcal{H}(\Lambda Q) \right| \lesssim \int_{y <-|s|}\frac1{|y|^2} \lesssim |s|^{-1} \, . \] By using \eqref{phi_B}, $\Lambda Q\in \mathcal Y_2$ and $P\in \mathcal Z$, we also have that \[ \begin{split} &\left|\int \Lambda Q (P(\chi_b\varphi-1))\right|\\ & \lesssim \int_{y <-\frac{B}2|s|^{\theta}}\frac1{|y|^2}+|s|^{-2\theta}\left(\int_{-\frac{B}2|s|^{\theta}<y<0}\frac{|y|}{(1+|y|)^2}+\int_{y>0}\frac{|y|}{(1+|y|)^3}\right) \lesssim |s|^{-\theta} \, , \end{split} \] and \[ \left|\int \Lambda Q Q^2P(\chi_b-1)\right| \lesssim \int_{y<-|s|}\frac1{|y|^6} \lesssim |s|^{-5} \, . \] Hence, we deduce from \eqref{bootstrap.1} and \eqref{BS.2} that \[ \big|k_{3,1}+(\frac {\lambda_s}\lambda+b)p_0\big| \lesssim C^{\star}|s|^{-1-\frac{\theta}2}. \] Next, we look at $k_{3,2}$. 
We have that \[ k_{3,2}=(\frac {\lambda_s}\lambda+b)b\int \big(\frac12 P_b+yP_b'\big) \big(D^1P_b+P_b\varphi-3Q^2P_b \big) \] By using \eqref{bd.PbRb} and \eqref{yPbprime}, we see that \[ \left|\int P_b D^1P_b \right| =\|D^{\frac12}P_b\|_{L^2}^2 \lesssim \ln|s| \] and \[ \left|\int yP_b' D^1P_b \right| \le \|D^{\frac12}(yP_b')\|_{L^2}\|D^{\frac12}P_b\|_{L^2} \lesssim |\ln|s||^{\frac12} \, . \] We also have that (see the definition of $\varphi$ in \eqref{def.varphi}) \begin{equation}\label{encore.truc} \left|\int P_b^2\varphi \right| \lesssim \int_{-2 |b|^{-1}<y<-10 B|s|^{\theta}} \frac {dy} {|y|} + \int_{y>-10 B|s|^{\theta}} P^2 \lesssim |s|^{\theta}, \end{equation} \[ \left|\int yP_b'P_b\varphi \right| \lesssim \int |P'|+|b|\int |\chi'(|b|y)| \lesssim 1 \, \] and \[ \left|\int \Lambda P_bQ^2P_b \right| \lesssim \int Q^2 \lesssim 1 \, . \] Hence, we deduce from \eqref{bootstrap.4} and \eqref{BS.1} that \[ \big|k_{3,2}\big| \lesssim C^{\star}|s|^{-2+\frac{3\theta}2} \] Therefore, we conclude gathering those estimates that \begin{equation} \label{eq:k3} \big|k_{3}+(\frac {\lambda_s}\lambda+b)p_0\big| \lesssim C^{\star} |s|^{-2+\frac{3\theta}2} \, . \end{equation} \noindent \emph{Estimate for $k_4$.} We decompose $k_4$ as follows. \begin{displaymath} \begin{split} k_4&=(\frac {x_s}\lambda-1) \int Q' \mathcal{L}_{\varphi}P_b +(\frac {x_s}\lambda-1)b \int P_b'\mathcal{L}_{\varphi}P_b +(\frac {x_s}\lambda-1)\int \varepsilon_y\mathcal{L}_{\varphi}P_b \\ &=k_{4,1}+k_{4,2}+k_{4,3} \, . \end{split} \end{displaymath} We estimate each term separately. The decomposition \eqref{LvarphiPb} and the property $\mathcal{L}(Q')=0$ imply \[k_{4,1}=(\frac {x_s}\lambda-1)\int Q'\big(D^1(P(\chi_b-1))+P(\chi_b\varphi-1)-3Q^2P(\chi_b-1)\big) \, . 
\] Moreover, we have the following bounds \[\left| \int Q' D^1(P(\chi_b-1))\right| \lesssim \|Q\|_{L^2}\|(P(\chi_b-1))''\|_{L^2}\lesssim |s|^{-\frac32} \, , \] \[\left| \int Q' P(\chi_b\varphi-1)\right| \lesssim \int_{y<-\frac{B}2|s|^{\theta}}\frac1{|y|^3}+|s|^{-2\theta}\int \frac{|y|}{(1+|y|)^3} \lesssim |s|^{-2\theta}\, , \] thanks to \eqref{phi_B}, and \[\left| \int Q' Q^2P(\chi_b-1)\right| \lesssim \int_{y < -|s|}\frac1{|y|^7} \lesssim |s|^{-6}\, . \] Thus, it follows from \eqref{BS.2} that $|k_{4,1}| \lesssim C^{\star}|s|^{-1-\frac{3\theta}2}$. By using that $\int P_b'D^1P_b=\int P_b'\mathcal{H}(P_b')=0$, we rewrite $k_{4,2}$ as \[ k_{4,2}=(\frac {x_s}\lambda-1)b \int P_b' P_b\varphi-3(\frac {x_s}\lambda-1)b \int P_b'Q^2P_b \, . \] We also observe that \[\left| \int P_b' P_b\varphi \right| =\frac 12 \left| \int P_b^2\varphi_y \right| \lesssim 1 \quad \hbox{and}\quad \left| \int P_b' Q^2P_b \right| \lesssim \int \frac1{1+|y|^2}\lesssim 1 \, . \] Thus, it follows from \eqref{BS.1} and \eqref{BS.2} that $|k_{4,2}| \lesssim C^{\star}|s|^{-2+\frac{\theta}2}$. We rewrite $k_{4,3}$ as \[ k_{4,3}=(\frac {x_s}\lambda-1)\int \varepsilon_y\big(D^1P_b+P_b\varphi-3Q^2P_b \big) \, . 
\] Moreover, we observe that \[ \left| \int \varepsilon_yD^1P_b\right| \lesssim \|D^{\frac12}\varepsilon\|_{L^2} \|D^{\frac32}P_b\|_{L^2} \lesssim \mathcal{N}(\varepsilon) \, ,\] \[ \begin{split} \left| \int \varepsilon_yP_b\varphi\right| & \le \left| \int \varepsilon P_b'\varphi\right|+ \left| \int \varepsilon P_b\varphi_y\right|\\ & \lesssim \left(\int \varepsilon^2\varphi \right)^{\frac12} \|P_b'\|_{L^2}+ \left(\int \varepsilon^2\varphi_y\right)^{\frac12} \left(\int \varphi_y\right)^{\frac12} \lesssim \mathcal{N}(\varepsilon) \, , \end{split} \] and \[ \left| \int \varepsilon_yQ^2P_b\right| \lesssim \|D^{\frac12}\varepsilon\|_{L^2} \|D^{\frac12}(Q^2P_b)\|_{L^2} \lesssim \mathcal{N}(\varepsilon) \, .\] Then, we deduce from \eqref{bootstrap.4} that $ |k_{4,3}| \lesssim (C^{\star})^2|s|^{-2+\theta}$. Therefore, we conclude gathering those estimates that \begin{equation} \label{eq:k4} |k_4| \lesssim (C^{\star})^2|s|^{-2+\theta} \, . \end{equation} \noindent \emph{Estimate for $k_5$.} By using \eqref{def.DQb}, we decompose $k_5$ as \begin{displaymath} k_5 = -(b_s+b^2) \int \frac{\partial Q_b}{\partial b} \mathcal{L}_{\varphi}P_b=-(b_s+b^2)\int \left( P_b+y\chi_b'P\right)\left(D^1P_b+P_b \varphi-3Q^2P_b \right) \, , \end{displaymath} and estimate each term separately. First, we deduce from \eqref{bd.PbRb} that \begin{displaymath} \left|\int P_bD^1P_b \right|=\|D^{\frac12}P_b\|_{L^2}^2 \lesssim \ln|s| \, , \end{displaymath} and \begin{displaymath} \begin{split} \left|\int y\chi_b'PD^1P_b \right|&\le\|D^{\frac12}(y\chi_b'P)\|_{L^2}\|D^{\frac12}P_b\|_{L^2} \\ &\lesssim \|y\chi_b'P\|_{L^2}^{\frac12}\big\|\big(y\chi_b'P\big)'\big\|_{L^2}^{\frac12} \|D^{\frac12}P_b\|_{L^2} \lesssim |\ln|s||^{\frac12} \, . 
\end{split} \end{displaymath} Next, in addition to \eqref{encore.truc}, $\left|\int P_b^2 \varphi \right| \lesssim |s|^{\theta} $, we have \begin{displaymath} \left|\int y\chi_b'PP_b \varphi \right| \lesssim |b| \int_{-2|s| \le y \le -|s|}\frac{|y|}{1+|y|}|\chi'(|b|y)| \lesssim 1 \, . \end{displaymath} Finally, we see easily that \begin{displaymath} \left|\int \left( P_b+y\chi_b'P\right)Q^2P_b\right| \lesssim 1 \, . \end{displaymath} Therefore, we conclude gathering those estimates and using \eqref{BS.3} that \begin{equation} \label{eq:k5} |k_5| \lesssim C^{\star}|s|^{-2+\frac{3\theta}2}\, . \end{equation} \noindent \emph{Estimate for $k_6$.} Recall that \begin{displaymath} k_6 = \int \Psi_b \mathcal{L}_{\varphi}P_b=\int \Psi_b\left(D^1P_b+P_b \varphi-3Q^2P_b \right) \, . \end{displaymath} First, it follows from \eqref{bd.PbRb} and \eqref{lprofile.3} that \begin{displaymath} \left|\int \Psi_b D^1P_b \right| \le \|D^{\frac12}\Psi_b\|_{L^2}\|D^{\frac12}P_b\|_{L^2} \lesssim |s|^{-2}\ln|s| \, . \end{displaymath} Second, we deduce from \eqref{PsibVarphi} and \eqref{encore.truc} that \begin{displaymath} \left|\int \Psi_b P_b \varphi \right| \le \left( \int \Psi_b^2\varphi \right)^{\frac12}\left( \int P_b^2\varphi \right)^{\frac12} \lesssim |s|^{-2+\theta} \, . \end{displaymath} Finally, \eqref{lprofile.2} yields \begin{displaymath} \left|\int \Psi_b Q^2P_b \right| \le \left( \int \Psi_b^2Q^2\right)^{\frac12}\left( \int P_b^2Q^2 \right)^{\frac12} \lesssim |s|^{-2} \, . \end{displaymath} Therefore, we conclude that \begin{equation} \label{eq:k6} |k_6| \lesssim |s|^{-2+\theta}\, . 
\end{equation} \noindent \emph{Estimate for $k_7$.} By using \eqref{varphi_s}, we decompose the first term of $k_7$ as \begin{displaymath} \int \varepsilon \varphi_s P_b=\theta s^{-1}|s|^{\theta}\int \varepsilon \frac{\phi'(\frac{y}{B}+|s|^{\theta})}{\phi(|s|^{\theta})}P_b-\theta s^{-1}|s|^{\theta}\int \varepsilon \frac{\phi(\frac{y}{B}+|s|^{\theta}) \phi'(|s|^{\theta})}{\phi^2(|s|^{\theta})}P_b \, . \end{displaymath} We deduce from \eqref{def.Dvarphi} that \begin{displaymath} |s|^{-1+\theta}\Big|\int \varepsilon \frac{\phi'(\frac{y}{B}+|s|^{\theta})}{\phi(|s|^{\theta})}P_b \Big| \lesssim |s|^{-1+\theta}\left( \int \varepsilon^2 \varphi_y \right)^{\frac12} \end{displaymath} and by \eqref{encore.truc} \begin{displaymath} \begin{split} |s|^{-1+\theta}\left|\int \varepsilon \frac{\phi(\frac{y}{B}+|s|^{\theta}) \phi'(|s|^{\theta})}{\phi^2(|s|^{\theta})}P_b\right| &\lesssim |s|^{-1-\theta}\left( \int \varepsilon^2 \varphi \right)^{\frac12} \left( \int P_b^2 \varphi \right)^{\frac12} \lesssim |s|^{-1-\frac{\theta}2} \mathcal{N}(\varepsilon) \, . \end{split} \end{displaymath} Hence, it follows from \eqref{bootstrap.4} that \begin{equation} \label{eq:k7.1} \left| \int \varepsilon \varphi_s P_b \right| \lesssim |s|^{-1+\theta}\left( \int \varepsilon^2 \varphi_y \right)^{\frac12}+C^{\star}|s|^{-2} \, . \end{equation} To deal with the second term in $k_7$, we use $\frac{\partial P_b}{\partial b}=-y\chi'(|b|y)P$, so that \begin{displaymath} \mathcal{L}_{\varphi}\left(\frac{\partial P_b}{\partial b}\right)= D^1\left(\frac{\partial P_b}{\partial b}\right)-y\chi'(|b|y)P\varphi+3Q^2y\chi'(|b|y)P \, . \end{displaymath} We estimate each corresponding term separately. First, we deduce by using \eqref{dPb_db} that \begin{displaymath} \left| \int \varepsilon D^1\left(\frac{\partial P_b}{\partial b}\right) \right| \lesssim \|D^{\frac12}\varepsilon\|_{L^2}\Big\| D^{\frac12}\left(\frac{\partial P_b}{\partial b}\right) \Big\|_{L^2} \lesssim |s| \mathcal{N}(\varepsilon) \, . 
\end{displaymath} Moreover, \begin{displaymath} \left| \int \varepsilon \frac{\partial P_b}{\partial b} \varphi \right| \lesssim \left( \int \varepsilon^2 \varphi \right)^{\frac12} \left( \int |y \chi'(|b|y)|^2 \frac1{|y|} \right)^{\frac12} \lesssim |s| \mathcal{N}(\varepsilon) \, , \end{displaymath} and \begin{displaymath} \left| \int Q^2\varepsilon \frac{\partial P_b}{\partial b} \right| \lesssim \left( \int \varepsilon^2 \varphi \right)^{\frac12} \left( \int |y \chi'(|b|y)|^2 \frac1{|y|^6} \right)^{\frac12} \lesssim |s|^{-\frac32} \mathcal{N}(\varepsilon) \, , \end{displaymath} since $Q \in \mathcal{Y}_2$. Then it follows from \eqref{bootstrap.4} and \eqref{BS.5} that \begin{equation} \label{eq:k7.2} \left| b_s \int \varepsilon \mathcal{L}_{\varphi}\left(\frac{\partial P_b}{\partial b}\right)\right| \lesssim (C^{\star})^2 |s|^{-2+\theta} \, . \end{equation} Therefore, we deduce gathering \eqref{eq:k7.1} and \eqref{eq:k7.2} that \begin{equation} \label{eq:k7} |k_7| \lesssim |s|^{-1+\theta}\left( \int \varepsilon^2 \varphi_y \right)^{\frac12}+(C^{\star})^2 |s|^{-2+\theta} \, . \end{equation} Finally, we conclude the proof of estimate \eqref{eq:K} gathering estimates \eqref{eq:k1}, \eqref{eq:k2}, \eqref{eq:k3}, \eqref{eq:k4}, \eqref{eq:k5}, \eqref{eq:k6}, \eqref{eq:k7} and taking $|S_0|$ large enough. \end{proof} \subsection{Remaining terms} \begin{lemma} For $|S_0|$ large enough possibly depending on $C^{\star}$ and for all $s\in \mathcal I^\star$, \begin{equation}\label{eq:Z} |Z|\lesssim |s|^{-2+\theta}, \end{equation} and \begin{equation}\label{eq:dZ} \left|\frac {d Z}{ds}\right|\lesssim |s|^{-2}+|s|^{-\frac{3\theta}2}\left(\int \varepsilon^2 \varphi_{y} \right)^{\frac12}. \end{equation} \end{lemma} \begin{proof} Recall that $Z=Z_{1}+Z_{2}$, where \[Z_1=\int \varepsilon^2 y \varphi_y \chi_1,\quad Z_2 = - 2 \int \Lambda Q(\varphi-1)\varepsilon \chi_2.\] \noindent \textit{Proof of \eqref{eq:Z}}. 
Indeed, \[ |Z_{1}|\lesssim \int_{y<-|s|^{\frac 23}} |\varepsilon|^2 |y|^{-1} \lesssim |s|^{-1-\frac 23}\lesssim |s|^{-2+\theta}, \] and, since $\theta>\frac 35$, \[ |Z_{2}|\lesssim \int_{y<-\frac B 4 |s|^{\theta}} |\varepsilon| |y|^{-2} \lesssim \|\varepsilon\|_{L^2} \left(\int_{y<-\frac B 4 |s|^{\theta}} |y|^{-4} \right)^{\frac 12} \lesssim |s|^{-\frac 12 -\frac 32 \theta}\lesssim |s|^{-2+\theta}. \] Now, we prove \begin{equation}\label{eq:dZ1} \left|\frac {d Z_1}{ds}\right|\lesssim |s|^{-2} \, , \end{equation} and \begin{equation}\label{eq:dZ2} \left|\frac {d Z_2}{ds}\right|\lesssim |s|^{-2}+|s|^{-\frac{3\theta}2}\left(\int \varepsilon^2 \varphi_{y} \right)^{\frac12} \, , \end{equation} which together prove \eqref{eq:dZ}. \noindent \textit{Proof of \eqref{eq:dZ1}.} Observe from the definition of $Z_1$ that \[ \frac{dZ_1}{ds}=2\int \varepsilon \varepsilon_s y \varphi_y \chi_1+\int \varepsilon^2\partial_s(\varphi_y)\chi_1+\int \varepsilon^2\varphi_y\partial_s(\chi_1)\] which implies by using \eqref{modulation.4} that \[ \frac{dZ_1}{ds}=z_{11}+z_{12}+z_{13}+z_{14}+z_{15}+z_{16}+z_{17} , \] where \begin{align*} z_{11}&=2\int V_y \varepsilon y \varphi_y \chi_1, & z_{12}&=2\frac{\lambda_s}{\lambda}\int \varepsilon \Lambda \varepsilon y \varphi_y \chi_1, \\ z_{13}&=2\big(\frac{\lambda_s}{\lambda}+b\big)\int \Lambda Q_b \varepsilon y \varphi_y \chi_1, & z_{14}&=2\big(\frac{x_s}{\lambda}-1\big)\int \big( Q_b+\varepsilon\big)_y \varepsilon y \varphi_y \chi_1,\\ z_{15}&=-2\big(b_s+b^2\big)\int \frac{\partial Q_b}{\partial b} \varepsilon y \varphi_y \chi_1,& z_{16}&=2\int \Psi_b \varepsilon y \varphi_y \chi_1,\\ z_{17}&=\int \varepsilon^2\partial_s(\varphi_y)\chi_1+\int \varepsilon^2\varphi_y\partial_s(\chi_1)\, . 
\end{align*} First, we claim the following estimates \begin{equation} \label{eq:dZ1.1} \|y\chi_1 \varphi_y\|_{L^{\infty}} \lesssim |s|^{-\frac23}, \quad \|(y\chi_1 \varphi_y)_y\|_{L^{\infty}} \lesssim |s|^{-\frac43}, \quad \|(y\chi_1 \varphi_y)_{yy}\|_{L^{\infty}} \lesssim |s|^{-2} , \end{equation} and \begin{equation} \label{eq:dZ1.2} \|y\chi_1 \varphi_y\|_{L^2} \lesssim|s|^{-\frac13}, \quad \|(y\chi_1 \varphi_y)_y\|_{L^2} \lesssim |s|^{-1}, \quad \|(y\chi_1 \varphi_y)_{yy}\|_{L^2} \lesssim |s|^{-\frac53} \, , \end{equation} which follow directly from the definition of $\varphi$ in \eqref{def.varphi} and the definition of $\chi_1$ in \eqref{chi1chi2}. \noindent \emph{Estimate for $z_{11}$.} By using the definition of $V$ in \eqref{defV}, we rewrite $z_{11}$ as \begin{align*} z_{11} &= 2\int (D^1\varepsilon)_y \varepsilon y \chi_1 \varphi_y+2\int \big((1-3Q^2)\varepsilon\big)_y \varepsilon y \chi_1 \varphi_y -6\int \big((Q_b^2-Q^2)\varepsilon\big)_y \varepsilon y \chi_1 \varphi_y\\ &\quad -6\int\big(Q_b\varepsilon^2\big)_y \varepsilon y \chi_1 \varphi_y -2\int(\varepsilon^3)_y \varepsilon y \chi_1 \varphi_y \\ &=z_{111}+z_{112}+z_{113}+z_{114}+z_{115} \, . \end{align*} We estimate each term separately. First, we observe after integration by parts that \[ z_{111} =- 2\int (D^1\varepsilon) \varepsilon_y (y \chi_1 \varphi_y) - 2\int (D^1\varepsilon) \varepsilon (y \chi_1 \varphi_y)_y \, . \] Applying \eqref{m.1} with $a=y \chi_1 \varphi_y$, from \eqref{eq:dZ1.1} and \eqref{BS.1}, one obtains \[ \left| \int (D^1\varepsilon) \varepsilon_y (y \chi_1 \varphi_y)\right|\lesssim \| \varepsilon\|_{L^2}^2 \| (y \chi_1 \varphi_y)_{yy}\|_{L^\infty} \lesssim |s|^{-3}. 
\] Applying \eqref{m.2} with $a=y \chi_1 \varphi_y$, from \eqref{eq:dZ1.1}-\eqref{eq:dZ1.2} and \eqref{bootstrap.4}, \eqref{BS.1}, one obtains \begin{align*} \left| \int (D^1\varepsilon) \varepsilon (y \chi_1 \varphi_y)_y\right| & \lesssim \int |D^{\frac 12} \varepsilon|^2 |(y \chi_1 \varphi_y)_y| + \|D^{\frac 12} \varepsilon\|_{L^2}^{\frac 32} \| \varepsilon\|_{L^2}^{\frac 12} \|(y \chi_1 \varphi_y)_{yy}\|_{L^2}^{\frac 34} \|(y \chi_1 \varphi_y)_{y}\|_{L^2}^{\frac 14} \\ & \lesssim (C^\star)^2 |s|^{-\frac {10}3 + \theta} + (C^\star)^{\frac 32} |s|^{-3+\frac 34 \theta} \lesssim |s|^{-2}.\end{align*} Hence, \begin{equation} \label{eq:dZ111} |z_{111}| \lesssim |s|^{-2} \,. \end{equation} Second, we get after integration by parts that \begin{displaymath} z_{112}= -6\int QQ'\varepsilon^2 y\chi_1 \varphi_y -\int (1-3Q^2) \varepsilon^2 (y \chi_1 \varphi_y)_y \, . \end{displaymath} Thus, it follows from \eqref{eq:dZ1.1} and \eqref{BS.1} that \begin{equation} \label{eq:dZ112} |z_{112}| \lesssim \left( \|QQ'y\chi_1 \varphi_y \|_{L^{\infty}} +\|(y\chi_1 \varphi_y)_y \|_{L^{\infty}}\right) \|\varepsilon\|_{L^2}^2 \lesssim |s|^{-\frac73} \, . \end{equation} Also integrating by parts, we have \begin{displaymath} \begin{split} z_{113}&=-3b\int (2QP_b+bP_b^2)_y \varepsilon^2 y \chi_1 \varphi_y +3b\int (2QP_b+bP_b^2) \varepsilon^2 (y \chi_1 \varphi_y)_y \end{split} \end{displaymath} so that \begin{equation} \label{eq:dZ113} |z_{113}| \lesssim |b|\big( \|(2QP_b+bP_b^2)_yy\chi_1 \varphi_y \|_{L^{\infty}} +\|(2QP_b+bP_b^2)(y\chi_1 \varphi_y)_y \|_{L^{\infty}}\big) \|\varepsilon\|_{L^2}^2 \lesssim |s|^{-\frac{8}3} \, , \end{equation} thanks to \eqref{BS.1} and \eqref{eq:dZ1.1}. Similarly, \begin{displaymath} z_{114}=-2 \int (Q+bP_b)_y \varepsilon^3 y \chi_1 \varphi_y +4\int (Q+bP_b) \varepsilon^3 (y \chi_1 \varphi_y)_y \, . 
\end{displaymath} Hence, it follows from \eqref{bootstrap.4}, \eqref{BS.1} and \eqref{gnp} (with $p=3$) that \begin{equation} \label{eq:dZ114}\begin{aligned} |z_{114}| &\lesssim \big( \|(Q+bP_b)_yy\chi_1 \varphi_y \|_{L^{\infty}} +\|(Q+bP_b)(y\chi_1 \varphi_y)_y \|_{L^{\infty}}\big) \int |\varepsilon|^3 \\ & \lesssim |s|^{-\frac 23} \|\varepsilon\|_{L^2}^2 \mathcal N(\varepsilon) \lesssim C^{\star}|s|^{-\frac83+\frac{\theta}2} \lesssim |s|^{-2} . \end{aligned}\end{equation} Finally, integration by parts and \eqref{gnp} (with $p=4$) yield \begin{equation} \label{eq:dZ115} |z_{115}| =\left|\frac32\int \varepsilon^4 (y\chi_1 \varphi_y)_y \right| \lesssim \|(y\chi_1 \varphi_y)_y \|_{L^{\infty}} \|D^{\frac12}\varepsilon\|_{L^2}^2\|\varepsilon\|_{L^2}^2 \lesssim (C^{\star})^2|s|^{-\frac{13}3+\theta} \lesssim |s|^{-2} . \end{equation} Therefore, we deduce combining \eqref{eq:dZ111}-\eqref{eq:dZ115} that \begin{equation} \label{eq:dZ11} |z_{11}| \lesssim |s|^{-2} \, . \end{equation} \noindent \emph{Estimate for $z_{12}$.} We have $\Lambda \varepsilon = \frac \varepsilon 2 + y \varepsilon_y$ and integrating by parts, \begin{displaymath} z_{12}=\frac{\lambda_s}{\lambda} \int \varepsilon^2 y\chi_1\varphi_y -\frac{\lambda_s}{\lambda} \int \varepsilon^2 (y^2\chi_1\varphi_y)_y \, . \end{displaymath} Moreover, by \eqref{BS.1} and \eqref{BS.2}, $|\frac{\lambda_s}{\lambda}|\lesssim |\frac{\lambda_s}{\lambda}+b|+|b|\lesssim C^\star |s|^{-1+\frac \theta 2}$, and thus, by \eqref{eq:dZ1.1}, \begin{equation} \label{eq:dZ12} |z_{12}| \lesssim C^\star |s|^{-1+\frac \theta 2} \big( \|y\chi_1 \varphi_y \|_{L^{\infty}} +\|(y^2\chi_1 \varphi_y)_y \|_{L^{\infty}}\big)\|\varepsilon\|_{L^2}^2 \lesssim C^{\star}|s|^{-\frac{8}3+\frac{\theta}2} \lesssim |s|^{-2}, \end{equation} since $\theta < \frac43$. 
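For the record, the condition on $\theta$ used in the last step is simply the arithmetic \[ -\frac83+\frac{\theta}2 \le -2 \quad \Longleftrightarrow \quad \theta \le \frac43 \, , \] so \eqref{eq:dZ12} indeed holds under the standing assumptions on $\theta$.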
\noindent \emph{Estimate for $z_{13}$.} Since \begin{displaymath} z_{13}=2\big(\frac{\lambda_s}{\lambda}+b\big)\int \big(\Lambda Q+b\Lambda P_b\big) \varepsilon y \varphi_y \chi_1 \, , \end{displaymath} we deduce from \eqref{BS.1}, \eqref{BS.2} and \eqref{eq:dZ1.2} that \begin{equation} \label{eq:dZ13} \begin{split} |z_{13}| &\lesssim C^\star |s|^{-1+\frac \theta 2}\big( \|\Lambda Qy\chi_1 \varphi_y \|_{L^2} +|b|\|\Lambda P_by\chi_1 \varphi_y \|_{L^2}\big)\|\varepsilon\|_{L^2} \\ & \lesssim C^{\star}|s|^{-1+\frac{\theta}{2}} (|s|^{-\frac53}+|s|^{-\frac43})|s|^{-\frac12} \lesssim C^{\star}|s|^{-\frac{17}6+\frac{\theta}2} \lesssim |s|^{-2}\, . \end{split} \end{equation} \noindent \emph{Estimate for $z_{14}$.} Integrating by parts, we have \begin{displaymath} z_{14}=\big(\frac{x_s}{\lambda}-1\big)\left(2\int \big( Q+bP_b\big)_y \varepsilon y \chi_1\varphi_y -\int \varepsilon^2 ( y \chi_1\varphi_y )_y\right) \, . \end{displaymath} Moreover, observe from \eqref{BS.1} and $Q\in \mathcal Y_2$ that \begin{displaymath} \left|\int \big( Q+bP_b\big)_y \varepsilon y \chi_1\varphi_y \right| \lesssim \big\|\big( Q+bP_b\big)_y y \chi_1\varphi_y \big\|_{L^2} \|\varepsilon\|_{L^2} \lesssim |s|^{-\frac73-\frac12} \end{displaymath} and from \eqref{BS.1} and \eqref{eq:dZ1.1} that \begin{displaymath} \left|\int \varepsilon^2 ( y \chi_1\varphi_y )_y \right| \lesssim \|( y \chi_1\varphi_y )_y\|_{L^{\infty}} \|\varepsilon\|_{L^2}^2 \lesssim |s|^{-\frac73} \, . \end{displaymath} Hence, we deduce from \eqref{BS.2} that \begin{equation} \label{eq:dZ14} |z_{14}| \lesssim C^{\star}|s|^{-1+\frac{\theta}2}|s|^{-\frac73} \lesssim |s|^{-2} . \end{equation} \noindent \emph{Estimate for $z_{15}$.} Recalling \eqref{def.DQb}, we have \begin{displaymath} z_{15}=-2\big(b_s+b^2\big)\int \big( P_b+yP\chi_b'\big) \varepsilon y \chi_1\varphi_y \, . 
\end{displaymath} Moreover, \begin{displaymath} \left|\int P_b \varepsilon y \chi_1\varphi_y \right| \lesssim \|y \chi_1\varphi_y \|_{L^2}\|\varepsilon\|_{L^2} \lesssim |s|^{-\frac56} \, , \end{displaymath} thanks to \eqref{eq:dZ1.2}, and \begin{displaymath} \left|\int yP\chi_b' \varepsilon y \chi_1\varphi_y \right| \lesssim \|\chi_b'\|_{L^2}\|\varepsilon\|_{L^2} \lesssim |s|^{-1} \, . \end{displaymath} Then, we deduce from \eqref{BS.3} that \begin{equation} \label{eq:dZ15} |z_{15}| \lesssim C^{\star}|s|^{-\frac {17}6+\frac{\theta}2} \lesssim |s|^{-2} . \end{equation} \noindent \emph{Estimate for $z_{16}$.} By using \eqref{lprofile.2}, \eqref{BS.1} and \eqref{eq:dZ1.1}, we get that \begin{equation} \label{eq:dZ16} |z_{16}| \lesssim \|y \chi_1\varphi_y \|_{L^{\infty}}\|\Psi_b\|_{L^2}\|\varepsilon\|_{L^2} \lesssim |s|^{-\frac83} \, . \end{equation} \noindent \emph{Estimate for $z_{17}$.} First, we compute $\partial_s(\chi_1)(s,y)=\frac23|s|^{-\frac53}y\chi'(y|s|^{-\frac23})$. Then, \begin{displaymath} \left| \int \varepsilon^2\varphi_y\partial_s(\chi_1) \right| \lesssim |s|^{-\frac53} \|\varepsilon\|_{L^2}^2 \lesssim |s|^{-\frac83} \, . \end{displaymath} Second, arguing as in \eqref{varphi_s}, we get that \begin{displaymath} \left| \int \varepsilon^2\partial_s(\varphi_y)\chi_1 \right| \lesssim |s|^{-1+\theta} \|\frac1{|y|^2}\chi_1\|_{L^{\infty}}\|\varepsilon\|_{L^2}^2 +|s|^{-1-\theta}\|\frac1{|y|}\chi_1\|_{L^{\infty}}\|\varepsilon\|_{L^2}^2 \lesssim |s|^{-\frac{10}3+\theta} \, . \end{displaymath} Thus, we deduce that \begin{equation} \label{eq:dZ17} |z_{17}| \lesssim |s|^{-\frac83} \, , \end{equation} for $|s|$ large enough since $\theta<\frac23$. Therefore, we conclude the proof of \eqref{eq:dZ1} gathering \eqref{eq:dZ11}-\eqref{eq:dZ17}. 
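Let us also record that the formula for $\partial_s(\chi_1)$ used in the estimate of $z_{17}$ is consistent with the convention $s<0$ on $\mathcal I^{\star}$: writing $\chi_1(s,y)=\chi(y|s|^{-\frac23})$ as in \eqref{chi1chi2}, one has $\partial_s\big(|s|^{-\frac23}\big)=\frac23|s|^{-\frac53}$, whence $\partial_s(\chi_1)(s,y)=\frac23|s|^{-\frac53}y\chi'\big(y|s|^{-\frac23}\big)$.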
\noindent \textit{Proof of \eqref{eq:dZ2}.} Observe from the definition of $Z_2$ that \[-\frac12 \frac{dZ_2}{ds}=\int \varepsilon_s \Lambda Q \chi_2 (\varphi-1) +\int \varepsilon\Lambda Q \chi_2 \varphi_s+\int \varepsilon \Lambda Q (\chi_2)_s (\varphi-1) \, ,\] which implies by using \eqref{modulation.4} that \[ -\frac12 \frac{dZ_2}{ds}=z_{21}+z_{22}+z_{23}+z_{24}+z_{25}+z_{26}+z_{27} \] where \begin{align*} z_{21}&=\int V_y \Lambda Q \chi_2 (\varphi-1), & z_{22}&=\frac{\lambda_s}{\lambda}\int \Lambda \varepsilon \Lambda Q \chi_2 (\varphi-1), \\ z_{23}&=\big(\frac{\lambda_s}{\lambda}+b\big)\int \Lambda Q_b \Lambda Q \chi_2 (\varphi-1), & z_{24}&=\big(\frac{x_s}{\lambda}-1\big)\int \big( Q_b+\varepsilon\big)_y \Lambda Q \chi_2 (\varphi-1),\\ z_{25}&=-\big(b_s+b^2\big)\int \frac{\partial Q_b}{\partial b} \Lambda Q \chi_2 (\varphi-1),& z_{26}&=\int \Psi_b \Lambda Q \chi_2 (\varphi-1),\\ z_{27}&=\int \varepsilon\Lambda Q \chi_2 \partial_s\varphi+\int \varepsilon \Lambda Q \partial_s(\chi_2) (\varphi-1)\, . \end{align*} First, we claim the following estimates \begin{equation} \label{eq:dZ2.1} \|\Lambda Q \chi_2\|_{L^2} \lesssim|s|^{-\frac{3\theta}2}, \quad \|(\Lambda Q \chi_2)_y\|_{L^2} \lesssim|s|^{-\frac{5\theta}2}, \quad \text{and} \quad \|(\Lambda Q \chi_2)_{yy}\|_{L^2} \lesssim |s|^{-\frac{7\theta}2} , \end{equation} which follow directly from the fact that $\Lambda Q \in \mathcal{Y}_2$ and from the definition of $\chi_2$ in \eqref{chi1chi2}. Recall that $B$ is a fixed universal constant chosen in \eqref{theta.def}. 
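As an illustration of the first bound in \eqref{eq:dZ2.1}: since $\Lambda Q \in \mathcal{Y}_2$ and $\chi_2$ is supported in $\{y<-\frac{B}4|s|^{\theta}\}$, \[ \|\Lambda Q \chi_2\|_{L^2}^2 \lesssim \int_{y<-\frac{B}4|s|^{\theta}} \frac{dy}{|y|^4} \lesssim |s|^{-3\theta} \, . \] The two other bounds are obtained in the same way, each derivative gaining a factor $|y|^{-1}\lesssim |s|^{-\theta}$ on this region.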
\noindent \emph{Estimate for $z_{21}$.} By using the definition of $V$ in \eqref{defV}, we rewrite $z_{21}$ as \begin{align*} z_{21} &= \int (D^1\varepsilon)_y \Lambda Q \chi_2 (\varphi-1) +\int \big((1-3Q^2)\varepsilon\big)_y \Lambda Q \chi_2 (\varphi-1) -3\int \big((Q_b^2-Q^2)\varepsilon\big)_y \Lambda Q \chi_2 (\varphi-1)\\ &\quad -3\int\big(Q_b\varepsilon^2\big)_y \Lambda Q \chi_2 (\varphi-1) -\int(\varepsilon^3)_y \Lambda Q \chi_2 (\varphi-1) \\ &=z_{211}+z_{212}+z_{213}+z_{214}+z_{215} \, . \end{align*} We estimate each term separately. First, we see integrating by parts that \begin{equation} \label{eq:Z21.1} z_{211}=-\int (D^1 \varepsilon) (\Lambda Q \chi_2 )_y (\varphi-1) -\int (D^1 \varepsilon) \Lambda Q \chi_2 \varphi_y \, . \end{equation} We deduce from \eqref{bootstrap.4} and \eqref{eq:dZ2.1} that \begin{displaymath} \begin{split} \left|\int D^1 \varepsilon (\Lambda Q \chi_2 )_y (\varphi-1)\right| &\lesssim \|D^{\frac12}\varepsilon\|_{L^2}\|D^{\frac12}\big((\Lambda Q \chi_2 )_y(\varphi-1)\big) \|_{L^2}\\ & \lesssim \mathcal{N}(\varepsilon)\|(\Lambda Q \chi_2 )_y \|_{L^2}^{\frac12} \|\big((\Lambda Q \chi_2 )_{y}(\varphi-1)\big)_y\|_{L^2}^{\frac12} \\ & \lesssim C^{\star}|s|^{-1-2\theta} \, . \end{split} \end{displaymath} In order to deal with the second term on the right-hand side of \eqref{eq:Z21.1}, we see integrating by parts again that \begin{displaymath} -\int D^1 \varepsilon \Lambda Q \chi_2 \varphi_y= \int \mathcal{H} \varepsilon (\Lambda Q \chi_2)_y \varphi_y +\int \mathcal{H} \varepsilon \Lambda Q \chi_2 \varphi_{yy} \, . \end{displaymath} From \eqref{BS.1} and \eqref{eq:dZ2.1}, we get \begin{displaymath} \left| \int \mathcal{H} \varepsilon (\Lambda Q \chi_2)_y \varphi_y\right| \lesssim \|\varepsilon\|_{L^2}\|(\Lambda Q \chi_2)_y\|_{L^2} \lesssim |s|^{-\frac12-\frac{5\theta}2} \lesssim |s|^{-2} \, , \end{displaymath} since $\theta>\frac35$. 
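Indeed, the comparison of exponents here is elementary: \[ -\frac12-\frac{5\theta}2 \le -2 \quad \Longleftrightarrow \quad \theta \ge \frac35 \, . \]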
Moreover, \begin{displaymath} -\int \mathcal{H} \varepsilon \Lambda Q \chi_2 \varphi_{yy} = \int \varepsilon \Lambda Q \chi_2 \mathcal{H}\varphi_{yy} +\int \varepsilon [ \mathcal{H} , \Lambda Q \chi_2] \varphi_{yy} \, . \end{displaymath} From the definition of $\varphi$ and Lemma \ref{HY}, we have $| \mathcal{H}\varphi_{yy}|\lesssim \varphi_y$. Thus, thanks to \eqref{eq:dZ2.1}, \begin{displaymath} \left| \int \varepsilon \Lambda Q \chi_2 \mathcal{H}\varphi_{yy}\right| \lesssim \|\Lambda Q \chi_2\|_{L^2} \left( \int \varepsilon^2 \varphi_y \right)^{\frac12} \lesssim |s|^{-\frac{3\theta}2}\left( \int \varepsilon^2 \varphi_y \right)^{\frac12} . \end{displaymath} We deduce from the Calder\'on commutator estimate (see \eqref{Calderon} with $l=0$ and $m=1$) that \begin{displaymath} \left| \int \varepsilon [ \mathcal{H} , \Lambda Q \chi_2] \varphi_{yy}\right| \lesssim \|\varepsilon\|_{L^2} \big\|[ \mathcal{H} , \Lambda Q \chi_2] \varphi_{yy} \big\|_{L^2} \lesssim \|\varepsilon\|_{L^2}\|(\Lambda Q \chi_2)_{y}\|_{L^{\infty}}\|\varphi_y\|_{L^2}\lesssim |s|^{-\frac12-3\theta} \, . \end{displaymath} Therefore, we conclude gathering those estimates that \begin{equation} \label{eq:dZ211} |z_{211}| \lesssim |s|^{-\frac12-\frac{5\theta}2}+|s|^{-\frac{3\theta}2}\left( \int \varepsilon^2 \varphi_y \right)^{\frac12} . \end{equation} Next, we see integrating by parts that \begin{displaymath} z_{212}=-6\int QQ'\varepsilon \Lambda Q \chi_2 (\varphi-1) -\int (1-3Q^2) \varepsilon (\Lambda Q \chi_2)_y (\varphi-1) -\int (1-3Q^2) \varepsilon \Lambda Q \chi_2 \varphi_y . 
\end{displaymath} Moreover, it follows from \eqref{BS.1} and \eqref{eq:dZ2.1} that \begin{displaymath} \left| \int QQ'\varepsilon \Lambda Q \chi_2 (\varphi-1) \right| \lesssim \|QQ'\Lambda Q \chi_2 \|_{L^2}\|\varepsilon\|_{L^2} \lesssim |s|^{-\frac12-\frac{13\theta}2} \, , \end{displaymath} \begin{displaymath} \left| \int (1-3Q^2) \varepsilon (\Lambda Q \chi_2)_y (\varphi-1)\right| \lesssim \|(\Lambda Q \chi_2)_y \|_{L^2}\|\varepsilon\|_{L^2} \lesssim |s|^{-\frac12-\frac{5\theta}2} \, , \end{displaymath} and \begin{displaymath} \left| \int (1-3Q^2) \varepsilon \Lambda Q \chi_2 \varphi_y\right| \lesssim \|\Lambda Q \chi_2 \|_{L^2} \left( \int \varepsilon^2 \varphi_y \right)^{\frac12}\lesssim |s|^{-\frac{3\theta}2}\left( \int \varepsilon^2 \varphi_y \right)^{\frac12} \, . \end{displaymath} Hence, we deduce that \begin{equation} \label{eq:dZ212} |z_{212}| \lesssim |s|^{-\frac12-\frac{5\theta}2}+|s|^{-\frac{3\theta}2}\left( \int \varepsilon^2 \varphi_y \right)^{\frac12} , \end{equation} for $|s|$ large enough. Integrating by parts again, we get \begin{displaymath} z_{213}=3b \int (2QP_b+bP_b^2)\varepsilon \big(\Lambda Q \chi_2 (\varphi-1)\big)_y \, , \end{displaymath} so that \begin{equation} \label{eq:dZ213} |z_{213}| \lesssim |b|\|\varepsilon\|_{L^2} \big\|(\frac1{|y|^2}+|b|)\big(\Lambda Q \chi_2 (\varphi-1)\big)_y \big\|_{L^2} \lesssim |s|^{-\frac52-\frac{3\theta}2} \, . 
\end{equation} Similarly, \begin{equation} \label{eq:dZ214} \begin{split} |z_{214}| &=3\left|\int (Q+bP_b)\varepsilon^2 \big(\Lambda Q \chi_2 (\varphi-1)\big)_y \right| \\ & \lesssim \|\varepsilon\|_{L^2}^2 \big\|(\frac1{|y|^2}+|b|) \big(\Lambda Q \chi_2 (\varphi-1)\big)_y \big\|_{L^{\infty}} \\ & \lesssim |s|^{-2-2\theta} \end{split} \end{equation} and \begin{displaymath} \begin{split} |z_{215}| &=\left|\int \varepsilon^3 \big(\Lambda Q \chi_2 (\varphi-1)\big)_y \right| \lesssim \|\varepsilon\|_{L^3}^3 \big\| \big(\Lambda Q \chi_2 (\varphi-1)\big)_y \big\|_{L^{\infty}} \lesssim |s|^{-2\theta}\|\varepsilon\|_{L^2}^2\|D^{\frac12}\varepsilon\|_{L^2} \end{split} \end{displaymath} thanks to \eqref{gnp} (with $p=3$), so that \begin{equation} \label{eq:dZ215} |z_{215}| \lesssim C^{\star}|s|^{-2-\frac{3\theta}2} \, . \end{equation} Therefore, we conclude gathering \eqref{eq:dZ211}-\eqref{eq:dZ215} that \begin{equation} \label{eq:dZ21} |z_{21}| \lesssim |s|^{-\frac12-\frac{5\theta}2}+|s|^{-\frac{3\theta}2}\left( \int \varepsilon^2 \varphi_y \right)^{\frac12} . \end{equation} \noindent \emph{Estimate for $z_{22}$.} We see integrating by parts that \begin{displaymath} z_{22}=-\frac12\frac{\lambda_s}{\lambda}\int \varepsilon \Lambda Q \chi_2 (\varphi-1) -\frac{\lambda_s}{\lambda}\int \varepsilon y(\Lambda Q \chi_2)_y (\varphi-1) -\frac{\lambda_s}{\lambda}\int \varepsilon y\Lambda Q \chi_2 \varphi_y \, . 
\end{displaymath} Hence, it follows from \eqref{BS.1}, \eqref{BS.2} and \eqref{eq:dZ2.1} that \begin{equation} \label{eq:dZ22} \begin{split} |z_{22}|& \lesssim C^{\star}|s|^{-1+\frac \theta 2}\left[\|\varepsilon\|_{L^2} \big(\|\Lambda Q \chi_2\|_{L^2}+\|y(\Lambda Q )_y\chi_2\|_{L^2} \big) + \left( \int \varepsilon^2 \varphi_y \right)^{\frac12}\|y\Lambda Q \chi_2\|_{L^2}\right] \\ & \lesssim C^{\star}|s|^{-\frac32-\theta}+C^{\star}|s|^{-1}\left( \int \varepsilon^2 \varphi_y \right)^{\frac12} \lesssim |s|^{-2} + |s|^{-\frac {3\theta}2} \left( \int \varepsilon^2 \varphi_y \right)^{\frac12}. \end{split} \end{equation} \noindent \emph{Estimate for $z_{23}$.} Note that from the definition of $Q_b$, \begin{displaymath} z_{23}=\big(\frac{\lambda_s}{\lambda}+b\big)\int (\Lambda Q)^2 \chi_2 (\varphi-1) +\big(\frac{\lambda_s}{\lambda}+b\big)b\int \Lambda P_b \Lambda Q \chi_2 (\varphi-1)\, . \end{displaymath} Moreover, it follows from \eqref{BS.1} and \eqref{BS.2} that \begin{displaymath} \left|\big(\frac{\lambda_s}{\lambda}+b\big)\int (\Lambda Q)^2 \chi_2 (\varphi-1) \right| \lesssim C^{\star}|s|^{-1+\frac \theta 2} \int_{y<-\frac{B}4|s|^{\theta}} \frac1{|y|^4} \lesssim C^{\star}|s|^{-1-\frac{5\theta}2} \end{displaymath} and \begin{displaymath} \begin{split} \left|\big(\frac{\lambda_s}{\lambda}+b\big)b\int \Lambda P_b \Lambda Q \chi_2 (\varphi-1) \right| &\lesssim |b|C^{\star}|s|^{-1+\frac \theta 2} \left( \int_{y<-\frac{B}4|s|^{\theta}} \frac1{|y|^2} +|b|\int_{-2|b|^{-1}<y<-|b|^{-1}}\frac1{|y|}\right) \\ & \lesssim C^{\star}|s|^{-2-\frac{\theta}2} \, . \end{split} \end{displaymath} Then, we deduce that \begin{equation} \label{eq:dZ23} |z_{23}| \lesssim C^{\star}|s|^{-2-\frac{\theta}2} \, , \end{equation} for $|s|$ large enough. 
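Observe that the first contribution above is absorbed by the second one: since $\theta>\frac35>\frac12$, \[ -1-\frac{5\theta}2 \le -2-\frac{\theta}2 \quad \Longleftrightarrow \quad \theta \ge \frac12 \, , \] so \eqref{eq:dZ23} indeed holds for $|s|$ large enough.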
\noindent \emph{Estimate for $z_{24}$.} We have after integrating by parts that \begin{displaymath} z_{24}=\big(\frac{x_s}{\lambda}-1\big)\left(\int \big( Q+bP_b\big)_y \Lambda Q \chi_2 (\varphi-1) -\int \varepsilon (\Lambda Q \chi_2)_y (\varphi-1)-\int \varepsilon \Lambda Q \chi_2 \varphi_y\right) \, . \end{displaymath} Moreover, we get from \eqref{BS.1} and \eqref{eq:dZ2.1} that \begin{displaymath} \left|\int \big( Q+bP_b\big)_y \Lambda Q \chi_2 (\varphi-1) \right|\lesssim \int_{y<-\frac{B}4|s|^{\theta}} \frac1{|y|^5}+|b| \int_{y<-\frac{B}4|s|^{\theta}} \frac1{|y|^4}+|b|^2 \int\frac{\chi'(|b|y)}{|y|^2} \lesssim |s|^{-4\theta} \, , \end{displaymath} \begin{displaymath} \left|\int \varepsilon (\Lambda Q \chi_2)_y (\varphi-1) \right|\lesssim \|\varepsilon\|_{L^2} \|(\Lambda Q \chi_2)_y\|_{L^2} \lesssim |s|^{-\frac12-\frac{5\theta}2} \end{displaymath} and \begin{displaymath} \left|\int \varepsilon \Lambda Q \chi_2 \varphi_y\right| \lesssim \|\Lambda Q \chi_2\|_{L^2} \left( \int \varepsilon^2 \varphi_y\right)^{\frac12} \lesssim |s|^{-\frac{3\theta}2}\left( \int \varepsilon^2 \varphi_y\right)^{\frac12} . \end{displaymath} Thus we conclude from \eqref{BS.2} that \begin{equation} \label{eq:dZ24} |z_{24}| \lesssim C^{\star}|s|^{-\frac32-2\theta}+C^{\star}|s|^{-1-\theta} \left( \int \varepsilon^2 \varphi_y\right)^{\frac12}\lesssim |s|^{-2} . \end{equation} \noindent \emph{Estimate for $z_{25}$.} We recall from the definition of $\frac{\partial Q_b}{\partial b}$ in \eqref{def.DQb} that \begin{displaymath} z_{25}=-\big(b_s+b^2\big)\int \big(P_b+yP\chi_b' \big) \Lambda Q \chi_2 (\varphi-1) \, . \end{displaymath} Moreover, \begin{displaymath} \left| \int P_b \Lambda Q \chi_2 (\varphi-1) \right| \lesssim \int_{y<-\frac{B}4|s|^{\theta}}\frac1{|y|^2} \lesssim |s|^{-\theta} \end{displaymath} and \begin{displaymath} \left|\int yP\chi_b' \Lambda Q \chi_2 (\varphi-1) \right| \lesssim |b|\int \frac{\chi'(|b|y)}{|y|} \lesssim |s|^{-1} \, . 
\end{displaymath} Then, it follows from \eqref{BS.3} that \begin{equation} \label{eq:dZ25} |z_{25}| \lesssim C^{\star}|s|^{-2-\frac{\theta}2}\lesssim |s|^{-2} \, . \end{equation} \noindent \emph{Estimate for $z_{26}$.} We deduce from \eqref{lprofile.2} and \eqref{eq:dZ2.1} that \begin{equation} \label{eq:dZ26} |z_{26}| \lesssim \|\Psi_b\|_{L^2} \|\Lambda Q \chi_2\|_{L^2} \lesssim |s|^{-\frac32-\frac{3\theta}2} \, . \end{equation} \noindent \emph{Estimate for $z_{27}$.} First, we compute $\partial_s(\chi_2)(s,y)=\frac4B \theta|s|^{-\theta-1}y\chi'(\frac4By|s|^{-\theta})$. Then, \begin{displaymath} \left| \int \varepsilon \Lambda Q \partial_s(\chi_2) (\varphi-1) \right| \lesssim |s|^{-1-\theta} \|\varepsilon\|_{L^2} \left(\int_{y<-\frac{B}4|s|^{\theta}} \frac1{|y|^2} \right)^{\frac12} \lesssim |s|^{-\frac32-\frac{3\theta}2}\, . \end{displaymath} Second, using \eqref{varphi_s} and \eqref{eq:dZ2.1}, we obtain \begin{displaymath} \begin{split} \left| \int \varepsilon\Lambda Q \chi_2 \partial_s\varphi \right| &\lesssim |s|^{-1+\theta}\|\Lambda Q\chi_2\|_{L^2} \left(\int \varepsilon^2\varphi_y\right)^{\frac12}+|s|^{-1-\theta} \|\Lambda Q\chi_2\|_{L^2}\|\varepsilon\|_{L^2} \\ & \lesssim |s|^{-1-\frac{\theta}{2}}\left(\int \varepsilon^2\varphi_y\right)^{\frac12} +|s|^{-\frac32-\frac{5\theta}{2}} \, . \end{split} \end{displaymath} Hence, we deduce that \begin{equation} \label{eq:dZ27} |z_{27}| \lesssim |s|^{-\frac32-\frac{3\theta}2}+|s|^{-1-\frac{\theta}{2}}\left(\int \varepsilon^2\varphi_y\right)^{\frac12} \, . \end{equation} Therefore, we conclude the proof of \eqref{eq:dZ2} gathering \eqref{eq:dZ21}-\eqref{eq:dZ27}. \end{proof} \subsection{Coercivity lemma} We state and prove two consequences of \eqref{coercivity} and \eqref{BS.4}. 
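Before stating the lemma, we recall schematically the structure of \eqref{coercivity} (the precise version is the one referenced; only its general form matters here): there exists $\kappa>0$ such that for all $v\in H^{\frac 12}$,
\begin{displaymath}
\int \left[ \big|D^{\frac 12} v\big|^{2} + v^2 - 3 Q^2 v^2 \right] \geq \kappa \|v\|_{H^{\frac 12}}^2 - \frac 1{\kappa} \left[ (v,Q)^2 + (v,Q')^2 + (v,\Lambda Q)^2 \right] ,
\end{displaymath}
so that coercivity holds up to quadratic errors in the scalar products with $Q$, $Q'$ and $\Lambda Q$, which are estimated in the proof below.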
\begin{lemma}\label{coer1} There exists $\kappa>0$ such that \begin{equation}\label{coer2} \int \left[\big|D^{\frac 12} (\varepsilon \rho)\big|^{2} + \varepsilon^2 \left(\frac {\rho^2+\varphi}2\right) -3 Q^2 (\varepsilon \rho)^2 \right] \geq \kappa \int \left[\big|D^{\frac 12} (\varepsilon \rho)\big|^{2} + \varepsilon^2\rho^2\right] +\mathcal O(|s|^{-2 +\theta}) , \end{equation} where $\rho$ is defined in \eqref{def.rho}, and \begin{equation}\label{eq:cF} F \geq \kappa \mathcal N(\varepsilon)^2 + \mathcal O(|s|^{-2+\theta}). \end{equation} \end{lemma} \begin{proof} \emph{Proof of \eqref{coer2}}. It is clear from the definitions of $\varphi$ and $\rho$ (see~\eqref{def.varphi} and~\eqref{def.rho}) and $\theta>\frac 35$ that $\rho^2\lesssim \varphi$. More precisely, for $y>-2|s|^{\frac 35} $, we have \[ 1- \varphi(s,y) \leq 1- \varphi(s,-2|s|^{\frac 35} ) =\frac {\phi(|s|^\theta)-\phi(-\frac 2B |s|^{\frac 35} +|s|^\theta)}{\phi(|s|^\theta)} \leq |s|^{-\frac 35}, \] and thus \begin{equation}\label{bof} \varphi(s)\geq \rho^2 - |s|^{-\frac 35}. \end{equation} From the definition of $\rho$, we have \begin{align*} \|(1-\rho) Q\|_{L^2}^2 & \lesssim \|(1-\rho^2) Q\|_{L^2}^2\lesssim \|Q\|_{L^2(y<-|s|^{\frac 35})}^2 + \int \frac 1{(1+y^2)^2 }\frac {y^2 |s|^{-1}}{(1+y^2 |s|^{-1})}\\ & \lesssim |s|^{-\frac 95} + |s|^{-1}\int \frac 1{(1+y^2) }\frac {1}{(1+y^2 |s|^{-1})}\lesssim |s|^{-1}. \end{align*} Thus, by \eqref{BS.1}, \eqref{BS.4}, \begin{align} |(\varepsilon \rho, Q)| &= |(\varepsilon , Q) - (\varepsilon,(1-\rho) Q)| \lesssim |(\varepsilon,Q)|+ \|(1-\rho) Q\|_{L^2}\|\varepsilon\|_{L^2} \nonumber \\ & \lesssim (C^\star)^2 |s|^{-2+\theta} + |s|^{-\frac 12} \|\varepsilon\|_{L^2} \lesssim |s|^{-1}. \label{coer.100} \end{align} By a similar argument, using \eqref{BS.4} we obtain \begin{equation} \label{coer.101} |(\varepsilon \rho, Q')|+|(\varepsilon \rho, \Lambda Q)|\lesssim |s|^{-1} \, .
\end{equation} Therefore, applying \eqref{coercivity} to $ \varepsilon \rho $, we find \begin{align*} \int \left[\big|D^{\frac 12} (\varepsilon \rho)\big|^{2} + \right. & \left. \varepsilon^2 \left(\frac {\rho^2+\varphi}2\right) -3 Q^2 (\varepsilon \rho)^2 \right] \\ &\geq \int \left[\big|D^{\frac 12} (\varepsilon \rho)\big|^{2} + \varepsilon^2 \rho^2-3 Q^2 (\varepsilon \rho)^2 \right] +\mathcal O\left(|s|^{-\frac 35} \| \varepsilon\|_{L^2}^2\right)\\ & \geq \ \kappa \int\left[\big|D^{\frac 12} (\varepsilon \rho)\big|^{2} + \varepsilon^2 \rho^2 \right] +\mathcal O(|s|^{-\frac 85}) , \end{align*} and the result follows since $\frac 85 > 2 -\theta$. \noindent \emph{Proof of \eqref{eq:cF}.} First, we observe that \[\left| \left(\left( Q_{b}+\varepsilon\right)^4-Q_{b}^4- 4 Q_{b}^3 \varepsilon\right) - 6 Q^2 \varepsilon^2\right| \\ \lesssim |s|^{-1} | \varepsilon|^2 + | \varepsilon|^3 + | \varepsilon|^4, \] and thus, using \eqref{gnp} with $p=4$ (which implies $\| \varepsilon\|_{L^4}^2\lesssim \| \varepsilon\|_{L^2}\| \varepsilon\|_{\dot H^{\frac 12}}\lesssim C^\star|s|^{-\frac 32 + \frac \theta 2}$), \begin{align} \left|\int\left(\left(Q_{b}+\varepsilon\right)^4-Q_{b}^4-4Q_{b}^3\varepsilon\right)-6Q^2\varepsilon^2\right| &\lesssim|s|^{-1}\|\varepsilon\|_{L^2}^2 + \|\varepsilon\|_{L^2}\|\varepsilon\|_{L^4}^2 + \|\varepsilon\|_{L^4}^4 \nonumber \\ &\lesssim|s|^{-2}+C^\star |s|^{-2+\frac \theta 2} +(C^\star)^2 |s|^{-3 + \theta }\lesssim |s|^{-2+ \theta}. \label{paraF} \end{align} Next, using \eqref{e:g003}, we have \[ \int |D^{\frac 12} \varepsilon|^2 \geq \int |D^{\frac 12} \varepsilon|^2 \rho^2 \geq \int |D^{\frac 12} ( \varepsilon \rho)|^2 + \mathcal O(|s|^{-2 + \theta}) +\mathcal{O}(|s|^{-\frac1{20}}\|D^{\frac12}(\varepsilon\rho)\|_{L^2}^2). 
\] Thus, using also \eqref{bof}, \eqref{bof2} and \eqref{paraF}, we obtain \begin{align*} F & \geq \frac \kappa 4 \mathcal N(\varepsilon)^2 + \int \left[\left(1-\frac \kappa 4\right) |D^{\frac 12} ( \varepsilon \rho)|^2 + \left(1-\frac \kappa 4\right) (\varepsilon \rho)^2 - 3 Q^2 (\varepsilon \rho)^2 \right] \\ & \quad +\mathcal O(|s|^{-2 + \theta})+\mathcal{O}(|s|^{-\frac1{20}}\|D^{\frac12}(\varepsilon\rho)\|_{L^2}^2)\\ & \geq \frac \kappa 8 \mathcal N(\varepsilon)^2 + \mathcal O(|s|^{-2 + \theta}), \end{align*} applying \eqref{coercivity} on $ \varepsilon \rho$ as before. For future reference, we claim the following bound \begin{equation}\label{eq:bF} |F|\lesssim \mathcal N(\varepsilon)^2 + |s|^{-2+\theta}. \end{equation} Indeed, \eqref{eq:bF} is a direct consequence of \eqref{paraF} and the estimate $\int Q^2\varepsilon^2 \lesssim \int \varepsilon^2 \varphi \lesssim \mathcal{N}(\varepsilon)^2$. \end{proof} \subsection{Closing estimates on $\varepsilon$} Let \begin{equation}\label{defH} H =\left(1-\frac {K}{p_{0}}\right) \frac F\lambda + G + \frac 1{2p_{0}} K^2 -\frac 1{p_{0}} \frac {KZ}\lambda. \end{equation} \begin{proposition}\label{FJH} For $|S_0|$ large enough, possibly depending on $C^{\star}$ and for all $s\in \mathcal I^\star$, the following hold \begin{itemize} \item[(i)] Bound. \begin{equation}\label{eq:bH} |H(s)| \lesssim \frac {\mathcal N(\varepsilon)^2} {\lambda} +|s|^{-1+\theta}. \end{equation} \item[(ii)] Coercivity. There exists $\kappa>0$ such that \begin{equation}\label{eq:cH} H(s) \geq \kappa \frac {\mathcal N(\varepsilon)^2} {\lambda} +\mathcal O(|s|^{-1+\theta }). \end{equation} \item[(iii)] Estimate of the time derivatives. \begin{equation}\label{eq:H} \frac {dH}{ds} \lesssim C^\star |s|^{-2+\theta }. 
\end{equation} \end{itemize} \end{proposition} \begin{proof} \emph{Proof of \eqref{eq:bH}.} Recall from \eqref{bg:G}, \eqref{bg:K} and \eqref{eq:Z} that \begin{equation}\label{autres} |G| + K^2 + \frac {|KZ|}{\lambda} \lesssim |s|^{-\frac 12} +|s|^{-2+2\theta} \lesssim |s|^{-\frac 12}. \end{equation} By \eqref{autres} and \eqref{eq:bF}, we obtain \eqref{eq:bH}. \noindent \emph{Proof of \eqref{eq:cH}.} By \eqref{autres} and \eqref{eq:cF}, we observe that \[ H(s) \geq \kappa \frac {\mathcal N(\varepsilon)^2} {\lambda} +\mathcal O(|s|^{-1+\theta}) +\mathcal O(|s|^{-\frac 12}), \] which implies \eqref{eq:cH}. \noindent \emph{Proof of \eqref{eq:H}.} First, from \eqref{eq:F} and~\eqref{eq:G}, we have \begin{equation}\label{eq:F+G}\begin{aligned} \frac {d}{ds}\left( \frac F\lambda + G\right)+\frac 1{4\lambda} \int \varepsilon^2 \varphi_y &\le -2 \int \big|D^{\frac 12} (\varepsilon \rho)\big|^{2} -\int (\varepsilon \rho)^2 -\int \varepsilon^2 \varphi + 6\int Q^2 (\varepsilon \rho)^2 \\ &\quad -\left(\frac{\lambda_s}{\lambda} + b\right) \left(\frac{F}\lambda { -} K + \frac Z\lambda\right) \\ &\quad + \mathcal O(C^\star|s|^{-2+\theta}) + \mathcal O(|s|^{-\frac 1{20}} \|D^{\frac 12}(\varepsilon \rho)\|_{L^2}^2)\,. \end{aligned} \end{equation} Second, from \eqref{eq:F} and \eqref{BS.2}, \eqref{bg:K}, \eqref{eq:Z}, we have \begin{equation}\label{eq:FFF} \left| \frac {d}{ds}\left( \frac F\lambda\right) \right| \lesssim \frac 1{\lambda} \int \varepsilon^2 \varphi_y + (C^\star)^2|s|^{-2+\frac {3\theta}2}\,. \end{equation} From~\eqref{eq:K} and then \eqref{bg:K}, \eqref{eq:FFF}, \eqref{eq:bF}, \begin{align} \left| \frac {d}{ds}\right. 
& \left.\left( \frac {KF}\lambda\right) + p_{0}\left(\frac{\lambda_s}{\lambda} + b\right) \frac{F}\lambda\right| \\ & \lesssim \left| K \frac {d}{ds}\left( \frac {F}\lambda\right)\right| + \left(\left( \int \varepsilon^2 \varphi_y \right)^{\frac12} + |s|^{ -2+\frac{3\theta}2}\right) \frac F\lambda \nonumber \\ &\lesssim C^\star |s|^{-1+\theta} \frac 1{\lambda} \int \varepsilon^2 \varphi_y + (C^\star)^3|s|^{-3+\frac {5\theta}2} + \left(\left( \int \varepsilon^2 \varphi_y \right)^{\frac12} + (C^\star)^2|s|^{ -2+\frac{3\theta}2}\right) (C^\star)^2 |s|^{-1+\theta}\nonumber \\ & \leq \frac {p_{0}}{20 \lambda} \int \varepsilon^2 \varphi_y + \mathcal O(|s|^{-2+\theta}). \label{eq:KF} \end{align} Therefore, combining \eqref{eq:F+G} and \eqref{eq:KF}, \begin{align} \frac {d}{ds}\left( \Big(1-\frac {K}{p_{0}}\Big) \frac F\lambda + G\right)+\frac 1{5\lambda} \int \varepsilon^2 \varphi_y &\le -2 \int \big|D^{\frac 12} (\varepsilon \rho)\big|^{2} -\int (\varepsilon \rho)^2 -\int \varepsilon^2 \varphi \nonumber \\ & \quad -\left(\frac{\lambda_s}{\lambda} + b\right) \left( - K + \frac Z\lambda\right) \nonumber \\ & \quad + \mathcal O(C^\star|s|^{-2+\theta}) + \mathcal O(|s|^{-\frac 1{20}} \|D^{\frac 12}(\varepsilon \rho)\|_{L^2}^2)\,. \label{eq:F+KF+G} \end{align} From~\eqref{eq:K} and then \eqref{bg:K}, \begin{equation}\label{K2}\begin{aligned} \frac 1{2p_{0}} \frac d{ds} ( K^2 ) & = \frac {KK_{s}}{p_{0}} =- \left(\frac{\lambda_s}{\lambda} + b\right) K +\mathcal O\left(K \left( \int \varepsilon^2 \varphi_y \right)^{\frac12} \right)+\mathcal O\left(K(C^\star)^2 |s|^{ -2+\frac{3\theta}2} \right) \\ & \leq- \left(\frac{\lambda_s}{\lambda} + b\right) K + \frac 1{20\lambda} \int \varepsilon^2 \varphi_y + \mathcal O\left(\lambda K^2 \right)+\mathcal O\left(K(C^\star)^2 |s|^{ -2+\frac{3\theta}2} \right) \\ & \leq- \left(\frac{\lambda_s}{\lambda} + b\right) K + \frac 1{20\lambda} \int \varepsilon^2 \varphi_y +\mathcal O\left(|s|^{-2+\theta} \right). 
\end{aligned}\end{equation} Moreover, from~\eqref{eq:K} and~\eqref{eq:dZ1},~\eqref{eq:dZ2}, \begin{equation}\label{KZ}\begin{aligned} -\frac 1{p_{0}}\frac d{ds} \left( \frac {KZ}\lambda \right) & = - \frac 1{p_{0}}\frac {K_{s}Z}\lambda - \frac 1{p_{0}}\frac {KZ_{s}}\lambda + \frac 1{p_{0}} \frac{\lambda_{s}}{\lambda} \frac {KZ}{\lambda} \\ & = \left(\frac{\lambda_s}{\lambda} + b\right) \frac Z\lambda + \mathcal O\left( \left( \int \varepsilon^2 \varphi_y \right)^{\frac12}\frac {Z}{\lambda} \right) + \mathcal O\left( (C^\star)^2|s|^{-2+\frac {3\theta} 2} \frac {Z}{\lambda}\right) \\& + \mathcal O\left(\frac {K}{\lambda} |s|^{-2}\right) + \mathcal O\left(\frac {K}{\lambda} |s|^{-\frac {3\theta}2}\left( \int \varepsilon^2 \varphi_y \right)^{\frac12} \right)+ \mathcal O\left(\frac{\lambda_s}{\lambda} \frac {KZ}{\lambda}\right). \end{aligned}\end{equation} Thus, using \eqref{BS.2}, and then \eqref{bg:K}, \eqref{eq:Z}, \begin{equation}\label{KZbis}\begin{aligned} -\frac 1{p_{0}}\frac d{ds} \left( \frac {KZ}\lambda \right) & \leq \left(\frac{\lambda_s}{\lambda} + b\right) \frac Z\lambda + \frac 1{20\lambda} \int \varepsilon^2 \varphi_y + \mathcal O\left( \frac {Z^2}{\lambda} \right) + \mathcal O\left(|s|^{-3\theta}\frac {K^2}{\lambda} \right) \\& \quad+ \mathcal O\left( (C^\star)^2|s|^{-2+\frac {3\theta} 2} \frac {Z}{\lambda} \right) + \mathcal O\left(\frac {K}{\lambda} |s|^{-2}\right) + \mathcal O\left(C^\star|s|^{-1+\frac \theta 2} \frac {KZ}{\lambda}\right)\\ & \leq \left(\frac{\lambda_s}{\lambda} + b\right) \frac Z\lambda + \frac 1{20\lambda} \int \varepsilon^2 \varphi_y + \mathcal O(|s|^{-2+\theta }). 
\end{aligned}\end{equation} Thus, recalling the definition of $H$ in \eqref{defH}, we see that \begin{equation}\label{eq:F+G2}\begin{aligned} \frac {d}{ds}H+\frac 1{10\lambda} \int \varepsilon^2 \varphi_y &\le -2 \int \big|D^{\frac 12} (\varepsilon \rho)\big|^{2} -\int (\varepsilon \rho)^2 -\int \varepsilon^2 \varphi + 6\int Q^2 (\varepsilon \rho)^2 \\ &\quad + \mathcal O(C^\star|s|^{-2+\theta}) + \mathcal O(|s|^{-\frac 1{20}} \|D^{\frac 12}(\varepsilon \rho)\|_{L^2}^2)\,. \end{aligned} \end{equation} Using \eqref{coer2}, we obtain \begin{equation}\label{pourH} \frac {d}{ds}H+\frac 1{10\lambda} \int \varepsilon^2 \varphi_y + \frac \kappa 2 \int \left[\big(D^{\frac 12} (\varepsilon \rho)\big)^{2} + \varepsilon^2\rho^2\right] \lesssim C^\star|s|^{-2 +\theta}, \end{equation} which implies \eqref{eq:H}. \end{proof} Observe by \eqref{epsilton.initial}, \eqref{bootstrap.2} and \eqref{eq:bH} that \begin{equation}\label{bHn} |H(S_{n})|\lesssim |S_{n}|^{-1+\theta}. \end{equation} Let $s\in \mathcal I^\star$. Integrating \eqref{eq:H} on $[S_{n},s]$, we obtain \[ H(s) - H(S_{n}) \lesssim C^\star |s|^{-1+\theta}. \] Therefore, by \eqref{bHn}, \eqref{eq:cH} and $\lambda(s) \sim |s|^{-1}$, we obtain \[ \mathcal N(\varepsilon)^2 \leq C_{2} C^\star |s|^{-2+\theta}, \] where the constant $C_{2}$ above depends neither on $C^\star$ nor on $n$. Now, we fix the positive constant $C^{\star}$ so that $C^\star =4C_{2}$. Then, we deduce \begin{equation}\label{closing.0} \mathcal N(\varepsilon) \le \frac{C^{\star}}2|s|^{-1+\frac \theta 2} \, , \end{equation} which strictly improves \eqref{bootstrap.4}. \section{Parameter estimates}\label{S5} In this section, we finish the proof of Proposition~\ref{bootstrapprop} by closing the estimates for the parameters $\mu_n(s)$, $\lambda_n(s),b_n(s)$ and $x_n(s)$. Indeed, we strictly improve the estimates \eqref{bootstrap.1}, \eqref{bootstrap.2} and \eqref{bootstrap.3}.
We consider an arbitrary $n \ge n_{0}$. As in Sect.~\ref{S4}, for the sake of simplicity, we will omit the subscript $n$ in this section and write $\mu$, $\lambda$, $b$, $x$ and $\varepsilon$ for $\mu_n$, $\lambda_n$, $b_n$, $x_n$ and $\varepsilon_n$. Recall that the constant $C^\star$ has been fixed in Sect.~\ref{S4}; thus, from now on, we no longer mention the dependence on $C^\star$. \subsection{Refined scaling control} Recall that we set \begin{equation}\label{defJ} \rho(y)=\int_{-\infty}^y \Lambda Q(y') dy', \quad J(s)= \int \varepsilon(s,y) \rho(y) \chi(-y|s|^{-\frac 23}) dy, \end{equation} and \begin{equation}\label{defl0} \mu(s) =|1-J(s)|^{\frac 1{p_{0}}} \lambda(s). \end{equation} Such a functional $J(s)$ was introduced in \cite{MaMejmpa,MaMe} in a similar context (see also \cite{BoSoSt}). It reflects the fact that the orthogonality $( \varepsilon, \Lambda Q)$ is not especially useful for the estimate of $\lambda_{s}$. Indeed, from \eqref{BS.1} and \eqref{BS.2}, the best information one can get is $|\frac{\lambda_s}{\lambda}|\lesssim C^\star |s|^{-1+\frac \theta 2}$. Thus, one needs to introduce a functional related to the cancellation $(\Lambda Q,Q)=0$. Unlike for NLS-type equations, where such a cancellation can be used easily (see e.g. \cite{RaSz}), the fact that $\rho\not \in L^2$ creates a serious difficulty and imposes the use of a cut-off term in the definition of $J$. We claim the following result. \begin{lemma} For $|S_0|$ large enough and for all $s\in \mathcal I^\star$, \begin{equation}\label{bJ} |J(s)|\lesssim |s|^{-\frac 23 + \frac \theta2}, \end{equation} \begin{equation}\label{dsJ} \left|\frac{dJ}{ds} - p_{0}\left(\frac{\lambda_s}{\lambda}+b\right) \right| \lesssim |s|^{-\frac 43 + \frac \theta 2}, \end{equation} \begin{equation}\label{dsl0} \left|\frac{\mu_s}{\mu}+b\right| \lesssim |s|^{-\frac 43 + \frac \theta 2}.
\end{equation} \end{lemma} \begin{remark} Since $\theta<\frac 23$, we see that from \eqref{dsl0}, $\left|\frac{\mu_s}{\mu}+b\right| \ll |s|^{-1}$, while $|b|\sim |s|^{-1}$, thus this estimate is much more precise than $|\frac{\lambda_s}{\lambda} |\lesssim C^\star |s|^{-1+\frac \theta 2}$. \end{remark} \begin{proof} First, we see by using the decay of $\rho$ and \eqref{bootstrap.4} that \begin{displaymath} |J(s)|\lesssim \int_{y<0} \frac{|\varepsilon(s,y)|}{1+|y|} dy + \int_{0<y<2 |s|^{\frac 23}} |\varepsilon(s,y)| dy \lesssim \|\varepsilon(s)\|_{L^2}+ |s|^{\frac 13} \mathcal N(\varepsilon)\lesssim |s|^{-\frac 23 + \frac \theta2} \, , \end{displaymath} which proves \eqref{bJ}. Now, we use \eqref{modulation.4} to compute $\frac{dJ}{ds}$, \begin{align*} \frac{dJ}{ds} & = \int \varepsilon_s \rho \chi(-y|s|^{-\frac 23}) - \frac 23|s|^{-1} \int y |s|^{-\frac 23} \chi'(-y|s|^{-\frac 23}) \rho \varepsilon \\ & = j_1 + j_2+ j_3+j_4+j_5+j_6+j_7\end{align*} where \begin{equation*} \begin{aligned} & j_1=\int V_y \rho \chi(-y|s|^{-\frac 23}) , & j_2& = \frac{\lambda_s}{\lambda} \int\Lambda \varepsilon \rho \chi(-y|s|^{-\frac 23}),\\ & j_3 = (\frac{\lambda_s}{\lambda}+b) \int \Lambda Q_b \rho \chi(-y|s|^{-\frac 23}), & j_4 &= (\frac{x_s}{\lambda}-1)\int (Q_b+\varepsilon)_y \rho \chi(-y|s|^{-\frac 23}) ,\\ & j_5 = -(b_s+b^2)\int \frac{\partial Q_b}{\partial b} \rho \chi(-y|s|^{-\frac 23}), & j_6 &= \int \Psi_b \rho \chi(-y|s|^{-\frac 23}),\\ & j_7 = -\frac 23|s|^{-1} \int y|s|^{-\frac 23} \chi'(-y|s|^{-\frac 23}) \varepsilon \rho . \end{aligned} \end{equation*} \noindent \emph{Estimate for $j_{1}$.} \[ \begin{split} j_1&=-\int \mathcal{L}\varepsilon \big(\rho \chi(-y|s|^{-\frac 23}) \big)_y +3\int (Q_b^2-Q^2) \varepsilon \big(\rho \chi(-y|s|^{-\frac 23}) \big)_y +\int \big(3Q_b\varepsilon^2+\varepsilon^3\big)\big(\rho \chi(-y|s|^{-\frac 23}) \big)_y \\ &=j_{1,1}+j_{1,2}+j_{1,3} \, . 
\end{split}\] First, using $\rho'=\Lambda Q$ and $\mathcal L\Lambda Q=-Q$, \begin{align*} j_{1,1}& = - \int \varepsilon \mathcal{L} (\Lambda Q \chi(-y|s|^{-\frac 23})) + |s|^{-\frac23}\int \varepsilon \mathcal{L}(\rho \chi'(-y|s|^{-\frac 23}))\\ & = \int \varepsilon Q+\int \varepsilon \mathcal{L} (\Lambda Q (1-\chi(-y|s|^{-\frac 23}))) + |s|^{-\frac 23} \int \varepsilon \mathcal{L} (\rho \chi'(-y|s|^{-\frac 23})) \, . \end{align*} Note from \eqref{BS.1} that \[ \left| \int \varepsilon D^1 (\Lambda Q (1-\chi(-y|s|^{-\frac 23})) \right| \lesssim \|\varepsilon\|_{L^2} \|(\Lambda Q (1-\chi(-y|s|^{-\frac 23}))_y\|_{L^2} \lesssim |s|^{-\frac {13}6} \, , \] \begin{align*} \left| \int \varepsilon \Lambda Q (1-3Q^2) (1-\chi(-y|s|^{-\frac 23}) \right| &\lesssim \|\varepsilon\|_{L^2} \| \Lambda Q (1-3Q^2)(1-\chi(-y|s|^{-\frac 23}))\|_{L^2}\\ &\lesssim \|\varepsilon\|_{L^2} |s|^{-1}\lesssim |s|^{-\frac 32} \, , \end{align*} \begin{align*} \left|\int \varepsilon D^1(\rho \chi'(-y|s|^{-\frac 23}))\right| & \lesssim \| \varepsilon \|_{L^2} \|(\rho \chi'(-y|s|^{-\frac 23}))_{y}\|_{L^2} \lesssim \| \varepsilon \|_{L^2} |s|^{-\frac 13}\lesssim |s|^{-\frac 56} \, , \end{align*} and by \eqref{bootstrap.4} \begin{align*} \left|\int \varepsilon (1-3Q^2)(\rho \chi'(-y|s|^{-\frac 23}))\right| & \lesssim \mathcal N( \varepsilon)\|\chi'(-y|s|^{-\frac 23}) \|_{L^2} \lesssim \mathcal N( \varepsilon) |s|^{\frac 13}\lesssim |s|^{-\frac 23 + \frac \theta 2} \, . \end{align*} Thus, using also $|\int \varepsilon Q|\lesssim |s|^{-2+\theta} \lesssim |s|^{-\frac 43 + \frac \theta 2}$ (see \eqref{BS.4}), we deduce that \[ |j_{1,1}|\lesssim |s|^{-\frac 43+ \frac \theta 2} \, . 
\] Next, \begin{align*} |j_{1,2}|& \lesssim |b| \|\varepsilon\|_{L^2}\|\big(\rho \chi(-y|s|^{-\frac 23}) \big)_y\|_{L^2} \lesssim |s|^{-\frac 32} \, , \end{align*} and using \eqref{gnp} \begin{equation*} |j_{1,3}|\lesssim \big(\mathcal N(\varepsilon)^2 + |b| \|\varepsilon\|_{L^2}^2+ \| \varepsilon\|_{L^2}^2 \|D^{\frac12} \varepsilon\|_{L^2}\big) \|(\rho \chi(-y|s|^{-\frac 23}))_{y} \big\|_{L^\infty} \lesssim |s|^{-2+\theta} \lesssim |s|^{-\frac 43} \, . \end{equation*} Hence, we deduce combining those estimates that \begin{equation} \label{eq:j1} |j_{1}| \lesssim |s|^{-\frac43+\frac{\theta}2} \, . \end{equation} \noindent \emph{Estimate for $j_{2}$.} \[ j_2 = -\frac{\lambda_s}{\lambda} \int \varepsilon \Lambda (\rho \chi(-y|s|^{-\frac 23})) =- \frac 12 \frac{\lambda_s}{\lambda} J - \frac{\lambda_s}{\lambda} \int \varepsilon y (\rho \chi(-y|s|^{-\frac 23}))_y. \] Note that \begin{align*} \left| \int \varepsilon y (\rho \chi(-y|s|^{-\frac 23}))_y\right| &\lesssim \left| \int \varepsilon y \Lambda Q \chi(-y|s|^{-\frac 23})\right| +|s|^{-\frac 23} \left| \int \varepsilon y \rho \chi'(-y|s|^{-\frac 23}) \right| \\ & \lesssim \|\varepsilon\|_{L^2}+|s|^{\frac 13} \mathcal N( \varepsilon)\lesssim |s|^{-\frac 23 + \frac \theta 2}. \end{align*} Thus, by \eqref{bJ} and then \eqref{BS.2}, \begin{equation} \label{eq:j2} \left|j_2 \right|\lesssim\left(\left|\frac{\lambda_s}{\lambda}+ b\right| +|b|\right) |s|^{-\frac 23 + \frac \theta 2} \lesssim |s|^{-\frac 53 + \theta}\lesssim |s|^{-\frac 43+ \frac \theta 2} \, . \end{equation} \noindent \emph{Estimate for $j_{3}$.} For the term $j_3$, we first note from \eqref{nlprofile.2} that \[ \int \rho \Lambda Q =\frac 12 \Big(\lim_{y \to +\infty} \rho(y)\Big)^2 = \frac 18\left( \int Q\right)^2=p_{0} \, . 
\] By the decay properties of $Q$, $P$ and $\rho$, \begin{equation}\label{avr} \left|\int (\Lambda P_b ) \rho \chi(-y |s|^{-\frac 23}) \right| \lesssim \int_{-2 |b|^{-1}<y<|s|^{\frac 23}} \frac {dy}{1+|y|} \lesssim |\ln|b||+|\ln|s|| \lesssim |\ln|s||\,. \end{equation} Thus, \begin{align*} \int (\Lambda Q_b) \rho \chi(-y|s|^{-\frac 23}) & = \int \rho (\Lambda Q) \chi(-y|s|^{-\frac 23}) + b \int (\Lambda P_b) \rho \chi(-y|s|^{-\frac 23}) \\& = \int \rho \Lambda Q +\int \rho \Lambda Q \big(\chi(-y|s|^{-\frac 23})-1\big) + \mathcal O(|s|^{-1}|\ln |s||) \\& = p_{0} + \mathcal O(|s|^{-\frac 23}) \, . \end{align*} Therefore, by~\eqref{BS.2}, \begin{equation} \label{eq:j3} \left| j_{3} - p_{0}\left(\frac{\lambda_s}{\lambda}+ b\right) \right| \lesssim |s|^{-\frac 53 + \frac \theta2} \lesssim |s|^{-\frac 43+ \frac \theta 2}. \end{equation} \noindent \emph{Estimate for $j_{4}$.} Similarly, since $\int \rho Q'=-\int \rho' Q = -\int Q \Lambda Q =0$, we have \[ \int Q_b' \rho \chi(-y|s|^{-\frac 23}) = \mathcal O(|s|^{-\frac 23}). \] Moreover, \[ \int \varepsilon_y \rho \chi(-y|s|^{-\frac 23}) = -\int \varepsilon \rho' \chi(-y|s|^{-\frac 23}) +|s|^{-\frac 23}\int \varepsilon \rho \chi'(-y|s|^{-\frac 23}), \] and so \[ \left| \int \varepsilon_y \rho \chi(-y|s|^{-\frac 23}) \right| \lesssim \| \varepsilon\|_{L^2}\lesssim |s|^{-\frac 12}. \] From \eqref{BS.2}, we obtain \begin{equation} \label{eq:j4} |j_{4}|\lesssim |s|^{-1+\frac \theta 2} |s|^{-\frac 12} \lesssim |s|^{-\frac 32 + \frac\theta 2} \, . \end{equation} \noindent \emph{Estimate for $j_{5}$.} By \eqref{def.DQb} and the decay properties of $P$ and $\rho$ (as in \eqref{avr}), one has \[ \left| \int \frac{\partial Q_b}{\partial b} \rho \chi(-y|s|^{-\frac 23})\right| \lesssim \int |P_{b}||\rho|\chi(-y|s|^{-\frac 23}) +\int_{-2|b|^{-1}<y<-|b|^{-1}} |\rho| \lesssim \ln |s| \, . \] Thus, using \eqref{BS.5}, \begin{equation} \label{eq:j5} |j_{5}|\lesssim |s|^{-2+\frac \theta 2} \ln|s|\lesssim |s|^{-\frac 43 + \frac \theta 2} \, . 
\end{equation} \noindent \emph{Estimate for $j_{6}$.} Next, by \eqref{lprofile.2} and the properties of $\rho$, \begin{equation} \label{eq:j6} |j_{6}|=\left| \int \Psi_b \rho \chi(-y|s|^{-\frac 23})\right| \lesssim \|\Psi_{b} \|_{L^2} \|\rho \chi(-y|s|^{-\frac 23})\|_{L^2} \lesssim \|\Psi_b\|_{L^2} |s|^{\frac 13} \lesssim |s|^{-\frac 76}\lesssim |s|^{-\frac 43 + \frac \theta 2} \, . \end{equation} \noindent \emph{Estimate for $j_{7}$.} Note by the definition of $\chi$, \begin{equation} \label{eq:j7} | j_{7}| \lesssim |s|^{-1} \int_{|s|^{\frac 23}<y<2|s|^{\frac 23}} |\varepsilon| \lesssim |s|^{-\frac 23}\|\varepsilon\|_{{L^2}}\lesssim |s|^{-\frac 76} \lesssim |s|^{-\frac 43 + \frac \theta 2} \, . \end{equation} Therefore, combining estimates \eqref{eq:j1}-\eqref{eq:j7}, we obtain \eqref{dsJ}. Now, we prove \eqref{dsl0}. Since $\mu =(1-J)^{\frac 1{p_{0}}} \lambda$, by direct computations and then \eqref{dsJ}, \eqref{BS.2} and \eqref{bJ}, we have \begin{align*} \left|p_{0}(1-J) \left(\frac {\mu_{s}} {\mu} +b\right)\right| & =\left|p_{0}(1-J)\left(\frac {\lambda_{s}}{\lambda}+b\right) - {J_{s}}\right|\\ &\le \left| p_{0}\left(\frac {\lambda_{s}}{\lambda}+b\right) - {J_{s}}\right| + p_{0} \left|J \left(\frac {\lambda_{s}}{\lambda}+b\right)\right|\\ & \lesssim |s|^{-\frac 43 + \frac\theta 2} + |s|^{-\frac 53 + \theta} \lesssim |s|^{-\frac 43 +\frac \theta 2} \, , \end{align*} which proves \eqref{dsl0}, since $\theta <\frac 23$. \end{proof} \subsection{Closing parameter estimates} Let $S_{0}<0$ and $n_0\in \mathbb N$, with $|S_0|$ and $n_{0}$ to be fixed large enough. Set \[ \alpha = \frac 16 - \frac \theta 4\in \left(0,\frac 1{60}\right) \quad \hbox{from the condition \eqref{theta.def} on $\theta$.} \] Let $s \in \mathcal I^\star$. From~\eqref{BS.2}, \begin{equation} \label{closing.1} \Big| \frac{x_s}{\lambda}-1\Big| \lesssim |s|^{-1+\frac \theta 2} \,. 
\end{equation} Moreover, from \eqref{bJ}, \eqref{dsl0} and $b=-\lambda$, \begin{equation}\label{closing.2b} \left| \frac {\mu}{\lambda} -1 \right| \lesssim |J|\lesssim |s|^{-\frac 23 + \frac \theta2}\quad \text{and} \quad \Big| {\mu_s}-{\mu^2}\Big| \le |\mu|\left|\frac{\mu_s}{\mu}+b\right|+|\mu\lambda|\left|\frac{\mu}{\lambda}-1\right| \lesssim |s|^{-2-2\alpha} \,. \end{equation} First, we observe from \eqref{bootstrap.1}, \eqref{BS.1} and \eqref{closing.2b} that \begin{equation} \label{closing.200} \left| \lambda(s)-\frac1{|s|} \right| \le \lambda(s)\left|1-\frac{\mu(s)}{\lambda(s)}\right|+\left| \mu(s)-\frac1{|s|}\right| \lesssim |s|^{-1-\alpha} < \frac 12 |s|^{-1-\frac{\alpha}{2}}\, , \end{equation} for $|S_0|$ large enough. Since $b=-\lambda$, this strictly improves the estimates \eqref{bootstrap.2} on $b(s)$ and $\lambda(s)$. Second, we improve the estimate on $|x(s)+\ln(|s|)|$ in \eqref{bootstrap.3}. From \eqref{closing.1}, \eqref{BS.1} and \eqref{closing.200}, we have \[ \Big|x_s+\frac 1{s}\Big|\leq |x_s - \lambda (s)|+\left|\lambda(s)-\frac 1{|s|}\right| \lesssim |s|^{-1-\alpha} \, . \] Integrating on $[S_n,s]$, using $x^{in}=-\ln(|S_n|)$, we obtain \begin{equation}\label{closing.3} |x(s) +\ln(|s|)|\lesssim |s|^{-\alpha} \, . \end{equation} As before, this strictly improves \eqref{bootstrap.3} on $x(s)$ for $|S_0|$ large enough. The last step of the proof is to strictly improve \eqref{bootstrap.1} on $\mu$ by adjusting the initial value of $\lambda$, i.e. $\lambda^{in}$, using a contradiction argument. See \cite{CoMaMe} for a similar argument. Note that such an indirect argument is needed because the estimate \eqref{closing.2b} is relatively tight, which prevents us from choosing the value of $\lambda^{in}$ explicitly. Let \begin{equation}\label{corresp} \mu^{in} = \mu(S_n)=(1-J(S_{n}))^{\frac 1{p_{0}}} \lambda^{in}.
\end{equation} First, we prove that there exists at least one choice of $\mu^{in}\in [|S_n|^{-1}-|S_n|^{-1-\alpha},|S_n|^{-1}+|S_n|^{-1-\alpha}]$ that allows us to strictly improve the bootstrap bound \eqref{bootstrap.1} on $\mathcal I^\star$ using \eqref{closing.2b}. Second, since the dependence of $\mu^{in}$ on $\lambda^{in}$ in \eqref{corresp} is implicit (recall from \eqref{decomposition.2} that $\varepsilon(S_n)$, and thus $J(S_n)$, depends on $\lambda^{in}$), we need to check that the image of the map $$\lambda^{in}\in [|S_n|^{-1}-|S_n|^{-1-\frac \alpha2},|S_n|^{-1}+|S_n|^{-1-\frac \alpha2}]\mapsto \mu^{in} $$ contains the interval $ [|S_n|^{-1}-|S_n|^{-1-\alpha},|S_n|^{-1}+|S_n|^{-1-\alpha}]$. Assume for the sake of contradiction that for any $\mu^\sharp\in [-1,1]$, the choice $\mu^{in}=|S_n|^{-1} + \mu^\sharp |S_n|^{-1-\alpha}$ leads to $S^\star_n(\mu^\sharp) = S^\star_n<S_{0}$. By \eqref{closing.0}, \eqref{closing.200} and \eqref{closing.3}, we have strictly improved \eqref{bootstrap.2}, \eqref{bootstrap.3} and \eqref{bootstrap.4}. Thus, at $S_n^\star$, \eqref{bootstrap.1} is saturated, which means that \begin{equation}\label{e:sat} \Big| \mu(S^\star_n)- |S^\star_n|^{-1}\Big| = |S_n^{\star}|^{-1-\alpha}. \end{equation} Define the function $\Phi$ by \[ \Phi \, : \, \mu^\sharp\in [-1,1] \mapsto \Big( \mu(S^\star_n)- |S^\star_n|^{-1}\Big) |S_n^{\star}|^{1+\alpha} \in \{-1,1\}. \] Set \[ f(s) = \Big(\mu(s)+s^{-1}\Big)^2 (-s)^{2+2\alpha}.
\] Then, using \eqref{closing.2b} and \eqref{bootstrap.1}, \begin{align*} f'(s) & = 2 \Big(\mu(s)+s^{-1}\Big) \Big(\mu_s(s)-s^{-2}\Big)(-s)^{2+2\alpha} - 2(\alpha+1)\Big(\mu(s)+s^{-1}\Big)^2 (-s)^{1+2\alpha}\\ & = 2 \Big(\mu(s)+s^{-1}\Big) \left( \mu^2(s) - s^{-2} + \mathcal O(|s|^{-2-2\alpha})\right)(-s)^{2+2\alpha} \\ &\quad - 2(1+\alpha)\Big(\mu(s)+s^{-1}\Big)^2 (-s)^{1+2\alpha}\\ & = 2 \Big(\mu(s)+s^{-1}\Big) \left( \mu^2(s) - s^{-2} \right)(-s)^{2+2\alpha} +\mathcal O(|s|^{-1-\alpha}) \\ & \quad - 2(1+\alpha)\Big(\mu(s)+s^{-1}\Big)^2 (-s)^{1+2\alpha}\\ & = 2 \Big(\mu(s)+s^{-1}\Big)^2 \left( \mu(s) + (1+2\alpha) s^{-1} \right)(-s)^{2+2\alpha} + \mathcal O(|s|^{-1- \alpha}). \end{align*} In particular, by \eqref{e:sat}, \begin{equation}\label{closing.6} f(S^\star_n) =1 \quad \text{and} \quad f'(S^\star_n) = 4\alpha |S^\star_n|^{-1} + \mathcal O(|S^\star_n|^{-1-\alpha}) > 3\alpha |S^\star_n|^{-1}, \end{equation} for $|S^\star_n|>|S_0|$ large enough. It follows from the transversality property \eqref{closing.6} that the map $\mu^\sharp\mapsto S^\star_n$ is continuous. Indeed, first, let $\mu^\sharp\in (-1,1)$ so that $S_n<S_n^\star$, and let $0<\epsilon<S_n^\star-S_n$ be small. By \eqref{closing.6}, there exists $\delta>0$ such that $f(S_n^\star+\epsilon)>1+\delta$ and, for all $s\in [S_n,S_n^\star-\epsilon]$, $f(s)<1-\delta$. By continuity of the flow for the mBO equation, there exists $\eta>0$ such that for all $\tilde \mu^\sharp\in (-1,1)$ with $|\tilde\mu^\sharp-\mu^\sharp|<\eta$, the corresponding $\tilde f$ satisfies $|\tilde f(s)-f(s)|<\delta/2$ on $[S_n,S_n^\star+\epsilon]$. This has two consequences: first, for all $s\in [S_n,S_n^\star-\epsilon]$, $\tilde f(s)<1-\frac \delta 2$ and thus $\tilde S_n^\star> S_n^\star-\epsilon$; second, $\tilde f(S_n^\star+\epsilon)>1+\frac \delta 2$ and thus $\tilde S_n^\star\leq S_n^\star+\epsilon$, which proves continuity of $\mu^\sharp\mapsto S_n^\star$ on $(-1,1)$.
Moreover, we see that for $\mu^\sharp=-1$ and $\mu^\sharp=1$, $f(S_n)=1$ and $f'(S_n)>0$ (see \eqref{closing.6}), and thus in this case $S_n^\star=S_n$. By arguments similar to the ones before, $\mu^\sharp\mapsto S_n^\star$ is continuous on $[-1,1]$. Therefore, the function $\Phi$ is also continuous from $[-1,1]$ to $\{1,-1\}$, but this is a contradiction with $\Phi(-1)=-1$ and $\Phi(1)=1$. It follows that there exists at least one value of $\mu^\sharp\in (-1,1)$ such that $\mu^{in}=|S_n|^{-1} + \mu^\sharp |S_n|^{-1-\alpha}$ leads to $S_{n}^\star=S_{0}$. As announced, we now check that this value of $\mu^{in}$ indeed corresponds to a choice of $\lambda^{in}$ satisfying \eqref{Snstar}. This will finish the proof of Proposition \ref{bootstrapprop}. For this, we set $\Omega_n=[n^{-1}-n^{-1-\frac \alpha2},n^{-1}+n^{-1-\frac \alpha2}]$ and we study the map: \[ \lambda^{in} \in \Omega_n \mapsto \mu^{in}=(1-J(S_{n}))^{\frac 1{p_{0}}} \lambda^{in} =\left( 1- \int \varepsilon^{in}(y) \rho(y) \chi(-y n^{-\frac 23}) dy\right)^{\frac 1{p_{0}}} \lambda^{in}, \] where from \eqref{decomposition.1} and \eqref{Pb}, $\varepsilon^{in} = -a^{in}Q + \lambda^{in} \chi(y \lambda^{in}) P$. From the definition of $a^{in}$ in \eqref{ain} and \eqref{Ea}, it is clear that the map $:\lambda^{in} \mapsto a^{in}$ is independent of $n$, smooth and bijective in a neighborhood of $0$ and $\frac{d a^{in}}{d\lambda^{in}}|_{\lambda^{in}=0 }= {p_0}/{\int Q^2}>0$. Since \begin{multline*} \frac{d }{d\lambda^{in}}\int \varepsilon^{in} \rho \chi(-y n^{-\frac 23}) \\= -\frac{d a^{in}} {d\lambda^{in}} \int Q \rho \chi(-y n^{-\frac 23}) + \int \chi(y\lambda^{in}) P \rho \chi(-y n^{-\frac 23}) + \lambda^{in} \int y \chi'(y\lambda^{in}) P \rho , \end{multline*} where $|y\rho(y)|\lesssim 1$ for $y<-1$, we see that the map $:\lambda^{in} \mapsto \mu^{in}(\lambda^{in})$ is $C^1$ on $\Omega_n$ and that $\frac{d \mu^{in}}{d\lambda^{in}}|_{\lambda^{in}\in \Omega_n}\in [\frac 12,\frac 32]$ for all $n$ large enough. 
Moreover, by the properties of $P$ and $\rho$, \[\mu^{in}|_{\lambda^{in}=n^{-1}}=n^{-1} (1+\mathcal O(n^{-1}\log n)),\] \[\mu^{in}|_{\lambda^{in}=n^{-1}+n^{-1-\frac \alpha 2}} \geq \mu^{in}|_{\lambda^{in}=n^{-1} }+ \frac 12 n^{-1-\frac \alpha 2} \geq n^{-1} + 2 n^{-1-\alpha},\] for $n\geq n_0$, $n_0$ large enough. Therefore, for $n$ large enough, the map $\mu^{in}$ is one-to-one from $\Omega_n$ to $\mu^{in}(\Omega_n)$ and $\mu^{in}(\Omega_n)$ contains the interval $[n^{-1}-n^{-1- \alpha },n^{-1}+n^{-1- \alpha }]$. \section{Compactness arguments}\label{S6} Going back to the original variables $(t,x)$, we claim from Proposition~\ref{bootstrapprop} that there exist $n_{0}>0$ large and $t_{0}>0$ small such that for any $n\geq n_{0}$, the solutions $\{u_{n}\}$ defined in Sect.~\ref{s:BS} satisfy, for all $t\in [T_{n},t_{0}]$, \begin{equation}\label{CC.1} \begin{aligned} & \| \varepsilon_n(t)\|_{L^2} \lesssim t^{\frac 12}, &&\| \varepsilon_n (t)\|_{\dot H^{\frac 12}} + \| \varepsilon_n(t)\|_{L^2(y>-t^{-\frac 35})}\lesssim t^{\frac 23 + 2\alpha}. \\ & |\lambda_{n}(t)-t|+ |b_{n}(t)+t| \lesssim t^{1+\frac \alpha 2}, && \left|x_{n}(t)+|\ln t|\right| \lesssim t^{\frac \alpha 2}. \end{aligned} \end{equation} (Recall that $\alpha = \frac 16 -\frac \theta 4 \in (0,\frac 1{60})$.) Indeed, from \eqref{bootstrap.2}, we have $\lambda_{n}(s)=|s|^{-1} + \mathcal O(|s|^{-1-\frac \alpha 2})$ and so \eqref{sn} rewrites as \[ t-T_{n} = \int_{S_{n}}^s \lambda_{n}^2(s') ds' = \left( |s|^{-1} - |S_{n}|^{-1}\right) + \mathcal O(|s|^{-1-\frac \alpha 2}). \] Since $|S_{n}|^{-1}=T_{n}$, it follows that $t=|s|^{-1} + \mathcal O(|s|^{-1-\frac \alpha 2})$ as $|s|\to +\infty$ and, equivalently, \begin{equation}\label{s.to.t} |s|^{-1} = t+\mathcal O(t^{1+\frac \alpha 2}),\quad |s| = t^{-1} + \mathcal O( t^{-1+\frac \alpha 2}) \quad \hbox{as $t\downarrow 0$}. 
\end{equation} Thus, the estimates on $\lambda_{n}(t)$, $b_{n}(t)$ and $x_{n}(t)$ in \eqref{CC.1} follow directly from \eqref{bootstrap.2}, \eqref{bootstrap.3} and \eqref{s.to.t}. The estimates on $\| \varepsilon_{n}(t)\|_{L^2}$ and $\| \varepsilon_{n} (t)\|_{\dot H^{\frac 12}}$ follow from \eqref{bootstrap.4} and \eqref{BS.1}. Finally, from the definition of $\varphi$ in \eqref{def.varphi}, it follows that for $y>-|s|^{\theta}$, $\varphi(s,y)\gtrsim 1$. Thus, from \eqref{bootstrap.4}, we have \[ \int_{y>-|s|^{\theta}} \varepsilon_{n}^2(s,y) dy \lesssim |s|^{-2+\theta}. \] Since $t^{-\frac 35} < |s|^{\theta}$ for $t_{0}$ chosen small enough, we also obtain \begin{equation}\label{CC.2} \int_{y>-t^{-\frac 35}} \varepsilon_{n}^2(t,y) dy \lesssim t^{2-\theta} =t^{\frac 43 + 4 \alpha}. \end{equation} This completes the proof of \eqref{CC.1}. These estimates imply that the sequences $\{\lambda_{n}(t_{0})\}$, $\{x_{n}(t_{0})\}$ are bounded, and the sequence $\{v_{n}\}$ defined by \[ v_{n} = \lambda_{n}^{\frac 12}(t_{0}) u_{n}(t_{0},\lambda_{n}(t_{0})\cdot +x_{n}(t_{0})) =Q_{b_{n}(t_{0})} + \varepsilon_{n}(t_{0}) \] is bounded in $H^{\frac 12}$. Therefore, there exist subsequences $\{v_{n_{k}}\}$, $\{\lambda_{n_{k}}(t_{0})\}$, $\{x_{n_{k}}(t_{0})\}$ and $v_\infty\in H^{\frac 12}$, $\lambda_\infty>0$, $x_{\infty}$, such that \begin{equation} \label{CC.2b} v_{n_{k}}\underset{k \to \infty}{\rightharpoonup} v_\infty \ \text{weakly in} \ H^{\frac 12}, \quad \lambda_{n_{k}}(t_{0})\underset{k\to +\infty}{ \to} \lambda_\infty \quad \text{and} \quad x_{n_{k}}(t_{0})\underset{k \to \infty}{\to} x_\infty \, . \end{equation} Let $v_{k}$ be the maximal solution of \eqref{mBO} such that $v_{k}(0)=v_{n_{k}}$ and let $(\lambda_{k},x_{k},b_{k},\varepsilon_{k})$ be its decomposition as given by Lemma~\ref{modulation}. 
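For later use (in the computation of $\| S(t)\|_{L^2}$ at the end of this section), we record the elementary change-of-variables identity behind the normalization of $v_{n}$: with $y=\lambda_{n}(t_{0})x+x_{n}(t_{0})$,
\[
\|v_{n}\|_{L^2}^2 = \lambda_{n}(t_{0})\int u_{n}^2\big(t_{0},\lambda_{n}(t_{0})x+x_{n}(t_{0})\big)\, dx = \int u_{n}^2(t_{0},y)\, dy = \|u_{n}(t_{0})\|_{L^2}^2 ,
\]
so the rescaling preserves the $L^2$ norm and $\|v_{n}\|_{L^2}=\|u_{n}(t_{0})\|_{L^2}$.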
Then, by the scaling invariance \eqref{ulambda} and the uniqueness of the Cauchy problem, \[ v_k(t,\cdot)=\lambda_{n_k}^{\frac12}(t_0)u_{n_k}\big(t_0+\lambda_{n_k}^2(t_0)t,\lambda_{n_k}(t_0)\cdot+x_{n_k}(t_0)\big), \quad \forall \, t \in \big[-\frac{t_0-T_{n_k}}{\lambda_{n_k}^2(t_0)},0\big] \, .\] Hence, it follows from the uniqueness of the decomposition in Lemma \ref{modulation} that\begin{equation}\label{CC.3}\begin{aligned} & \lambda_{k}(t)=\frac {\lambda_{n_{k}}(t_{0}+\lambda_{n_{k}}^2(t_{0}) t)}{\lambda_{n_{k}}(t_{0})} \, , & x_{k}(t)&=\frac {x_{n_{k}}(t_{0}+\lambda_{n_{k}}^2(t_{0}) t)-x_{n_{k}}(t_{0})}{\lambda_{n_{k}}(t_{0})} \, ,\\ & b_{k}(t)=b_{n_{k}}(t_{0}+\lambda_{n_{k}}^2(t_{0}) t) \, , & \varepsilon_{k}(t)&= \varepsilon_{n_{k}}(t_{0}+\lambda_{n_{k}}^2(t_{0}) t) \, . \end{aligned}\end{equation} Now let $T_1>0$ be such that $T_1<\frac{t_0}{\lambda_\infty^2}$, so that we may apply Lemma~\ref{le:weak} to $v_k(t)$ on $[-T_1,0]$, since the conditions~\eqref{hypoweak} are fulfilled for $k$ large enough on such an interval. We obtain that the solution $v(t)$ of~\eqref{mBO} such that $v(0)=v_\infty$ exists on $\left(-\frac{t_0}{\lambda_\infty^2},0\right]$, and its decomposition $(\lambda_v,x_v,b_v,\varepsilon_v)$ satisfies, for all $t \in \left(-\frac{t_0}{\lambda_\infty^2},0\right] $, \begin{equation} \label{CC.4} \begin{aligned} \lambda_k(t) \underset{k \to +\infty}{\to} \lambda_v(t), \ x_k(t) \underset{k \to +\infty}{\to} x_v(t), \ b_k(t) \underset{k \to +\infty}{\to} b_v(t), \ \varepsilon_k(t) \underset{k \to +\infty}{\rightharpoonup} \varepsilon_v(t) \ \text{weak in} \ H^{\frac12}. \end{aligned} \end{equation} Then we define the solution $S(t)$ of~\eqref{mBO}, for all $t\in (0,t_0]$, by \[ S(t,x) = \frac{1}{\lambda_\infty^{\frac12}}\, v\left(\frac{t-t_0}{\lambda_\infty^2},\frac{x-x_\infty}{\lambda_\infty}\right), \] and denote $( \lambda, x, b, \varepsilon)$ its decomposition. 
Once again, the uniqueness of the decomposition in Lemma \ref{modulation} yields \begin{equation}\label{CC.5} \begin{aligned} & \lambda(t)=\lambda_{\infty} \lambda_v\big(\frac{t-t_0}{\lambda_{\infty}^2}\big)\, , & x(t)&=x_{\infty}+\lambda_{\infty} x_v\big(\frac{t-t_0}{\lambda_{\infty}^2}\big) \, ,\\ & b(t)=b_v\big(\frac{t-t_0}{\lambda_{\infty}^2}\big) \, , & \varepsilon(t)&= \varepsilon_v\big(\frac{t-t_0}{\lambda_{\infty}^2}\big) \, . \end{aligned} \end{equation} Therefore, gathering \eqref{CC.2b}-\eqref{CC.5}, we obtain, for all $t\in (0,t_0]$, \begin{displaymath} \begin{aligned} \lambda_{n_k}(t) \underset{k \to +\infty}{\to} \lambda(t), \ x_{n_k}(t) \underset{k \to +\infty}{\to} x(t), \ b_{n_k}(t) \underset{k \to +\infty}{\to} b(t), \ \varepsilon_{n_k}(t) \underset{k \to +\infty}{\rightharpoonup} \varepsilon(t) \ \text{weakly in} \ H^{\frac12} , \end{aligned} \end{displaymath} which implies together with \eqref{CC.1} that \begin{equation}\label{gbetanlim}\begin{aligned} &\| \varepsilon(t)\|_{L^2} \lesssim t^{\frac 12}, &&\|\varepsilon(t)\|_{\dot H^{\frac 12}}+ \| \varepsilon(t)\|_{L^2(y>-t^{-\frac 35})}\lesssim t^{\frac 23+4\alpha}, \\ &| \lambda(t) - t| +| b(t)+t|\lesssim t^{1+\frac \alpha 2}, && \left| x(t)+|\ln t|\right|\lesssim t^{\frac \alpha 2}. \end{aligned}\end{equation} Since $\lambda(t)\to 0$ as $t\downarrow 0$, the solution $S(t)$ blows up at time $0$. Moreover, by weak convergence and \eqref{inittrois}, we have \[ \| S(t)\|_{L^2} = \|v_\infty\|_{L^2} \leq \liminf_{k\to +\infty} \|v_{n_k}\|_{L^2} = \liminf_{k\to +\infty} \|u_{n_k}(t_0)\|_{L^2} = \liminf_{k\to +\infty} \|u_{n_k}(T_{n_k})\|_{L^2} = \|Q\|_{L^2}, \] and thus $\| S(t)\|_{L^2}=\|Q\|_{L^2}$ (recall that solutions with $\|u_0\|_{L^2}<\|Q\|_{L^2}$ are global and bounded in $H^{\frac 12}$). 
To finish the proof of Theorem~\ref{th1}, we recall that \begin{equation}\label{def.S} S(t,x) = \frac1{\lambda^{\frac12} (t)} \left(Q+b(t)P_{b(t)}+\varepsilon\right) \left(t,\frac{x-x(t)}{\lambda(t)}\right) \, , \end{equation} and so \begin{equation}\label{def.DS}\begin{aligned} D^{\frac 12} S(t,x) &= \frac1{\lambda^{\frac12}(t)}D^{\frac 12}Q\left(t,\frac{x-x(t)}{\lambda(t)}\right) +\frac {b(t)}{\lambda^{\frac12}(t)} D^{\frac 12} P_{b(t)}\left(t,\frac{x-x(t)}{\lambda(t)}\right) \\ &+\frac1{\lambda^{\frac12}(t)}D^{\frac 12} \varepsilon\left(t,\frac{x-x(t)}{\lambda(t)}\right) \, . \end{aligned}\end{equation} By \eqref{bd.PbRb} and \eqref{gbetanlim}, we have $\|b(t)P_{b(t)}\|_{L^2}\lesssim |b|^{\frac 12}\lesssim t^{\frac 12}$ and $\| \varepsilon\|_{L^2}\lesssim t^{\frac 12}$. Thus, \begin{equation}\label{unun} \left\| S(t) - \frac1{\lambda^{\frac12} (t)} Q \left(\frac{.-x(t)}{\lambda(t)}\right)\right\|_{L^2}\lesssim t^{\frac 12}. \end{equation} Similarly, it follows from \eqref{bd.PbRb} and \eqref{gbetanlim} that $\lambda^{-\frac 12} \|b D^{\frac 12}P_{b}\|_{L^2}\lesssim \lambda^{-\frac 12} |b| |\ln |b||^{\frac 12}\lesssim t^{\frac 12} |\ln t|^{\frac 12}$ and $\lambda^{-\frac 12} \|D^{\frac 12} \varepsilon\|_{L^2}\lesssim t^{\frac 16 + 4\alpha}$, and thus \begin{equation}\label{deuxdeux} \left\| D^{\frac 12}\left[S(t) - \frac1{\lambda^{1/2} (t)} Q \left(\frac{.-x(t)}{\lambda(t)}\right)\right] \right\|_{L^2}\lesssim t^{\frac 16}. \end{equation} To conclude, note that \eqref{unun} and \eqref{deuxdeux} imply \eqref{d:S}. \begin{remark}\label{rk.precise} Note that by \eqref{gbetanlim}, we get the estimate $|\frac{\lambda(t)}{t}-1|\lesssim t^{\frac \alpha 2}$, and thus \[ \left\| S(t) - \frac1{t^{\frac12} } Q \left(\frac{.+|\ln t|}{t}\right)\right\|_{L^2}\lesssim t^{\frac \alpha 2}, \] but such an estimate cannot be established in $H^{\frac 12}$ because of the singularity. This reflects a certain lack of precision of the ansatz $Q_{b}$. 
As observed in Remark~\ref{rk:profile}, it does not seem clear how to improve the ansatz without creating serious additional technical difficulties. \end{remark} \end{document}
\begin{document} \title[Controlling Selmer groups in the higher core rank case]{Controlling Selmer groups in the \\ higher core rank case} \author{Barry Mazur} \address{Department of Mathematics, Harvard University, Cambridge, MA 02138 USA} \email{[email protected]} \author{Karl Rubin} \address{Department of Mathematics, UC Irvine, Irvine, CA 92697 USA} \email{[email protected]} \date{\today} \subjclass[2010]{Primary 11G40, 11F80; Secondary 11R23, 11R34} \thanks{This material is based upon work supported by the National Science Foundation under grants DMS-1302409 and DMS-1065904.} \begin{abstract} We define Kolyvagin systems and Stark systems attached to $p$-adic representations in the case of arbitrary ``core rank'' (the core rank is a measure of the generic Selmer rank in a family of Selmer groups). Previous work dealt only with the case of core rank one, where the Kolyvagin and Stark systems are collections of cohomology classes. For general core rank, they are collections of elements of exterior powers of cohomology groups. We show under mild hypotheses that for general core rank these systems still control the size and structure of Selmer groups, and that the module of all Kolyvagin (or Stark) systems is free of rank one. \end{abstract} \maketitle \tableofcontents \section*{Introduction} Let $K$ be a number field and $G_K:= {\rm Gal}({\bar K}/K)$ its Galois group. Let $R$ be either a principal artinian local ring, or a discrete valuation ring, and $T$ an $R[G_K]$-module that is free over $R$ of finite rank. Let $T^*:= \mathrm{Hom}(T,\boldsymbol\mu_\infty)$ be its Cartier dual. A cohomology class $c$ in $H^1(G_K,T)$ provides (after localization and cup-product) a linear functional $\mathcal{L}_{c,v}$ on $H^1(G_{K_v},T^*)$ for any place $v$ of $K$. 
Thanks to the duality theorems of class field theory, these $\mathcal{L}_{c,v}$, when summed over all places $v$ of $K$, give a linear functional $\mathcal{L}_{c}$ that annihilates the adelic image of $H^1(G_{K},T^*)$. By imposing local conditions on the class $c$, we get a linear functional that annihilates a Selmer group in $H^1(G_{K},T^*)$. Following this thread, a systematic construction of classes $c$ can be used to control the size of Selmer groups. Even better, a sufficiently full collection (a {\em system}) of classes $c$ can sometimes be used to completely determine the structure of the relevant Selmer groups. We have just described a very vague outline of the strategy of controlling Selmer groups of Galois representations $T^*$, by systems of cohomology classes for $T$. In practice there are variants of this strategy. First, we will control the local conditions that we impose on our cohomology classes. That is, we will require our classes to lie in certain Selmer groups for $T$. But more importantly, in general one encounters situations where sufficiently many of the relevant Selmer groups for $T$ are free over $R$ of some (fixed) rank $r \ge 1$. We call $r$ the {\em core rank} of $T$; see Definition \ref{crd} below. In the natural cases that we consider, all relevant Selmer groups contain a free module of rank equal to the core rank $r$, and $r$ is maximal with respect to this property. If $R$ is a discrete valuation ring and our initial local conditions are what we call {\em unramified} (see Definition \ref{css} and Theorem \ref{dvrdm}), then under mild hypotheses the core rank $r$ of $T$ is given by the simple formula $$ r = \sum_{v\mid\infty} \mathrm{corank}\, H^0(G_{K_v}, T^*). $$ So, for example, if $T$ is the $p$-adic Tate module of an abelian variety of dimension $d$ over $K$, then the core rank is $d\,[K:{\mathbf{Q}}]$. 
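As a sanity check of this formula in the simplest case (offered only as an illustration, with $p$ odd): take $K=\mathbf{Q}$, $R=\Z_p$, and $T=T_pE$ the $p$-adic Tate module of an elliptic curve $E/\mathbf{Q}$, so that $T^*\cong E[p^\infty]$ via the Weil pairing. At the unique archimedean place,
\[
H^0(G_{\mathbf{R}},E[p^\infty]) = E(\mathbf{R})[p^\infty] \cong \Q_p/\Z_p ,
\]
since $E(\mathbf{R})$ is isomorphic as a topological group to $\mathbf{R}/\mathbf{Z}$ or to $(\mathbf{R}/\mathbf{Z})\times\mathbf{Z}/2\mathbf{Z}$ and $p$ is odd. The corank is therefore $1$, and $r=1=d\,[K:\mathbf{Q}]$, recovering the core rank one setting of \cite{kolysys}.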
To deal with the case where $r$ is greater than $1$ we will ask for elements in the $r$-th exterior powers (over $R$) of those Selmer groups, so that for every $r$ we will be seeking systems of classes in $R$-modules that are often free of rank one over $R$. One of the main aims of this article is to extend the more established theory of core rank $r=1$ (see for example \cite{kolysys}) to the case of higher core rank. We deal with two types of systems of cohomology classes: {\em Stark systems} (collections of classes generalizing the units predicted by Stark-type conjectures) and {\em Kolyvagin systems} (generalizing Kolyvagin's original formulation). Our Stark systems are similar to the ``unit systems'' that occur in the recent work of Sano \cite{sano}. There is a third type, {\em Euler systems} (see for example \cite{pr.es} or \cite{EulerSystems}), which we do not deal with in this paper. When $r=1$, Euler systems provide the crucial link (\cite[Theorem 3.2.4]{kolysys}) between Kolyvagin or Stark systems and $L$-values. We expect that when $r>1$ there is still a connection between Euler systems on the one hand, and Stark and Kolyvagin systems on the other, but this connection is still mysterious. For an example of the sort of connection that we expect, see the forthcoming paper \cite{gen.darmon}. The Euler systems that have been already constructed in the literature, or that are conjectured to exist, are motivic: they come from arithmetic objects such as circular units or more generally the conjectural Stark units; or---in another context---Heegner points; or elements of $K$-theory. Euler systems are `vertically configured' in the sense that they provide classes in many abelian extensions of the base number field, and the classes cohere via norm projection from one abelian extension to a smaller one when modified by the multiplication of appropriate `Euler factors' (hence the terminology `Euler system'). 
On the other hand, the Stark and Kolyvagin systems are `horizontally configured' in the sense that they consist only of cohomology classes over the base number field, but conform to a range of local conditions. The local conditions for Stark systems are more elementary and---correspondingly---the Stark systems are somewhat easier to handle than Kolyvagin systems. In contrast, the local conditions for Kolyvagin systems connect more directly with the changes of local conditions that arise from twisting the Galois representation $T$ by characters. One of the main results of this paper (Theorem \ref{mthm3'}) is that---under suitable hypotheses, but for general core rank---there is an equivalence between Stark systems and special Kolyvagin systems that we call {\em stub Kolyvagin systems}, and, up to a scalar unit, there is a unique `best' Stark (equivalently: stub Kolyvagin) system (Theorems \ref{magic} and \ref{rankdvr}). We show, as mentioned in the title of this article, that the corresponding Selmer modules are controlled by (either of) these systems (Theorems \ref{korank} and \ref{korankk}), in the sense that there is a relatively simple description of the elementary divisors (and hence the isomorphy type) of the Selmer group of $T$ starting with any Stark or stub Kolyvagin system. When the core rank is one, every Kolyvagin system is a stub Kolyvagin system \cite[Theorem 4.4.1]{kolysys}. Although we have restricted our scalar rings $R$ to be either principal artinian local rings or complete discrete valuation rings with finite residue field, it is natural to wish to extend the format of our systems of cohomology classes to encompass Galois representations $T$ that are free of finite rank over more general complete local rings, so as to be able to deal effectively with deformational questions. 
\subsection*{Layout of the paper} In Part \ref{part1} (sections \ref{lcg}--\ref{esexs}) we recall basic facts that we will need about local and global cohomology groups, and define our abstract Selmer groups and the core rank. In Part \ref{part2} (sections \ref{Ssec}--\ref{extra}) we define Stark systems and investigate the relations between Stark systems and the structure of Selmer groups. Part \ref{part3} (sections \ref{sog}--\ref{m0pf}) deals with Kolyvagin systems, and the relation between Kolyvagin systems and Stark systems. The results of \cite{kolysys} were restricted to the case where the base field $K$ is $\mathbf{Q}$. In many cases the proofs for general $K$ are the same, and in those cases we will feel free to use results from \cite{kolysys} without further comment. \subsection*{Notation} Fix a rational prime $p$. Throughout this paper, $R$ will denote a complete, noetherian, local principal ideal domain with finite residue field of characteristic $p$. Let $\mathbf{m}$ denote the maximal ideal of $R$. The basic cases to keep in mind are $R = \mathbf{Z}/p^n\mathbf{Z}$ or $R = \Z_p$. If $K$ is a field, ${\bar{K}}$ will denote a fixed separable closure of $K$ and $G_K := \mathrm{Gal}({\bar{K}}/K)$. If $A$ is an $R$-module and $I$ is an ideal of $R$, we will write $A[I]$ for the submodule of $A$ killed by $I$. If $A$ is a $G_K$-module, we write $K(A)$ for the fixed field in ${\bar{K}}$ of the kernel of the map $G_K \to \mathrm{Aut}(A)$. If a group $H$ acts on a set $X$, then the subset of elements of $X$ fixed by $H$ is denoted $X^H$. If $n$ is a positive integer, $\boldsymbol\mu_n$ will denote the group of $n$-th roots of unity in ${\bar{K}}$. \part{Cohomology groups and Selmer structures} \label{part1} \section{Local cohomology groups} \label{lcg} For this section $K$ will be a local field (archimedean or nonarchimedean). 
If $K$ is nonarchimedean let $\mathcal{O}$ be the ring of integers in $K$, $\mathbf{F}$ its residue field, $K^{\mathrm{ur}}\subset {\bar K}$ the maximal unramified subfield of ${\bar K}$, and $\mathcal{I}$ the inertia group $\mathrm{Gal}({\bar K}/K^{\mathrm{ur}})$, so $G_\mathbf{F} = G_K/\mathcal{I} = \mathrm{Gal}(K^{\mathrm{ur}}/K)$. Fix an $R$-module $T$ endowed with a continuous $G_K$-action. By $H^*(K,T) := H^*(G_K,T)$ we mean cohomology computed with respect to continuous cochains. \begin{defn} \label{locconddef} A {\em local condition} on $T$ (over $K$) is a choice of an $R$-sub\-module of $H^1(K,T)$. If we refer to the local condition by a symbol, say $\mathcal{F}$, we will denote the corresponding $R$-submodule $ \HS{\FF}(K,T) \subset H^1(K,T). $ If $I$ is an ideal of $R$, then a local condition on $T$ induces local conditions on $T/IT$ and $T[I]$ by taking $\HS{\FF}(K,T/IT)$ and $\HS{\FF}(K,T[I])$ to be the image and inverse image, respectively, of $\HS{\FF}(K,T)$ under the maps induced by $$ T \twoheadrightarrow T/IT, \qquad T[I] \hookrightarrow T. $$ One can similarly propagate the local condition $\mathcal{F}$ canonically to arbitrary subquotients of $T$, and if $R \to R'$ is a homomorphism of complete noetherian local PID's, then $\mathcal{F}$ induces a local condition on the $R'$-module $T \otimes_R R'$. \end{defn} \begin{defn} \label{conds} Suppose $K$ is nonarchimedean and $T$ is unramified (i.e., $\mathcal{I}$ acts trivially on $T$). Define the {\em finite} (or {\em unramified}) local condition by $$ \HS{\f}(K,T) := \ker\bigl[H^1(K,T) \to H^1(K^{\mathrm{ur}},T)\bigr] = H^1(K^\mathrm{ur}/K,T). $$ More generally, if $L$ is a Galois extension of $K$ we define the {\em $L$-transverse} local condition by $$ \HS{\text{$L$-${\mathrm{tr}}$}}(K,T) := \ker\bigl[H^1(K,T) \to H^1(L,T)\bigr] = H^1(L/K,T^{G_L}). 
$$ \end{defn} Suppose for the rest of this section that the local field $K$ is nonarchimedean, the $R$-module $T$ is of finite type, and the action of $G_K$ on $T$ is unramified. Fix a totally tamely ramified cyclic extension $L$ of $K$ such that $[L:K]$ annihilates $T$. We will write simply $\HS{\tr}(K,T)$ for $\HS{\text{$L$-${\mathrm{tr}}$}}(K,T) \subset H^1(K,T)$. \begin{lem} \label{isofunct} \begin{enumerate} \item The composition $$ \HS{\tr}(K,T) \hookrightarrow H^1(K,T) \twoheadrightarrow H^1(K,T)/\HS{\f}(K,T) $$ is an isomorphism, so there is a canonical splitting $$ H^1(K,T) = \HS{\f}(K,T) \oplus \HS{\tr}(K,T). $$ \end{enumerate} There are canonical functorial isomorphisms \begin{enumerate} \addtocounter{enumi}{1} \item $\HS{\f}(K,T) \cong T/(\mathrm{Fr}-1)T$, \item $\HS{\tr}(K,T) \cong \mathrm{Hom}(\mathcal{I},T^{\mathrm{Fr}=1})$, \quad $\HS{\tr}(K,T) \otimes \mathrm{Gal}(L/K) \cong T^{\mathrm{Fr}=1}$. \end{enumerate} \end{lem} \begin{proof} Assertion (i) is \cite[Lemma 1.2.4]{kolysys}. The rest is well known; see for example \cite[Lemma 1.2.1]{kolysys}. \end{proof} \begin{defn} \label{fsmap} Suppose that $T$ is free of finite rank as an $R$-module, and that $\det(1-\mathrm{Fr} \;|\; T) = 0$. Define $P(x) \in R[x]$ by $$ P(x) := \det(1-\mathrm{Fr}\;x \;|\; T). $$ Since $P(1) = 0$, there is a unique polynomial $Q(x) \in R[x]$ such that $$ (x-1)Q(x) = P(x) \quad \text{in $R[x]$}. $$ By the Cayley-Hamilton theorem, $P(\mathrm{Fr}^{-1})$ annihilates $T$, so $Q(\mathrm{Fr}^{-1}) T \subset T^{\mathrm{Fr}=1}$. We define the {\em finite-singular comparison map} $\phi^{\mathrm{fs}}$ on $T$ to be the composition, using the isomorphisms of Lemma \ref{isofunct}(ii,iii), $$ \HS{\f}(K,T) \map{\sim} T/(\mathrm{Fr}-1)T \map{Q(\mathrm{Fr}^{-1})} T^{\mathrm{Fr}=1} \map{\sim} \HS{\tr}(K,T) \otimes \mathrm{Gal}(L/K). $$ \end{defn} \begin{lem} \label{applemma} Suppose that $T$ is free of finite rank over $R$, and that $T/(\mathrm{Fr}-1)T$ is a free $R$-module of rank one. 
Then $\det(1-\mathrm{Fr} \;|\; T) = 0$ and the map $$ \phi^{\mathrm{fs}} : \HS{\f}(K,T) \longrightarrow \HS{\tr}(K,T) \otimes \mathrm{Gal}(L/K) $$ of Definition \ref{fsmap} is an isomorphism. In particular both $\HS{\f}(K,T)$ and $\HS{\tr}(K,T)$ are free of rank one over $R$. \end{lem} \begin{proof} This is \cite[Lemma 1.2.3]{kolysys}. \end{proof} \begin{defn} \label{locdualdef} Define the {\em dual} of $T$ to be the $R[[G_K]]$-module $$ T^* := \mathrm{Hom}(T,\boldsymbol\mu_{p^\infty}). $$ We have the (perfect) local Tate cup product pairing $$ \pair{~}{~} : H^1(K,T) \times H^1(K,T^*) \longrightarrow H^2(K,\boldsymbol\mu_{p^\infty}) \map{\;\sim\;} \Q_p/\Z_p. $$ A local condition $\mathcal{F}$ for $T$ determines a local condition $\mathcal{F}^*$ for $T^*$, by taking $\HS{\FF^*}(K,T^*)$ to be the orthogonal complement of $\HS{\FF}(K,T)$ under the Tate pairing $\pair{~}{~}$. \end{defn} \begin{prop} \label{h1fdual} With notation as above, we have: \begin{enumerate} \item $\HS{\f}(K,T)$ and $\HS{\f}(K,T^*)$ are orthogonal complements under $\pair{~}{~}$. \item $\HS{\tr}(K,T)$ and $\HS{\tr}(K,T^*)$ are orthogonal complements under $\pair{~}{~}$. \end{enumerate} \end{prop} \begin{proof} The first assertion is (for example) Theorem I.2.6 of \cite{milne}. Both assertions are \cite[Lemma 1.3.2]{kolysys}. \end{proof} \section{Global cohomology groups and Selmer structures} \label{gcg} For the rest of this paper, $K$ will be a number field and $T$ will be a finitely generated free $R$-module with a continuous action of $G_K$, that is unramified outside a finite set of primes. {\em Global notation.} Let ${\bar{K}} \subset \mathbf{C}$ be the algebraic closure of $K$ in $\mathbf{C}$, and for each prime $\mathfrak{q}$ of $K$ fix an algebraic closure $\shortoverline{\Kl}$ of $K_\l$ containing ${\bar{K}}$. This determines a choice of extension of $\mathfrak{q}$ to ${\bar{K}}$. 
Let $\mathcal{D}_\mathfrak{q} := \mathrm{Gal}(\shortoverline{\Kl}/K_\l)$, which we identify with a closed subgroup of $G_K := \mathrm{Gal}({\bar{K}}/K)$. In other words $\mathcal{D}_\mathfrak{q}$ is a particular decomposition group at $\mathfrak{q}$ in $G_K$, and $H^1(\mathcal{D}_\mathfrak{q},T) = H^1(K_\l,T)$. Let $\mathcal{I}_\mathfrak{q} \subset \mathcal{D}_\mathfrak{q}$ be the inertia group, and $\Fr_{\l} \in \mathcal{D}_\mathfrak{q}/\mathcal{I}_\mathfrak{q}$ the Frobenius element. If $T$ is unramified at $\mathfrak{q}$, then $\mathcal{D}_\mathfrak{q}/\mathcal{I}_\mathfrak{q}$ acts on $T$, and hence so does $\Fr_{\l}$. If we choose a different decomposition group at $\mathfrak{q}$, then the action of $\Fr_{\l}$ changes by conjugation in $G_K$. We will write $\mathrm{loc}_\mathfrak{q}$ for the localization map $H^1(K,T) \to H^1(K_\l,T)$. If $\mathfrak{q}$ is a prime of $K$, let $K(\mathfrak{q})$ denote the $p$-part of the ray class field of $K$ modulo $\mathfrak{q}$ (i.e., the maximal $p$-power extension of $K$ in the ray class field), and $K(\mathfrak{q})_\mathfrak{q}$ the completion of $K(\mathfrak{q})$ at the chosen prime above $\mathfrak{q}$. If $\mathfrak{q}$ is principal then $K(\mathfrak{q})_\mathfrak{q}/K_\mathfrak{q}$ is cyclic and totally tamely ramified. If $\mathfrak{q}$ is principal, $T$ is unramified at $\mathfrak{q}$, and $[K(\mathfrak{q})_\mathfrak{q}:K_\mathfrak{q}]T = 0$, the {\em transverse sub\-module} of $H^1(K_\l,T)$ is the submodule $$ \HS{\tr}(K_\l,T) := \HS{\text{$K(\mathfrak{q})_\mathfrak{q}$-${\mathrm{tr}}$}}(K_\l,T) = \ker\bigl[H^1(K_\l,T) \to H^1(K(\mathfrak{q})_\mathfrak{q},T)\bigr] $$ of Definition \ref{conds}. 
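To make $K(\mathfrak{q})$ concrete in the simplest case, here is a standard computation, stated for $K=\mathbf{Q}$ and $p$ odd (it is offered only as an illustration and is not used below). For a rational prime $q\ne p$, the ray class field of $\mathbf{Q}$ modulo $q$ is the maximal real subfield $\mathbf{Q}(\boldsymbol\mu_q)^+$, whose Galois group over $\mathbf{Q}$ is cyclic of order $(q-1)/2$ and in which $q$ is totally ramified. Since $p$ is odd, $v_p((q-1)/2)=v_p(q-1)$, so $K(\mathfrak{q})$ is the unique subfield of $\mathbf{Q}(\boldsymbol\mu_q)$ with
\[
[K(\mathfrak{q}):\mathbf{Q}] = [K(\mathfrak{q})_\mathfrak{q}:\mathbf{Q}_q] = p^{\,v_p(q-1)},
\]
and $K(\mathfrak{q})_\mathfrak{q}/\mathbf{Q}_q$ is cyclic and totally tamely ramified (its degree is prime to $q$), as asserted above. In particular, if $p^kT=0$, then the condition $[K(\mathfrak{q})_\mathfrak{q}:K_\mathfrak{q}]\,T=0$ holds whenever $q\equiv 1\pmod{p^k}$.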
\begin{defn} \label{selmerdef} A {\em Selmer structure} $\mathcal{F}$ on $T$ is a collection of the following data: \begin{itemize} \item a finite set $\Sigma(\mathcal{F})$ of places of $K$, including all infinite places, all primes above $p$, and all primes where $T$ is ramified, \item for every $\mathfrak{q} \in \Sigma(\mathcal{F})$ (including archimedean places), a local condition (in the sense of Definition \ref{locconddef}) on $T$ over $K_\mathfrak{q}$, i.e., a choice of $R$-submodule $ \HS{\FF}(K_\l,T) \subset H^1(K_\l,T). $ \end{itemize} If $\mathcal{F}$ is a Selmer structure, we define the {\em Selmer module} $\HS{\FF}(K,T) \subset H^1(K,T)$ to be the kernel of the sum of restriction maps $$ H^1(K_{\Sigma(\mathcal{F})}/K,T) \longrightarrow \dirsum{\mathfrak{q}\in\Sigma(\mathcal{F})}\left(H^1(K_\l,T)/\HS{\FF}(K_\l,T)\right) $$ where $K_{\Sigma(\mathcal{F})}$ denotes the maximal extension of $K$ that is unramified outside $\Sigma(\mathcal{F})$. In other words, $\HS{\FF}(K,T)$ consists of all classes which are unramified (or equivalently, finite) outside of $\Sigma(\mathcal{F})$ and which locally at $\mathfrak{q}$ belong to $\HS{\FF}(K_\l,T)$ for every $\mathfrak{q} \in \Sigma(\mathcal{F})$. \end{defn} For examples of Selmer structures see \cite{kolysys}. Note that if $\mathcal{F}$ is a Selmer structure on $T$ and $I$ is an ideal of $R$, then $\mathcal{F}$ induces canonically (see Definition \ref {locconddef}) Selmer structures on the $R/I$-modules $T/IT$ and $T[I]$, that we will also denote by $\mathcal{F}$. \begin{defn} \label{pndef} Suppose now that $T$ is free over $R$, $\mathfrak{q} \nmid p\infty$ is prime, and $T$ is unramified at $\mathfrak{q}$. If $\mathfrak{q}$ is not principal, let $I_\l := R$. 
If $\mathfrak{q}$ is principal, let $I_\l \subset R$ be the largest power of $\mathbf{m}$ (i.e., $\mathbf{m}^k$ with $k \ge 0$ maximal) such that $[K(\mathfrak{q})_\mathfrak{q}:K_\l]R \subset I_\l$ and $T/((\Fr_{\l}-1)T+I_\l T)$ is free of rank one over $R/I_\l$. Let $\mathcal{P}$ denote a set of prime ideals of $K$, disjoint from $\Sigma(\mathcal{F})$. Typically $\mathcal{P}$ will be a set of positive density. Define a filtration $\mathcal{P} \supset \mathcal{P}_1 \supset \mathcal{P}_2 \supset \cdots$ by $$ \PP_k = \{\mathfrak{q} \in \mathcal{P} : I_\mathfrak{q} \subset \mathbf{m}^k\} $$ for $k \ge 1$. Let $\mathcal{N} := \mathcal{N}(\mathcal{P})$ denote the set of squarefree products of primes in $\mathcal{P}$ (with the convention that the trivial ideal $1 \in \mathcal{N}$). Let $I_1 := 0$ and if $\mathfrak{n} \in \mathcal{N}$, $\mathfrak{n} \ne 1$, define $$ I_\n := \sum_{\mathfrak{q} \mid \mathfrak{n}} I_\l \subset R. $$ \end{defn} \begin{defn} \label{FFabc} Suppose $\mathcal{F}$ is a Selmer structure, and $\mathfrak{a}, \mathfrak{b}, \mathfrak{n}$ are pairwise relatively prime ideals of $K$ with $\mathfrak{n} \in\mathcal{N}$ and $I_\n T = 0$. Define a new Selmer structure $\Fsrf{\mathfrak{a}}{\mathfrak{b}}{\mathfrak{n}}$ by \begin{itemize} \item $\Sigma(\Fsrf{\mathfrak{a}}{\mathfrak{b}}{\mathfrak{n}}) := \Sigma(\mathcal{F}) \cup \{\mathfrak{q} : \mathfrak{q} \mid \mathfrak{a}\mathfrak{b}\mathfrak{n}\}$, \item $ \HS{\Fsrf{\mathfrak{a}}{\mathfrak{b}}{\mathfrak{n}}}(K_\l,T) := \begin{cases} \HS{\FF}(K_\l,T) & \text{if $\mathfrak{q} \in \Sigma(\mathcal{F})$},\\ 0 & \text{if $\mathfrak{q} \mid \mathfrak{a}$}, \\ H^1(K_\l,T) & \text{if $\mathfrak{q} \mid \mathfrak{b}$}, \\ \HS{\tr}(K_\l,T) & \text{if $\mathfrak{q} \mid \mathfrak{n}$}. 
\end{cases} $ \end{itemize} In other words, $\Fsrf{\mathfrak{a}}{\mathfrak{b}}{\mathfrak{n}}$ consists of $\mathcal{F}$ together with the strict condition at primes dividing $\mathfrak{a}$, the unrestricted condition at primes dividing $\mathfrak{b}$, and the transverse condition at primes dividing $\mathfrak{n}$. \end{defn} If any of $\mathfrak{a}, \mathfrak{b}, \mathfrak{n}$ are the trivial ideal, we may suppress them from the notation. For example, we will be especially interested in Selmer groups of the form \begin{align*} \HS{\mathcal{F}^\mathfrak{n}}(K,T) &: \text{no restriction at $\mathfrak{q}$ dividing $\mathfrak{n}$, same as $\mathcal{F}$ elsewhere,} \\ \HS{\FFf{\n}}(K,T/I_\n T) &: \text{transverse condition at $\mathfrak{q}$ dividing $\mathfrak{n}$, same as $\mathcal{F}$ elsewhere.} \end{align*} If $\mathfrak{m}\mid\mathfrak{n} \in \mathcal{N}$, the definition leads to an exact sequence \begin{equation} \label{b} 0 \longrightarrow \HS{\mathcal{F}^\mathfrak{m}}(K,T) \longrightarrow \HS{\mathcal{F}^\mathfrak{n}}(K,T) \longrightarrow \dirsum{\mathfrak{q}\mid(\mathfrak{n}/\mathfrak{m})} H^1(K_\mathfrak{q},T)/\HS{\f}(K_\mathfrak{q},T). \end{equation} \begin{defn} \label{globdualdef} The {\em dual} of $T$ is the $R[[G_K]]$-module $ T^* := \mathrm{Hom}(T,\boldsymbol\mu_{p^\infty}). $ For every $\mathfrak{q}$ we have the local Tate pairing $$ \pairl{~}{~} : H^1(K_\l,T) \times H^1(K_\l,T^*) \longrightarrow \Q_p/\Z_p $$ as in \S\ref{lcg}. Just as every local condition on $T$ determines a local condition on $T^*$ (Definition \ref{locdualdef}), a Selmer structure $\mathcal{F}$ for $T$ determines a Selmer structure $\mathcal{F}^*$ for $T^*$. Namely, take $\Sigma(\mathcal{F}^*) := \Sigma(\mathcal{F})$, and for $\mathfrak{q} \in \Sigma(\mathcal{F})$ take $\HS{\FF^*}(K_\l,T^*)$ to be the local condition induced by $\mathcal{F}$, i.e., the orthogonal complement of $\HS{\FF}(K_\l,T)$ under $\pairl{~}{~}$. 
\end{defn} \section{Selmer structures and the core rank} \label{notsscr} Suppose for this section that $R$ is a principal local ring. We continue to assume for the rest of this paper that $T$ is free of finite rank over $R$, in addition to being a $G_K$-module.
\begin{defn} \label{cardef} A Selmer structure $\mathcal{F}$ on $T$ is called {\em cartesian} if for every $\mathfrak{q} \in \Sigma(\mathcal{F})$, the local condition $\mathcal{F}$ at $\mathfrak{q}$ is ``cartesian on the category of quotients of $T$'' as defined in \cite[Definition 1.1.4]{kolysys}. \end{defn}
\begin{rem} \label{5.2} If $\mathcal{F}$ is cartesian then for every $k$ the induced Selmer structure on the $R/\mathbf{m}^k$-module $T/\mathbf{m}^kT$ is cartesian. If $R$ is a field (i.e., $\mathbf{m} = 0$) then every Selmer structure on $T$ is cartesian. If $R$ is a discrete valuation ring and $H^1(K_\l,T)/\HS{\FF}(K_\l,T)$ is torsion-free for every $\mathfrak{q} \in \Sigma(\mathcal{F})$, then $\mathcal{F}$ is cartesian (see \cite[Lemma 3.7.1(i)]{kolysys}). \end{rem}
\begin{prop} \label{stprops} Suppose $R$ is a principal artinian local ring of length $k$ (i.e., $\mathbf{m}^k = 0$ and $\mathbf{m}^{k-1} \ne 0$), $\mathcal{F}$ is a cartesian Selmer structure on $T$, and $T^{G_K} = (T^*)^{G_K} = 0$. If $\mathfrak{n}\in\mathcal{N}$ and $I_\n = 0$ then: \begin{enumerate} \item the exact sequence $$ 0 \longrightarrow T/\mathbf{m}^i T \longrightarrow T \longrightarrow T/\mathbf{m}^{k-i}T \longrightarrow 0 $$ induces an isomorphism $\HS{\FFf{\n}}(K,T/\mathbf{m}^i T) \map{\sim} \HS{\FFf{\n}}(K,T)[\mathbf{m}^i]$ and an exact sequence $$ 0 \longrightarrow \HS{\FFf{\n}}(K,T)[\mathbf{m}^i] \longrightarrow \HS{\FFf{\n}}(K,T) \longrightarrow \HS{\FFf{\n}}(K,T/\mathbf{m}^{k-i}T). $$ \item the inclusion $T^*[\mathbf{m}^i] \hookrightarrow T^*$ induces an isomorphism $$ \HS{\FFf{\n}^*}(K,T^*[\mathbf{m}^i]) \map{\;\sim\;} \HS{\FFf{\n}^*}(K,T^*)[\mathbf{m}^i].
$$ \item there is a unique integer $r$, independent of $\mathfrak{n}$, such that there is a noncanonical isomorphism $$ \renewcommand{\arraystretch}{1.5} \arraycolsep=2pt \begin{array}{rcll} \HS{\FFf{\n}}(K,T) &\cong& \HS{\FFf{\n}^*}(K,T^*) \oplus R^r & \text{~if $r \ge 0$}, \\ \HS{\FFf{\n}}(K,T) \oplus R^{-r} &\cong& \HS{\FFf{\n}^*}(K,T^*) & \text{~if $r \le 0$}. \end{array} $$ \end{enumerate} \end{prop}
\begin{proof} These assertions are \cite[Lemma 3.5.4]{kolysys}, \cite[Lemma 3.5.3]{kolysys}, and \cite[Theorem 4.1.5]{kolysys}, respectively. \end{proof}
\begin{defn} \label{crd} Suppose $\mathcal{F}$ is a cartesian Selmer structure on $T$. If $R$ is artinian, then the {\em core rank} of $(T,\mathcal{F})$ is the integer $r$ of Proposition \ref{stprops}(iii). If $R$ is a discrete valuation ring, then the core rank of $(T,\mathcal{F})$ is the core rank of $(T/\mathbf{m}^k T,\mathcal{F})$ for every $k > 0$, which by Proposition \ref{stprops} is independent of $k$. We will denote the core rank by $\chi(T,\mathcal{F})$, or simply $\chi(T)$ when $\mathcal{F}$ is understood. \end{defn}
For $\mathfrak{n}\in\mathcal{N}$, let $\nu(\mathfrak{n})$ denote the number of primes dividing $\mathfrak{n}$.
\begin{cor} \label{threefive} Suppose $R$ is artinian, $\chi(T) \ge 0$, $\mathfrak{n}\in\mathcal{N}$, and $I_\n = 0$. Let $\lambda(\mathfrak{n}) := \mathrm{length}(\HS{\FFf{\n}^*}(K,T^*))$ and $\mu(\mathfrak{n}) := \mathrm{length}(\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*))$.
There are noncanonical isomorphisms \begin{enumerate} \item $\HS{\FFf{\n}}(K,T) \cong \HS{\FFf{\n}^*}(K,T^*) \oplus R^{\chi(T)}$, \item $\HS{\mathcal{F}^\mathfrak{n}}(K,T) \cong \HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*) \oplus R^{\chi(T)+\nu(\mathfrak{n})}$, \item $\mathbf{m}^{\lambda(\mathfrak{n})}\wedge^{\chi(T)}\HS{\FFf{\n}}(K,T) \cong \mathbf{m}^{\lambda(\mathfrak{n})}$, \item $\mathbf{m}^{\mu(\mathfrak{n})}\wedge^{\chi(T)+\nu(\mathfrak{n})}\HS{\mathcal{F}^\mathfrak{n}}(K,T) \cong \mathbf{m}^{\mu(\mathfrak{n})}$. \end{enumerate} \end{cor} \begin{proof} The first isomorphism is just Proposition \ref{stprops}(iii). For (ii), observe that the Selmer structure $\mathcal{F}^\mathfrak{n}$ is cartesian by \cite[Lemma 3.7.1(i)]{kolysys}, so applying Proposition \ref{stprops}(iii) to $(T,\mathcal{F}^\mathfrak{n})$ we have $ \HS{\mathcal{F}^\mathfrak{n}}(K,T) \cong \HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*) \oplus R^{\chi(T,\mathcal{F}^\mathfrak{n})}. $ To complete the proof of (ii) we need only show that $\chi(T,\mathcal{F}^\mathfrak{n}) = \chi(T)+\nu(\mathfrak{n})$, and this follows without difficulty from Poitou-Tate global duality (see for example \cite[Theorem 2.3.4]{kolysys}). Assertions (iii) and (iv) follow directly from (i) and (ii), respectively. \end{proof} \section{Running hypotheses} \label{sscr} \begin{defn} \label{sddef} By {\em Selmer data} we mean a tuple $(T,\mathcal{F},\mathcal{P},r)$ where \begin{itemize} \item $T$ is a $G_K$-module, free of finite rank over $R$, unramified outside finitely many primes, \item $\mathcal{F}$ is a Selmer structure on $T$, \item $\mathcal{P}$ is a set of primes of $K$ disjoint from $\Sigma(\mathcal{F})$, \item $r \ge 1$. 
\end{itemize} \end{defn}
\begin{defn} \label{mide} If $L$ is a finite Galois extension of $K$ and $\tau \in G_K$, define \begin{multline*} \mathcal{P}(L,\tau) := \{\text{primes $\mathfrak{q} \notin \Sigma(\mathcal{F})$ : $\mathfrak{q}$ is unramified in $L/K$} \\ \text{and $\Fr_{\l}$ is conjugate to $\tau$ in $\mathrm{Gal}(L/K)$}\}. \end{multline*} \end{defn}
Fix Selmer data $(T,\mathcal{F},\mathcal{P},r)$ as in Definition \ref{sddef}. Let ${\bar{T}} = T/\mathbf{m} T$, so ${\bar{T}}^* = T^*[\mathbf{m}]$. If $R$ is artinian, let $M$ denote the smallest power of $p$ such that $MR = 0$. If $R$ is a discrete valuation ring, let $M := p^\infty$. Let $\mathcal{H}$ denote the Hilbert class field of $K$, and $\mathcal{H}_M := \mathcal{H}(\boldsymbol\mu_M,(\mathcal{O}_K^\times)^{1/M})$. Let $\Bbbk$ denote the residue field $R/\mathbf{m}$. In order to obtain the strongest results, we will usually make the following additional assumptions.
{\renewcommand{\theenumi}{H.\arabic{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate} \item \label{h.1} ${\bar{T}}^{G_K} = ({\bar{T}}^*)^{G_K} = 0$ and ${\bar{T}}$ is an absolutely irreducible $\Bbbk[[G_K]]$-module, \item \label{h.2} there is a $\tau \in \mathrm{Gal}({\bar{K}}/\mathcal{H}_M)$ and a finite Galois extension $L$ of $K$ in $\mathcal{H}_M$ such that $T/(\tau-1)T$ is free of rank one over $R$ and $\mathcal{P}(L,\tau) \subset \mathcal{P}$, \item \label{h.3} $H^1(\mathcal{H}_M(T)/K,T/\mathbf{m} T) = H^1(\mathcal{H}_M(T)/K,T^*[\mathbf{m}]) = 0$, \item \label{h.4} either ${\bar{T}} \not\cong {\bar{T}}^*$ as $\Bbbk[[G_K]]$-modules, or $p > 3$, \item \label{h.5} the Selmer structure $\mathcal{F}$ is cartesian (Definition \ref{cardef}), \item \label{h.6} $r = \chi(T) > 0$, where $\chi(T)$ is the core rank of $T$. \end{enumerate} (Only) when $R$ is artinian, we will also sometimes assume \begin{enumerate} \addtocounter{enumi}{6} \item \label{h.7} $I_\l = 0$ for every $\mathfrak{q} \in \mathcal{P}$.
\end{enumerate} }
\begin{rem} \label{5.7} Note that if the above properties hold for $(T,\mathcal{F},\mathcal{P},r)$, then they also hold if $R$ is replaced by $R/\mathbf{m}^k$ and $T$ by $T/\mathbf{m}^kT$, for $k \ge 0$. If $R$ is artinian and \ref{h.1} through \ref{h.6} hold, then Lemma \ref{okl} below shows that \ref{h.1} through \ref{h.7} hold if we replace $L$ by $\mathcal{H}_M$ and $\mathcal{P}$ by $\mathcal{P}(\mathcal{H}_M,\tau)$. \end{rem}
\begin{rem} Assumption \ref{h.5} is needed to have a well-defined notion of core rank. Assumption \ref{h.2} is needed to provide us with a large selection of primes $\mathfrak{q}$ such that $T/((\Fr_{\l}-1)T+\mathbf{m}^kT)$ is free of rank one over $R/\mathbf{m}^k$, for large $k$. We deduce from assumption \ref{h.3} that restriction from $K$ to $\mathcal{H}_M(T)$ is injective on the Selmer group; this allows us to view Selmer classes in $\mathrm{Hom}(G_{\mathcal{H}_M(T)},T)$. Assumptions \ref{h.1} and \ref{h.4} then allow us to satisfy various Cebotarev conditions simultaneously. \end{rem}
\begin{lem} \label{okl} Suppose $R$ is artinian and $\tau$ is as in {\rm \ref{h.2}}. If $\mathfrak{q} \in \mathcal{P}(\mathcal{H}_M,\tau)$, then $I_\l = 0$. \end{lem}
\begin{proof} Since $\Fr_{\l}$ fixes $\mathcal{H}$, $\mathfrak{q}$ is principal. By class field theory we have \begin{equation} \label{2.4.1} \mathrm{Gal}(K(\mathfrak{q})_\mathfrak{q}/K_\l) \cong (\mathcal{O}_K/\mathfrak{q})^\times/\mathrm{image}(\mathcal{O}_K^\times). \end{equation} Since $\tau$ acts trivially on $\boldsymbol\mu_M$, so does $\Fr_{\l}$, so $(\mathcal{O}_K/\mathfrak{q})^\times$ is cyclic of order divisible by $M$. Since $\tau$ acts trivially on $(\mathcal{O}_K^\times)^{1/M}$, so does $\Fr_{\l}$, so the reduction of $\mathcal{O}_K^\times$ is contained in $((\mathcal{O}_K/\mathfrak{q})^\times)^M$. By \eqref{2.4.1} we conclude that $[K(\mathfrak{q})_\mathfrak{q}:K_\l]$ is divisible by $M$, so $[K(\mathfrak{q})_\mathfrak{q}:K_\l]R = 0$.
We also have that $T/(\Fr_{\l}-1)T \cong T/(\tau-1)T$ is free of rank one over $R$, so the lemma follows from the definition of $I_\l$. \end{proof} \section{Examples} \label{esexs} \subsection{A canonical Selmer structure} \begin{defn} \label{css} When $R$ is a discrete valuation ring, we define a canonical {\em unramified Selmer structure} $\FF_{\mathrm{ur}}$ on $T$ by \begin{itemize} \item $\Sigma(\FF_{\mathrm{ur}}) := \{\mathfrak{q} : \text{$T$ is ramified at $\mathfrak{q}$}\} \cup \{\mathfrak{p} : \mathfrak{p} \mid p\} \cup \{v : v \mid \infty\}$, \item if $\mathfrak{q} \in \Sigma(\FF_{\mathrm{ur}})$ and $\mathfrak{q} \nmid p\infty$ then $$ \HS{\FF_{\mathrm{ur}}}(K_\mathfrak{q},T) := \ker\bigl[H^1(K_\mathfrak{q},T) \to H^1(K_\mathfrak{q}^{\mathrm{ur}},T \otimes \Q_p)\bigr], $$ \item if $\mathfrak{p} \mid p$ then define the universal norm subgroup $$ H^1(K_\mathfrak{p},T)^\mathrm{u} := \cap_{K_\mathfrak{p} \subset L \subset K_\mathfrak{p}^\mathrm{ur}} \mathrm{Cor}_{L/K_\mathfrak{p}}H^1(L,T), $$ intersection over all finite unramified extensions $L$ of $K_\mathfrak{p}$. Define $$ \HS{\FF_{\mathrm{ur}}}(K_\mathfrak{p},T) := H^1(K_\mathfrak{p},T)^{\mathrm{u},\mathrm{sat}}, $$ the saturation of $H^1(K_\mathfrak{p},T)^\mathrm{u}$ in $H^1(K_\mathfrak{p},T)$, i.e., $H^1(K_\mathfrak{p},T)/\HS{\FF_{\mathrm{ur}}}(K_\mathfrak{p},T)$ is $R$-torsion-free and $\HS{\FF_{\mathrm{ur}}}(K_\mathfrak{p},T)/H^1(K_\mathfrak{p},T)^\mathrm{u}$ has finite length, \item if $v \mid \infty$ then $$\HS{\FF_{\mathrm{ur}}}(K_v,T) := H^1(K_v,T).$$ \end{itemize} In other words, $\HS{\FF_{\mathrm{ur}}}(K,T)$ is the Selmer group of classes that (after multiplication by some power of $p$) are unramified away from $p$, and universal norms in the unramified $\Z_p$-extension above $p$. \end{defn} Note that the Selmer structure $\FF_{\mathrm{ur}}$ satisfies \ref{h.5} by Remark \ref{5.2}. 
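To spell out why Remark \ref{5.2} applies here (a routine verification, recorded only for orientation): for $\mathfrak{q} \in \Sigma(\FF_{\mathrm{ur}})$ with $\mathfrak{q} \nmid p\infty$, one checks that
$$
\HS{\FF_{\mathrm{ur}}}(K_\mathfrak{q},T) = \bigl\{c \in H^1(K_\mathfrak{q},T) : p^k c \in \ker\bigl[H^1(K_\mathfrak{q},T) \to H^1(K_\mathfrak{q}^{\mathrm{ur}},T)\bigr] \text{ for some } k \ge 0\bigr\},
$$
the saturation of the unramified subgroup, so $H^1(K_\mathfrak{q},T)/\HS{\FF_{\mathrm{ur}}}(K_\mathfrak{q},T)$ is $R$-torsion-free. The same holds at $\mathfrak{p} \mid p$ because $\HS{\FF_{\mathrm{ur}}}(K_\mathfrak{p},T)$ is defined as a saturation, and trivially at $v \mid \infty$, so the torsion-freeness hypothesis of Remark \ref{5.2} is satisfied at every place of $\Sigma(\FF_{\mathrm{ur}})$.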
\begin{lem} \label{urrel} If $\mathfrak{p}\mid p$ then $\mathrm{corank}_R\HS{\FF_{\mathrm{ur}}^*}(K_\mathfrak{p},T^*) = \mathrm{corank}_RH^0(K_\mathfrak{p},T^*)$. \end{lem} \begin{proof} By the Lemma in \cite[\S2.1.1]{bprheights} (applied to the unramified $\Z_p$-extension of $K_\mathfrak{p}$), $\HS{\FF_{\mathrm{ur}}^*}(K_\mathfrak{p},T^*)$ is the maximal divisible submodule of the image of the (injective) inflation map $$ H^1(K_\mathfrak{p}^\mathrm{ur}/K_\mathfrak{p},(T^*)^{G_{K_\mathfrak{p}^\mathrm{ur}}}) \longrightarrow H^1(K_\mathfrak{p},T^*). $$ We have $$ H^1(K_\mathfrak{p}^\mathrm{ur}/K_\mathfrak{p},(T^*)^{G_{K_\mathfrak{p}^\mathrm{ur}}}) \cong (T^*)^{G_{K_\mathfrak{p}^\mathrm{ur}}}/(\gamma-1)(T^*)^{G_{K_\mathfrak{p}^\mathrm{ur}}} $$ where $\gamma$ is a topological generator of $\mathrm{Gal}(K_\mathfrak{p}^\mathrm{ur}/K_\mathfrak{p})$. Thus we have an exact sequence \begin{multline*} 0 \longrightarrow H^0(K_\mathfrak{p},T^*) \longrightarrow (T^*)^{G_{K_\mathfrak{p}^\mathrm{ur}}} \map{\gamma-1} (T^*)^{G_{K_\mathfrak{p}^\mathrm{ur}}} \\ \longrightarrow H^1(K_\mathfrak{p}^\mathrm{ur}/K_\mathfrak{p},(T^*)^{G_{K_\mathfrak{p}^\mathrm{ur}}}) \longrightarrow 0 \end{multline*} and the lemma follows. \end{proof} \begin{cor} \label{urrelcor} If $\mathfrak{p}\mid p$ and $H^0(K_\mathfrak{p},T^*)$ has finite length, then $\HS{\FF_{\mathrm{ur}}}(K_\mathfrak{p},T) = H^1(K_\mathfrak{p},T)$. \end{cor} \begin{proof} By Lemma \ref{urrel} $\HS{\FF_{\mathrm{ur}}^*}(K_\mathfrak{p},T^*)$ has finite length, so $H^1(K_\mathfrak{p},T)/\HS{\FF_{\mathrm{ur}}}(K_\mathfrak{p},T)$ has finite length. But by definition $H^1(K_\mathfrak{p},T)/\HS{\FF_{\mathrm{ur}}}(K_\mathfrak{p},T)$ is $R$-torsion-free, so $\HS{\FF_{\mathrm{ur}}}(K_\mathfrak{p},T) = H^1(K_\mathfrak{p},T)$. \end{proof} \begin{thm} \label{dvrdm} Suppose $R$ is a discrete valuation ring. Then $$ \chi(T,\FF_{\mathrm{ur}},\mathcal{P}) = \sum_{v \mid \infty} \mathrm{corank}_R(H^0(K_v,T^*)). 
$$ \end{thm} \begin{proof} For every $k > 0$ let $T_k = T/\mathbf{m}^k T$. If $f, g$ are functions of $k \in \mathbf{Z}^+$, we will write $f(k) \sim g(k)$ to mean that $|f(k)-g(k)|$ is bounded independently of $k$. By definition of core rank (see Definition \ref{crd} and Proposition \ref{stprops}(iii)), the theorem will follow if we can show that
\begin{equation} \label{dvd1} \mathrm{length}(\HS{\FF_{\mathrm{ur}}}(K,T_k)) - \mathrm{length}(\HS{\FF_{\mathrm{ur}}^*}(K,T_k^*)) \sim k\sum_{v \mid \infty} \mathrm{corank}_R(H^0(K_v,T^*)). \end{equation}
By \cite[Proposition 2.3.5]{kolysys} (which is essentially \cite[Lemma 1.6]{wiles}), for every $k \in \mathbf{Z}^+$
\begin{multline} \label{wiles} \mathrm{length}(\HS{\FF_{\mathrm{ur}}}(K,T_k)) - \mathrm{length}(\HS{\FF_{\mathrm{ur}}^*}(K,T_k^*)) \\ = \mathrm{length}(H^0(K,T_k)) - \mathrm{length}(H^0(K,T_k^*)) \\ + \sum_{v \in \Sigma(\FF_{\mathrm{ur}})} \ (\mathrm{length}(H^0(K_v,T_k^*)) - \mathrm{length}(\HS{\FF_{\mathrm{ur}}^*}(K_v,T_k^*))). \end{multline}
By hypothesis \ref{h.1}, $H^0(K,T_k) = H^0(K,T_k^*) = 0$. If $v \mid \infty$, then $$ \mathrm{length}(H^0(K_v,T_k^*)) \sim k \;\mathrm{corank}_R(H^0(K_v,T^*)), \quad \mathrm{length}(\HS{\FF_{\mathrm{ur}}^*}(K_v,T_k^*)) \sim 0. $$ Suppose $\mathfrak{q} \in \Sigma(\FF_{\mathrm{ur}})$, $\mathfrak{q} \nmid p\infty$. Let $\mathcal{I}_\mathfrak{q}$ denote an inertia group above $\mathfrak{q}$ in $G_K$. By \cite[Lemma 1.3.5]{EulerSystems}, we have $$ \mathrm{length}(\HS{\FF_{\mathrm{ur}}^*}(K_\mathfrak{q},T_k^*)) \sim \mathrm{length}((T_k^*)^{\mathcal{I}_\mathfrak{q}}/(\Fr_{\l}-1)(T_k^*)^{\mathcal{I}_\mathfrak{q}}).
$$ On the other hand, the exact sequence $$ 0 \to H^0(K_\mathfrak{q},T_k^*) \to (T_k^*)^{\mathcal{I}_\mathfrak{q}} \map{\Fr_{\l}-1} (T_k^*)^{\mathcal{I}_\mathfrak{q}} \to (T_k^*)^{\mathcal{I}_\mathfrak{q}}/(\Fr_{\l}-1)(T_k^*)^{\mathcal{I}_\mathfrak{q}} \to 0 $$ shows that $$ \mathrm{length}(H^0(K_\mathfrak{q},T_k^*)) = \mathrm{length}((T_k^*)^{\mathcal{I}_\mathfrak{q}}/(\Fr_{\l}-1)(T_k^*)^{\mathcal{I}_\mathfrak{q}}). $$ Thus the term for $v=\mathfrak{q}$ in \eqref{wiles} is bounded independently of $k$.
Now suppose $\mathfrak{p} \mid p$. By Lemma \ref{urrel}, $\mathrm{corank}_R\HS{\FF_{\mathrm{ur}}^*}(K_\mathfrak{p},T^*) = \mathrm{corank}_RH^0(K_\mathfrak{p},T^*)$. By definition $\HS{\FF_{\mathrm{ur}}^*}(K_\mathfrak{p},T_k^*)$ is the inverse image of $\HS{\FF_{\mathrm{ur}}^*}(K_\mathfrak{p},T^*)$ under the natural map $H^1(K_\mathfrak{p},T_k^*) \to H^1(K_\mathfrak{p},T^*)[\mathbf{m}^k]$. A simple exercise shows that the kernel and cokernel of this map have length bounded independently of $k$, so we see that $$ \mathrm{length}(\HS{\FF_{\mathrm{ur}}^*}(K_\mathfrak{p},T_k^*)) \sim k \;\mathrm{corank}_R\HS{\FF_{\mathrm{ur}}^*}(K_\mathfrak{p},T^*) = k \;\mathrm{corank}_RH^0(K_\mathfrak{p},T^*). $$ Thus the term for $v=\mathfrak{p}$ in \eqref{wiles} is bounded independently of $k$. Combining these calculations proves \eqref{dvd1}, and hence the theorem. \end{proof}
\subsection{Multiplicative groups} \label{exgm} Suppose $K$ is a number field and $\rho$ is a character of $G_K$ of finite order. For simplicity we will assume that $p > 2$, $\rho$ is nontrivial, and $\rho$ takes values in $\Z_p^\times$. (Everything that follows holds more generally, only assuming that $\rho$ has order prime to $p$, but we would have to tensor everything with the extension $\Z_p[\rho]$ where $\rho$ takes its values.) Let $T := \Z_p(1) \otimes \rho^{-1}$, a free $\Z_p$-module of rank one with $G_K$ acting via the product of $\rho^{-1}$ and the cyclotomic character.
Let $E$ be the cyclic extension of $K$ cut out by $\rho$, i.e., such that $\rho$ factors through an injective homomorphism $\mathrm{Gal}(E/K) \hookrightarrow \Z_p^\times$. Let $$ \mathcal{P} = \{\text{primes $\mathfrak{q}$ of $K$} : \text{$\mathfrak{q} \nmid p$ and $\rho$ is unramified at $\mathfrak{q}$}\}. $$ A simple exercise in Galois cohomology (see for example \cite[\S6.1]{kolysys} or \cite[\S1.6.C]{EulerSystems}) shows that $$ H^1(K,T) \cong (E^\times \otimes \Z_p)^\rho $$ where the superscript $\rho$ means the subgroup on which $\mathrm{Gal}(E/K)$ acts via $\rho$, and for every prime $\mathfrak{q}$, $$ H^1(K_\mathfrak{q},T) \cong (E_\mathfrak{q}^\times \otimes \Z_p)^\rho $$ where $E_\mathfrak{q} = E \otimes_K K_\mathfrak{q}$ is the product of the completions of $E$ above $\mathfrak{q}$. With these identifications, the unramified Selmer structure of Definition \ref{css} is given by $$ \HS{\FF_{\mathrm{ur}}}(K_\mathfrak{q},T) := (\mathcal{O}_{E,\mathfrak{q}}^\times \otimes \Z_p)^\rho $$ for every $\mathfrak{q}$, where $\mathcal{O}_{E,\mathfrak{q}}$ is the ring of integers of $E_\mathfrak{q}$. \begin{prop} \label{13.1} Let $\mathrm{Cl}(E)$ denote the ideal class group of $E$. There are natural isomorphisms $$ \HS{\FF_{\mathrm{ur}}}(K,T) \cong (\mathcal{O}_{E}^\times \otimes \Z_p)^\rho, \quad \HS{\FF_{\mathrm{ur}}^*}(K,T^*) \cong \mathrm{Hom}(\mathrm{Cl}(E)^\rho,\Q_p/\Z_p) $$ and for every $k \ge 0$ an exact sequence $$ 0 \longrightarrow (\mathcal{O}_E^\times/(\mathcal{O}_E^\times)^{p^k})^\rho \longrightarrow \HS{\FF_{\mathrm{ur}}}(K,T/p^kT) \longrightarrow \mathrm{Cl}(E)[p^k]^\rho \longrightarrow 0 $$ and an isomorphism $$ \HS{\FF_{\mathrm{ur}}^*}(K,T^*[p^k]) \cong \mathrm{Hom}(\mathrm{Cl}(E)^\rho,\mathbf{Z}/p^k\mathbf{Z}). $$ \end{prop} \begin{proof} See for example \cite[Proposition 6.1.3]{kolysys}. 
\end{proof} Suppose in addition now that $\rho \ne \omega$, and either $\rho^2 \ne \omega$ or $p > 3$, where $\omega : G_K \to \Z_p^\times$ is the Teichm\"uller character giving the action of $G_K$ on $\boldsymbol\mu_p$. Then conditions \ref{h.1}, \ref{h.3}, and \ref{h.4} of \S\ref{sscr} all hold. By Remark \ref{5.2}, the Selmer structure $\FF_{\mathrm{ur}}$ satisfies \ref{h.5} as well, and condition \ref{h.2} holds with $\tau = 1$ and $L = E$. Finally, if there is at least one real place $v$ of $K$ such that $\rho$ is trivial on complex conjugation at $v$, then the following corollary shows that condition \ref{h.6} holds. \begin{cor} \label{eighttwo} The core rank $\chi(T,\FF_{\mathrm{ur}})$ is $$ \chi(T) = \dim_{\F_p}(\mathcal{O}_E^\times/(\mathcal{O}_E^\times)^{p})^\rho = \mathrm{rank}_{\Z_p}(\mathcal{O}_{E}^\times \otimes \Z_p)^\rho = |\{\text{archimedean $v$} : \rho(\sigma_v) = 1\}| $$ where $\sigma_v \in \mathrm{Gal}(E/K)$ is the complex conjugation at $v$. \end{cor} \begin{proof} The first equality follows from Proposition \ref{13.1} and the definition of core rank, and the second because $\rho \ne \omega$. The third equality is well-known (using that $\rho \ne 1$); see for example \cite[Proposition I.3.4]{tate}. \end{proof} Thus if $E/K$ is an extension of totally real fields and $\rho \ne 1$, then $\chi(T,\FF_{\mathrm{ur}}) = [K:\mathbf{Q}]$ by Corollary \ref{eighttwo}, and all conditions \ref{h.1} through \ref{h.6} are satisfied. If $K = \mathbf{Q}$, then $\chi(T) = 1$, and a Kolyvagin system (see \S\ref{ksp}) can be constructed from the Euler system of cyclotomic units (see \cite{kolysys}). For a general totally real field $K$, if we assume the version of Stark's Conjecture described in \cite{rubin-stark}, then the so-called ``Rubin-Stark'' elements predicted by that conjecture can be used to construct both an Euler system and a Stark system (see \S\ref{Ssec}). For the details and a thorough discussion of this example, see \cite{gen.darmon}. 
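To illustrate Corollary \ref{eighttwo} in the simplest case (a standard computation, included only as an example), take $K = \mathbf{Q}$, let $E = \mathbf{Q}(\sqrt{d})$ with $d > 0$ be real quadratic, and let $\rho$ be the attached nontrivial quadratic character; note that the hypotheses above hold automatically here, since $\rho$ is even while $\omega$ is odd. If $\varepsilon$ is a fundamental unit of $E$ and $\sigma$ is the nontrivial element of $\mathrm{Gal}(E/\mathbf{Q})$, then $\varepsilon\varepsilon^\sigma = N_{E/\mathbf{Q}}\varepsilon = \pm 1$, so $\sigma$ acts by $-1 = \rho(\sigma)$ on $\mathcal{O}_E^\times \otimes \Z_p \cong \Z_p$ (recall $p$ is odd, so the torsion $\{\pm 1\}$ dies), and therefore
$$
(\mathcal{O}_E^\times \otimes \Z_p)^\rho = \mathcal{O}_E^\times \otimes \Z_p \cong \Z_p.
$$
On the archimedean side, $E$ is real, so the complex conjugation $\sigma_v$ at the unique archimedean place of $\mathbf{Q}$ is trivial and $\rho(\sigma_v) = 1$. Both counts give $\chi(T,\FF_{\mathrm{ur}}) = 1$, as Corollary \ref{eighttwo} predicts.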
\subsection{Abelian varieties} Suppose $A$ is an abelian variety of dimension $d$ defined over the number field $K$. Let $$ \mathcal{P} = \{\text{primes $\mathfrak{q}$ of $K$} : \text{$\mathfrak{q} \nmid p$ and $A$ has good reduction at $\mathfrak{q}$}\}. $$ Let $T$ be the Tate module $T_p(A) := \varprojlim A[p^k]$. Then $T$ is a free $\Z_p$-module of rank $2d$ with a natural action of $G_K$, and $T^* = \check{A}[p^\infty]$ where $\check{A}$ is the dual abelian variety to $A$.
Let $\mathcal{F}$ be the Selmer structure on $T$ given by $ \HS{\FF}(K_v,T) = H^1(K_v,T) $ for every $v$. Then $\mathcal{F}$ is the unramified Selmer structure $\FF_{\mathrm{ur}}$ given by Definition \ref{css}. (For $v$ dividing $p$, this follows from the Lemma in \cite[\S2.1.1]{bprheights}, and for $v$ not dividing $p$ it follows from the fact that $H^1(K_v,T)$ is finite.) Further, $\mathcal{F}$ is the usual Selmer structure attached to an abelian variety, with the local conditions at primes above $p$ relaxed (see for example \cite[\S 1.6.4]{EulerSystems}). Hence we have an exact sequence $$ 0 \longrightarrow \HS{\FF^*}(K,T^*) \longrightarrow {\mathrm{Sel}}_{p^\infty}(\check{A}/K) \longrightarrow \oplus_{\mathfrak{p} \mid p} H^1(K_\mathfrak{p},\check{A}[p^\infty]). $$
Suppose now that $p > 3$, and that the image of $G_K$ in $\mathrm{Aut}(A[p]) \cong {\mathrm{GL}}_{2d}(\F_p)$ is large enough so that conditions \ref{h.1}, \ref{h.2}, and \ref{h.3} of \S\ref{sscr} all hold. For example, this will be true if the image of $G_K$ contains ${\mathrm{GSp}}_{2d}(\F_p)$. Condition \ref{h.4} holds since $p>3$, and $\mathcal{F}$ satisfies \ref{h.5} by Remark \ref{5.2}. The following consequence of Theorem \ref{dvrdm} shows that condition \ref{h.6} holds as well.
\begin{prop} \label{13.3} The core rank of $T$ is given by $ \chi(T) = d\;[K:\mathbf{Q}] . $ \end{prop} \begin{proof} By Theorem \ref{dvrdm}, we have $$ \chi(T) = \sum_{v \mid \infty} \mathrm{corank}_{\Z_p} H^0(K_v,\check{A}[p^\infty]).
$$ If $v$ is a real place, then $\mathrm{corank}_{\Z_p} H^0(K_v,\check A[p^\infty]) = d$, and if $v$ is a complex place then $\mathrm{corank}_{\Z_p} H^0(K_v,\check A[p^\infty]) = \mathrm{corank}_{\Z_p} \check A[p^\infty] = 2d$. Thus $$ \sum_{v \mid \infty} \mathrm{corank}_{\Z_p} H^0(K_v,\check A[p^\infty]) = \sum_{v \mid \infty} d\;[K_v:\mathbf{R}] = d\;[K:\mathbf{Q}]. $$ \end{proof}
If $K = \mathbf{Q}$ and $d = 1$ (i.e., $A$ is an elliptic curve), then Proposition \ref{13.3} shows that $\chi(T) = 1$. In this case Kato has constructed an Euler system for $T$, from which one can produce a Kolyvagin system (\cite[Theorem 3.2.4]{kolysys}).
\part{Stark systems and the structure of Selmer groups} \label{part2} \section{Stark systems} \label{Ssec} Suppose for this section that $R$ is a principal artinian local ring of length $k$, so $\mathbf{m}^k = 0$ and $\mathbf{m}^{k-1} \ne 0$. Fix Selmer data $(T,\mathcal{F},\mathcal{P},r)$ as in Definition \ref{sddef}. We assume throughout this section that \ref{h.7} of \S\ref{sscr} holds, i.e., $I_\l = 0$ for every $\mathfrak{q}\in\mathcal{P}$. Recall that $\nu(\mathfrak{n})$ denotes the number of prime factors of $\mathfrak{n}$.
\begin{defn} \label{yndef} For every $\mathfrak{n} \in \mathcal{N}$, define \begin{align*} \W{\mathfrak{n}} &:= \oplus_{\mathfrak{q}\mid\mathfrak{n}} \mathrm{Hom}(\HS{\tr}(K_{\mathfrak{q}},T),R), \\ \Y{\mathfrak{n}} &:= \wedge^{r+\nu(\mathfrak{n})}\HS{\mathcal{F}^{\mathfrak{n}}}(K,T) \otimes \wedge^{\nu(\mathfrak{n})}\W{\mathfrak{n}}, \end{align*} where as usual the exterior powers are taken in the category of $R$-modules. \end{defn}
Then $\W{\mathfrak{n}}$ is a free $R$-module of rank $\nu(\mathfrak{n})$, since each $\HS{\tr}(K_{\mathfrak{q}},T)$ is free of rank one (Lemma \ref{applemma}).
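For example (unwinding the definition in the simplest case): for the trivial ideal $\mathfrak{n} = 1$ we have $\nu(1) = 0$ and $\W{1} = 0$, so $\wedge^0\W{1} = R$ and
$$
\Y{1} = \wedge^{r}\HS{\FF}(K,T),
$$
the $r$-th exterior power of the Selmer group itself.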
If we fix an ordering $\mathfrak{n} = \mathfrak{q}_1 \cdots \mathfrak{q}_{\nu(\mathfrak{n})}$ of the primes dividing $\mathfrak{n}$, and a generator $h_i$ of $\mathrm{Hom}(\HS{\tr}(K_{\mathfrak{q}_i},T),R)$ for every $i$, then $h_1\wedge\cdots\wedge h_{\nu(\mathfrak{n})}$ is a generator of the free, rank-one $R$-module $\wedge^{\nu(\mathfrak{n})}\W{\mathfrak{n}}$. For the structure of $\Y{\mathfrak{n}}$ when $r$ is the core rank of $T$, see Lemma \ref{ylem} below.
\begin{defn} \label{piqdef} For every $\mathfrak{q} \in \mathcal{P}$, define the transverse localization map $$ \mathrm{loc}_{\mathfrak{q}}^{{\mathrm{tr}}} : H^1(K,T) \map{\mathrm{loc}_\mathfrak{q}} H^1(K_\mathfrak{q},T) \twoheadrightarrow \HS{\tr}(K_\mathfrak{q},T), $$ where the second map is projection (using the direct sum decomposition of Lemma \ref{isofunct}(i)) with kernel $\HS{\f}(K_\mathfrak{q},T)$. If $\mathfrak{n} \in \mathcal{N}$ and $\mathfrak{q}\mid\mathfrak{n}$, then \begin{equation} \label{gi} \ker \bigl(\mathrm{loc}_{\mathfrak{q}}^{{\mathrm{tr}}} \bigl| \HS{\mathcal{F}^{\mathfrak{n}}}(K,T) \bigr) = \HS{\mathcal{F}^{\mathfrak{n}/\mathfrak{q}}}(K,T). \end{equation}
In exactly the same way, we can define a map $\mathrm{loc}_\mathfrak{q}^{\mathrm{f}}$ by using the finite projection and the isomorphism $\fs_{\l}$ of Definition \ref{fsmap} $$ \mathrm{loc}_\mathfrak{q}^{\mathrm{f}} : H^1(K,T) \map{\mathrm{loc}_\mathfrak{q}} H^1(K_\mathfrak{q},T) \twoheadrightarrow \HS{\f}(K_\mathfrak{q},T) \map{\fs_{\l}} \HS{\tr}(K_\mathfrak{q},T) \otimes \mathrm{Gal}(K(\mathfrak{q})_\mathfrak{q}/K_\mathfrak{q}), $$ and then \begin{equation} \label{gi2} \ker \bigl(\mathrm{loc}_{\mathfrak{q}}^{\mathrm{f}} \bigl| \HS{\mathcal{F}^{\mathfrak{n}}}(K,T) \bigr) = \HS{\mathcal{F}^{\mathfrak{n}/\mathfrak{q}}(\mathfrak{q})}(K,T). \end{equation} \end{defn}
\begin{defn} \label{4.3} Suppose $\mathfrak{n} \in \mathcal{N}$ and $\mathfrak{m} \mid \mathfrak{n}$.
By \eqref{gi} we have an exact sequence $$ 0 \longrightarrow \HS{\mathcal{F}^\mathfrak{m}}(K,T) \longrightarrow \HS{\mathcal{F}^\mathfrak{n}}(K,T) \map{\oplus \mathrm{loc}_\mathfrak{q}^{\mathrm{tr}}} \dirsum{\mathfrak{q}\mid(\mathfrak{n}/\mathfrak{m})}\HS{\tr}(K_\mathfrak{q},T) $$ and it follows that the square \begin{equation} \label{cd} \raisebox{22pt} {\xymatrix@R=15pt{ ~\HS{\mathcal{F}^\mathfrak{m}}(K,T)~ \ar@{^(->}[r]\ar_{\oplus \mathrm{loc}_\mathfrak{q}^{\mathrm{tr}}}[d] & ~\HS{\mathcal{F}^\mathfrak{n}}(K,T)~ \ar^{\oplus \mathrm{loc}_\mathfrak{q}^{\mathrm{tr}}}[d] \\ ~\dirsum{\mathfrak{q}\mid\mathfrak{m}}\HS{\tr}(K_\mathfrak{q},T)~ \ar@{^(->}[r] & ~\dirsum{\mathfrak{q}\mid\mathfrak{n}}\HS{\tr}(K_\mathfrak{q},T)~ }} \end{equation} is cartesian. Let $$ \Psi_{\mathfrak{n},\mathfrak{m}} : \Y{\mathfrak{n}} \longrightarrow \Y{\mathfrak{m}} $$ be the map of Proposition \ref{bhprop}(i) attached to this diagram. Concretely, $\Psi_{\mathfrak{n},\mathfrak{m}}$ is given as follows. Fix a factorization $\mathfrak{n} = \mathfrak{q}_1 \cdots \mathfrak{q}_t$, with $\mathfrak{m} = \mathfrak{q}_1 \cdots \mathfrak{q}_s$, and a generator $h_i$ of $\mathrm{Hom}(\HS{\tr}(K_{\mathfrak{q}_i},T),R)$ for every $i$. Let $\mathfrak{n}_i = \prod_{j \le i}\mathfrak{q}_j$. 
These choices lead to a map $$ \widehat{h_{s+1} \circ \mathrm{loc}_{\mathfrak{q}_{s+1}}^{{\mathrm{tr}}}} \circ \cdots \circ \widehat{h_{t} \circ \mathrm{loc}_{\mathfrak{q}_{t}}^{{\mathrm{tr}}}} : \wedge^{r+t}\HS{\mathcal{F}^{\mathfrak{n}}}(K,T) \longrightarrow \wedge^{r+s} \HS{\mathcal{F}^{\mathfrak{m}}}(K,T) $$ (where $\widehat{h_{i} \circ \mathrm{loc}_{\mathfrak{q}_{i}}^{{\mathrm{tr}}}} : \wedge^{r+i}\HS{\mathcal{F}^{\mathfrak{n}_i}}(K,T) \to \wedge^{r+i-1}\HS{\mathcal{F}^{\mathfrak{n}_{i-1}}}(K,T)$ is given by Proposition \ref{wedgemap}) and an isomorphism $\wedge^{\nu(\mathfrak{n})}\W{\mathfrak{n}} \map{\sim} \wedge^{\nu(\mathfrak{m})}\W{\mathfrak{m}}$ given by $h_1\wedge\cdots\wedge h_{t} \mapsto h_1\wedge\cdots\wedge h_{s}$. The tensor product of these two maps is the map $\Psi_{\mathfrak{n},\mathfrak{m}} : \Y{\mathfrak{n}} \longrightarrow \Y{\mathfrak{m}}$, and is independent of the choices made. \end{defn}
\begin{prop} \label{premagic} Suppose $\mathfrak{n} \in \mathcal{N}$, $\mathfrak{n}' \mid \mathfrak{n}$, and $\mathfrak{n}'' \mid \mathfrak{n}'$. Then $\Psi_{\mathfrak{n}',\mathfrak{n}''} \circ \Psi_{\mathfrak{n},\mathfrak{n}'} = \Psi_{\mathfrak{n},\mathfrak{n}''}$. \end{prop} \begin{proof} This is Proposition \ref{bhprop}(iii). \end{proof}
\begin{defn} \label{4.6} Thanks to Proposition \ref{premagic}, we can define the $R$-module $\mathbf{SS}_r(T) = \mathbf{SS}_r(T,\mathcal{F},\mathcal{P})$ of {\em Stark systems of rank $r$} to be the inverse limit $$ \mathbf{SS}_r(T) := \varprojlim_{\mathfrak{n}\in\mathcal{N}} \Y{\mathfrak{n}} $$ with respect to the maps $\Psi_{\mathfrak{n},\mathfrak{m}}$. \end{defn}
We call these collections Stark systems because a fundamental example is given by elements predicted by a generalized Stark conjecture \cite{gen.darmon, rubin-stark}. Let $\Y{\mathfrak{n}}' = \mathbf{m}^{\mathrm{length}(\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*))}\Y{\mathfrak{n}}$.
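To make the maps $\Psi_{\mathfrak{n},\mathfrak{m}}$ of Definition \ref{4.3} concrete in the smallest case (the signs below depend on the conventions of Proposition \ref{wedgemap}): if $\mathfrak{n} = \mathfrak{q}$ is prime and $\mathfrak{m} = 1$, then a generator $h$ of $\mathrm{Hom}(\HS{\tr}(K_\mathfrak{q},T),R)$ identifies $\Y{\mathfrak{q}}$ with $\wedge^{r+1}\HS{\mathcal{F}^{\mathfrak{q}}}(K,T)$, and $\Psi_{\mathfrak{q},1}$ becomes the interior product
$$
c_1 \wedge \cdots \wedge c_{r+1} \longmapsto \sum_{i=1}^{r+1} (-1)^{i+1}\bigl(h \circ \mathrm{loc}_{\mathfrak{q}}^{{\mathrm{tr}}}\bigr)(c_i)\; c_1 \wedge \cdots \wedge \widehat{c_i} \wedge \cdots \wedge c_{r+1},
$$
which one checks takes values in $\wedge^{r}\HS{\FF}(K,T) = \Y{1}$ using \eqref{gi}.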
\begin{lem} \label{ylem} Suppose that hypotheses {\rm\ref{h.1}} through {\rm\ref{h.7}} of \S\ref{sscr} are satisfied, so in particular $r$ is the core rank of $T$. Then: \begin{enumerate} \item $\Y{\mathfrak{n}}'$ is a cyclic $R$-module of length $\max\{k - \mathrm{length}(\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*)),0\}$. \item There are $\mathfrak{n}\in\mathcal{N}$ such that $\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*) = 0$. \item If $\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*) = 0$ then $\Y{\mathfrak{n}}$ is free of rank one over $R$. \item If $\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*) = 0$ and $\mathfrak{m}\mid\mathfrak{n}$, then $ \Psi_{\mathfrak{n},\mathfrak{m}}(\Y{\mathfrak{n}}) = \Y{\mathfrak{m}}'. $ \end{enumerate} \end{lem}
\begin{proof} Assertions (i) and (iii) follow directly from Corollary \ref{threefive}(iv). Since $\HS{\FF^*}(K,T^*)$ is finite, we can choose generators $c_1, \ldots, c_t$ of $\HS{\FF^*}(K,T^*)[\mathbf{m}]$. For each $i$, use \cite[Proposition 3.6.1]{kolysys} to choose $\mathfrak{q}_i \in \mathcal{P}$ such that $\mathrm{loc}_{\mathfrak{q}_i}(c_i) \ne 0$, and let $\mathfrak{n} = \prod_i \mathfrak{q}_i$. Then $\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*) = 0$, so (ii) holds.
Proposition \ref{bhprop}(ii) applied to the diagram \eqref{cd} shows that $$ \Psi_{\mathfrak{n},\mathfrak{m}}(\Y{\mathfrak{n}}) = \mathbf{m}^{\mathrm{length}(\HS{\mathcal{F}^\mathfrak{m}}(K,T))-(r+\nu(\mathfrak{m}))k}\Y{\mathfrak{m}}. $$ Corollary \ref{threefive}(ii) shows that $$ \mathrm{length}(\HS{\mathcal{F}^\mathfrak{m}}(K,T))-(r+\nu(\mathfrak{m}))k = \mathrm{length}(\HS{(\mathcal{F}^*)_{\mathfrak{m}}}(K,T^*)) $$ which proves (iv). \end{proof}
\begin{thm} \label{magic} Suppose that hypotheses {\rm\ref{h.1}} through {\rm\ref{h.7}} of \S\ref{sscr} are satisfied. Then the $R$-module $\mathbf{SS}_r(T)$ is free of rank one, and for every $\mathfrak{n}\in\mathcal{N}$, the image of the projection map $\mathbf{SS}_r(T) \to \Y{\mathfrak{n}}$ is $\Y{\mathfrak{n}}'$.
\end{thm} \begin{proof} Using Lemma \ref{ylem}(ii), choose a $\mathfrak{d} \in \mathcal{N}$ such that $\HS{(\mathcal{F}^*)_{\mathfrak{d}}}(K,T^*) = 0$. Then $\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*) = 0$ for every $\mathfrak{n} \in \mathcal{N}$ divisible by $\mathfrak{d}$. Now the theorem follows from Lemma \ref{ylem}(iv). \end{proof}
\section{Stark systems over discrete valuation rings} For this section we assume that $R$ is a discrete valuation ring, and we fix Selmer data $(T,\mathcal{F},\mathcal{P},r)$ as in Definition \ref{sddef}. We assume throughout this section that hypotheses \ref{h.1} through \ref{h.6} of \S\ref{sscr} are satisfied. For $k > 0$ recall from Definition \ref{pndef} that $$ \PP_k := \{\mathfrak{q} \in \mathcal{P} : I_\l \subset \mathbf{m}^k\}, $$ and $\NN_k$ is the set of squarefree products of primes in $\PP_k$. By Remark \ref{5.7}, the Selmer data $(T/\mathbf{m}^kT,\mathcal{F},\PP_k,r)$ satisfies \ref{h.1} through \ref{h.7} over the ring $R/\mathbf{m}^k$. In this section we will define the module $\mathbf{SS}_r(T)$ of Stark systems of rank $r$ for $T$, and use the results of \S\ref{Ssec} about $\mathbf{SS}_r(T/\mathbf{m}^kT)$ to study $\mathbf{SS}_r(T)$.
\begin{defn} \label{yndef2} For every $\mathfrak{n} \in \mathcal{N}$, define \begin{align*} \W{\mathfrak{n}} &:= \oplus_{\mathfrak{q}\mid\mathfrak{n}} \mathrm{Hom}(\HS{\tr}(K_{\mathfrak{q}},T/I_\n T),R/I_\n), \\ \Y{\mathfrak{n}} &:= \wedge^{r+\nu(\mathfrak{n})}\HS{\mathcal{F}^{\mathfrak{n}}}(K,T/I_\n T) \otimes \wedge^{\nu(\mathfrak{n})}\W{\mathfrak{n}}, \\ \Y{\mathfrak{n}}' &:= \mathbf{m}^{\mathrm{length}(\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*[I_\n]))}\Y{\mathfrak{n}}.
\end{align*} A {\em Stark system of rank $r$} for $T$ (more precisely, for $(T,\mathcal{F},\mathcal{P})$) is a collection $\{\epsilon_\mathfrak{n} \in \Y{\mathfrak{n}} : \mathfrak{n}\in\mathcal{N}\}$ such that if $\mathfrak{n}\in\mathcal{N}$ and $\mathfrak{m} \mid \mathfrak{n}$, then $$ \Psi_{\mathfrak{n},\mathfrak{m}}(\epsilon_\mathfrak{n}) = \bar\epsilon_\mathfrak{m} $$ where $\bar\epsilon_\mathfrak{m}$ is the image of $\epsilon_\mathfrak{m}$ in $\Y{\mathfrak{m}} \otimes R/I_\n$, and $\Psi_{\mathfrak{n},\mathfrak{m}} : \Y{\mathfrak{n}} \to \Y{\mathfrak{m}} \otimes R/I_\n$ is the map of Definition \ref{4.3} applied to $T/I_\n T$ and $R/I_\n$. Denote by $\mathbf{SS}_r(T) = \mathbf{SS}_r(T,\mathcal{F},\mathcal{P})$ the $R$-module of Stark systems for $T$. \end{defn} \begin{lem} \label{surjity} If $j \le k$, then the projection map $T/\mathbf{m}^k T \to T/\mathbf{m}^j T$ and restriction to $\PP_k$ induce a surjection and an isomorphism, respectively $$ \xymatrix@C=17pt{ \mathbf{SS}_r(T/\mathbf{m}^k T,\PP_k) \ar@{->>}[r] & \mathbf{SS}_r(T/\mathbf{m}^j T,\PP_k) & \ar_-{\;\;\sim}[l] \mathbf{SS}_r(T/\mathbf{m}^j T,\mathcal{P}_j) } $$ \end{lem} \begin{proof} Let $\mathfrak{n} \in \NN_k$ be such that $\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*[\mathbf{m}]) = 0$; such $\mathfrak{n}$ exists by Lemma \ref{ylem}(ii). Then by Theorem \ref{magic}, projecting to $\Y{\mathfrak{n}}$ gives a commutative diagram with vertical isomorphisms $$ \xymatrix{ \mathbf{SS}_r(T/\mathbf{m}^k T,\PP_k) \ar[r]\ar^{\cong}[d] & \mathbf{SS}_r(T/\mathbf{m}^j T,\PP_k) \ar^{\cong}[d] & \ar[l]\ar^{\cong}[d] \mathbf{SS}_r(T/\mathbf{m}^j T,\mathcal{P}_j) \\ \Y{\mathfrak{n}} \otimes R/\mathbf{m}^k \ar@{->>}[r] & \Y{\mathfrak{n}} \otimes R/\mathbf{m}^j & \Y{\mathfrak{n}} \otimes R/\mathbf{m}^j \ar_{=}[l] } $$ Since the bottom maps are a surjection and an isomorphism, so are the top ones.
\end{proof} \begin{prop} \label{betprp} The natural maps $T \twoheadrightarrow T/\mathbf{m}^k$ and $\PP_k \hookrightarrow \mathcal{P}$ induce an isomorphism $$ \mathbf{SS}_r(T,\mathcal{P}) \map{\;\sim\;} \varprojlim \mathbf{SS}_r(T/\mathbf{m}^k T,\PP_k) $$ where the inverse limit is with respect to the maps of Lemma \ref{surjity}. \end{prop} \begin{proof} Suppose $\boldsymbol{\epsilon} \in \mathbf{SS}_r(T)$ is nonzero. Then we can find an $\mathfrak{n}$ such that $\epsilon_\mathfrak{n} \ne 0$ in $\Y{\mathfrak{n}}$. If $\mathfrak{n} \ne 1$ then $I_\n \ne 0$, and we let $k$ be such that $\mathbf{m}^k = I_\n$. If $\mathfrak{n} = 1$ choose $k$ so that $\epsilon_1 \ne 0$ in $\wedge^r\HS{\FF}(K,T/\mathbf{m}^{k} T)$. In either case $I_\n \subset \mathbf{m}^k$, and the image of $\boldsymbol{\epsilon}$ in $\mathbf{SS}_r(T/\mathbf{m}^k T,\PP_k)$ is nonzero. Thus the map in the proposition is injective. Now suppose $\{\ssyi{k}\} \in \varprojlim \mathbf{SS}_r(T/\mathbf{m}^k T,\PP_k)$. If $\mathfrak{n} \in \mathcal{N}$ and $\mathfrak{n} \ne 1$, let $j$ be such that $I_\n = \mathbf{m}^j$ and define $$ \epsilon_{\mathfrak{n}} := \kapi{j}_{\mathfrak{n}} \in \Y{\mathfrak{n}}. $$ If $\mathfrak{n} = 1$, define $$ \epsilon_1 = \varprojlim_k \kapi{k}_{1} \in \varprojlim_k\wedge^r\HS{\FF}(K,T/\mathbf{m}^kT) = \wedge^r\HS{\FF}(K,T) = \Y{1}. $$ It is straightforward to verify that this defines an element $\boldsymbol{\epsilon} \in \mathbf{SS}_r(T,\mathcal{P})$ that maps to $\ssyi{k} \in \mathbf{SS}_r(T/\mathbf{m}^k T,\PP_k)$ for every $k$. Thus the map in the proposition is surjective as well. \end{proof} \begin{thm} \label{rankdvr} Suppose $R$ is a discrete valuation ring and hypotheses \ref{h.1} through \ref{h.6} hold. Then the $R$-module of Stark systems of rank $r$, $\mathbf{SS}_r(T,\mathcal{P})$, is free of rank one, generated by a Stark system $\boldsymbol{\epsilon}$ whose image in $\mathbf{SS}_r(T/\mathbf{m} T,\mathcal{P})$ is nonzero.
The map $\mathbf{SS}_r(T,\mathcal{P}) \to \mathbf{SS}_r(T/\mathbf{m}^k,\PP_k)$ is surjective for every $k$. \end{thm} \begin{proof} By Theorem \ref{magic}, $\mathbf{SS}_r(T/\mathbf{m}^kT,\PP_k)$ is free of rank one over $R/\mathbf{m}^k$ for every $k$. The maps $\mathbf{SS}_r(T/\mathbf{m}^{k+1}T,\mathcal{P}_{k+1}) \to \mathbf{SS}_r(T/\mathbf{m}^kT,\PP_k)$ are surjective by Lemma \ref{surjity}, so the theorem follows from Proposition \ref{betprp}. \end{proof} \section{Structure of the dual Selmer group} \label{extra} In this section $R$ is either a principal artinian local ring or a discrete valuation ring. We let $k := \mathrm{length}(R)$, so $k$ is finite in the artinian case and $k = \infty$ in the discrete valuation ring case. Fix Selmer data $(T,\mathcal{F},\mathcal{P},r)$. We continue to assume that hypotheses \ref{h.1} through \ref{h.6} are satisfied, and if $R$ is artinian we assume that \ref{h.7} is satisfied as well. Recall that if $\mathfrak{n}\in\mathcal{N}$ then $\nu(\mathfrak{n})$ denotes the number of prime divisors of $\mathfrak{n}$. \begin{defn} \label{allde} Define functions $\mu, \lambda, \varphi_{\boldsymbol{\epsilon}} \in {\mathrm{Maps}}(\mathcal{N},\mathbf{Z}_{\ge 0}\cup\{\infty\})$ \begin{itemize} \item $\mu(\mathfrak{n}) = \mathrm{length}(\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*))$, \item $\lambda(\mathfrak{n}) = \mathrm{length}(\HS{\FFf{\n}^*}(K,T^*))$, \end{itemize} and if $\boldsymbol{\epsilon} \in \mathbf{SS}_r(T)$ is a Stark system \begin{itemize} \item $\varphi_{\boldsymbol{\epsilon}}(\mathfrak{n}) = \max\{j : \epsilon_\mathfrak{n} \in \mathbf{m}^j\Y{\mathfrak{n}}\}$. \end{itemize} Define $\partial : {\mathrm{Maps}}(\mathcal{N},\mathbf{Z}_{\ge 0}\cup\{\infty\}) \to {\mathrm{Maps}}(\mathbf{Z}_{\ge 0},\mathbf{Z}_{\ge 0}\cup\{\infty\})$ by $$ \partial f(i) = \min\{f(\mathfrak{n}) : \text{$\mathfrak{n} \in \mathcal{N}$ and $\nu(\mathfrak{n}) = i$}\}. 
$$ \end{defn} \begin{defn} \label{allder} The {\em order of vanishing} of a nonzero Stark system $\boldsymbol{\epsilon} \in \mathbf{SS}_r(T)$ is $$ \ordv{\boldsymbol{\epsilon}} := \min\{\nu(\mathfrak{n}) : \mathfrak{n} \in \mathcal{N}, \epsilon_\mathfrak{n} \ne 0\} = \min\{i : \partial\varphi_{\boldsymbol{\epsilon}}(i) \ne \infty\}. $$ We say $\boldsymbol{\epsilon} \in \mathbf{SS}_r(T)$ is {\em primitive} if its image in $\mathbf{SS}_r(T/\mathbf{m} T)$ is nonzero. We also define the sequence of {\em elementary divisors} $$ d_{\boldsymbol{\epsilon}}(i) := \partial\varphi_{\boldsymbol{\epsilon}}(i) - \partial\varphi_{\boldsymbol{\epsilon}}(i+1), \quad i \ge \ordv{\boldsymbol{\epsilon}}. $$ \end{defn} Note that $\partial\varphi_{\boldsymbol{\epsilon}}(i) = \infty$ if $i < \ordv{\boldsymbol{\epsilon}}$; Theorems \ref{struc} and \ref{korank} below show that the converse is true as well, so the $d_{\boldsymbol{\epsilon}}(i)$ are well-defined and finite. \begin{prop} \label{6.12} Suppose $R$ is artinian, and $\HS{\FF^*}(K,T^*) \cong \oplus_{i\ge 1} R/\mathbf{m}^{e_i}$ with $e_1 \ge e_2 \ge \cdots$. Then for every $t \ge 0$, $$ \partial\lambda(t) = \partial\mu(t) = \sum_{i > t} e_i. $$ \end{prop} \begin{proof} Suppose $\mathfrak{n} \in \mathcal{N}$ and $\nu(\mathfrak{n}) = t$. Consider the map $$ \HS{\FF^*}(K,T^*) \longrightarrow \dirsum{\mathfrak{q} \mid \mathfrak{n}} \HS{\f}(K_\mathfrak{q},T^*). $$ The right-hand side is free of rank $t$ over $R$, and $R$ is principal, so the image is a quotient of $\HS{\FF^*}(K,T^*)$ generated by (at most) $t$ elements. Hence the image has length at most $\sum_{i \le t} e_i$, so the kernel has length at least $\sum_{i>t} e_i$. But by definition this kernel is $\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*)$, which is contained in $\HS{\FFf{\n}^*}(K,T^*)$, so \begin{equation} \label{eq:3} \lambda(\mathfrak{n}) \ge \mu(\mathfrak{n}) \ge \sum_{i>t} e_i. 
\end{equation} We will prove by induction on $t$ that $\mathfrak{n}$ can be chosen so that $\nu(\mathfrak{n}) = t$ and $\HS{\FFf{\n}^*}(K,T^*) \cong \oplus_{i>t} R/\mathbf{m}^{e_i}$. For such an $\mathfrak{n}$ equality holds in \eqref{eq:3}, and the proposition follows. When $t = 0$ we can just take $\mathfrak{n} = 1$. Suppose we have an $\mathfrak{n}$ with $\nu(\mathfrak{n}) = t-1$ and $\HS{\FFf{\n}^*}(K,T^*) \cong \oplus_{i>t-1} R/\mathbf{m}^{e_i}$. Since $\chi(T) > 0$, Corollary \ref{threefive} shows that $\mathbf{m}^{k-1}\HS{\FFf{\n}}(K,T) \ne 0$. Fix a nonzero element $c \in \mathbf{m}^{k-1}\HS{\FFf{\n}}(K,T) \subset \HS{\FFf{\n}}(K,T)[\mathbf{m}]$. If $e_t > 0$ then choose a nonzero element $c' \in \mathbf{m}^{e_t-1}\HS{\FFf{\n}^*}(K,T^*) \subset \HS{\FFf{\n}^*}(K,T^*)[\mathbf{m}]$. By \cite[Proposition 3.6.1]{kolysys} we can use the Cebotarev theorem to choose a prime $\mathfrak{q} \in \mathcal{P}$ such that the localization $\mathrm{loc}_\mathfrak{q}(c) \ne 0$ and, if $e_t > 0$, such that $\mathrm{loc}_\mathfrak{q}(c') \ne 0$ as well. Since $\HS{\f}(K_\mathfrak{q},T)$ is free of rank one over $R$, and (by our choice of $\mathfrak{q}$) the localization of $\mathbf{m}^{k-1}\HS{\FFf{\n}}(K,T)$ at $\mathfrak{q}$ is nonzero, it follows that the localization map $\HS{\FFf{\n}}(K,T) \to \HS{\f}(K_\mathfrak{q},T)$ is surjective. Similarly, we have that $\HS{\mathcal{F}(\mathfrak{n})^*}(K,T^*)$ has exponent $\mathbf{m}^{e_t}$, and if $e_t > 0$ then the localization of $\mathbf{m}^{e_t-1}\HS{\FFf{\n}^*}(K,T^*)$ at $\mathfrak{q}$ is nonzero, so $$ \HS{\mathcal{F}(\mathfrak{n})^*}(K,T^*)/\HS{\mathcal{F}^\mathfrak{q}(\mathfrak{n})^*}(K,T^*) \cong \mathrm{loc}_\mathfrak{q}(\HS{\mathcal{F}(\mathfrak{n})^*}(K,T^*)) \cong R/\mathbf{m}^{e_t} $$ and therefore $\HS{\mathcal{F}^\mathfrak{q}(\mathfrak{n})^*}(K,T^*) \cong \oplus_{i > t} R/\mathbf{m}^{e_i}$.
By \cite[Theorem 4.1.7(ii)]{kolysys} we have $\HS{\FFf{\n\l}^*}(K,T^*) = \HS{\mathcal{F}^\mathfrak{q}(\mathfrak{n})^*}(K,T^*)$, so $\mathfrak{n}\mathfrak{q} \in \mathcal{N}$ has the desired property. \end{proof} \begin{prop} \label{struclem} Suppose $R$ is artinian of length $k$, and $\boldsymbol{\epsilon} \in \mathbf{SS}_r(T)$. Fix $s \ge 0$ such that $\boldsymbol{\epsilon}$ generates $\mathbf{m}^s \mathbf{SS}_r(T)$, and nonnegative integers $e_1 \ge e_2 \ge \cdots$ such that $$ \HS{\FF^*}(K,T^*) \cong \oplus_i R/\mathbf{m}^{e_i}. $$ Then for every $t \ge 0$, $$ \partial\varphi_{\boldsymbol{\epsilon}}(t) = \begin{cases} s+\sum_{i>t} e_i & \text{if $s+\sum_{i>t} e_i < k$}, \\ \infty & \text{if $s+\sum_{i>t} e_i \ge k$}. \end{cases} $$ \end{prop} \begin{proof} It is enough to prove the proposition when $s = 0$: writing $\boldsymbol{\epsilon} = u\pi^s\boldsymbol{\epsilon}_0$ with $u \in R^\times$, $\pi$ a generator of $\mathbf{m}$, and $\boldsymbol{\epsilon}_0$ a generator of $\mathbf{SS}_r(T)$, the general case follows by applying the case $s = 0$ to $\boldsymbol{\epsilon}_0$. So we may assume that $\boldsymbol{\epsilon}$ generates $\mathbf{SS}_r(T)$. By Theorem \ref{magic} and Lemma \ref{ylem}(i), we have that $\epsilon_\mathfrak{n}$ generates $\Y{\mathfrak{n}}' = \mathbf{m}^{\mu(\mathfrak{n})}\Y{\mathfrak{n}}$, which is cyclic of length $\max\{k-\mu(\mathfrak{n}),0\}$. Hence $\epsilon_\mathfrak{n} \in \mathbf{m}^{\mu(\mathfrak{n})}\Y{\mathfrak{n}}$, and $\epsilon_\mathfrak{n} \in \mathbf{m}^{\mu(\mathfrak{n})+1}\Y{\mathfrak{n}}$ if and only if $\mu(\mathfrak{n}) \ge k$. Therefore $$ \partial\varphi_{\boldsymbol{\epsilon}}(t) = \begin{cases} \partial\mu(t) & \text{if $\partial\mu(t) < k$}, \\ \infty & \text{if $\partial\mu(t) \ge k$}. \end{cases} $$ Now the proposition follows from the calculation of $\partial\mu(t)$ in Proposition \ref{6.12}. \end{proof} \begin{thm} \label{struc} Suppose $R$ is artinian, $\boldsymbol{\epsilon} \in \mathbf{SS}_r(T)$, and $\epsilon_1 \ne 0$.
Then $$ \renewcommand{\arraystretch}{1.4} \arraycolsep=2pt \begin{array}{ccccccccc} \partial\varphi_{\boldsymbol{\epsilon}}(0) & \ge & \partial\varphi_{\boldsymbol{\epsilon}}(1) & \ge & \partial\varphi_{\boldsymbol{\epsilon}}(2) & \ge & \cdots, \\ d_{\boldsymbol{\epsilon}}(0) & \ge & d_{\boldsymbol{\epsilon}}(1) & \ge & d_{\boldsymbol{\epsilon}}(2) & \ge & \cdots & \ge & 0, \end{array} $$ and $$ \HS{\FF^*}(K,T^*) \cong \dirsum{i \ge 0} R/\mathbf{m}^{d_{\boldsymbol{\epsilon}}(i)}. $$ \end{thm} \begin{proof} Let $s$ be such that $\boldsymbol{\epsilon}$ generates $\mathbf{m}^s \mathbf{SS}_r(T)$. If $\epsilon_1 \ne 0$ then $\partial\varphi_{\boldsymbol{\epsilon}}(0) < k$, so in Proposition \ref{struclem} we have $\partial\varphi_{\boldsymbol{\epsilon}}(t) = s + \sum_{i>t} e_i$ for every $t$. The theorem follows directly. \end{proof} If $R$ is a discrete valuation ring then $F$ will denote the field of fractions of $R$, and if $M$ is an $R$-module we define \begin{itemize} \item $\mathrm{rank}_R M := \dim_{F}(M \otimes F)$, \item $\mathrm{corank}_R M := \mathrm{rank}_R \mathrm{Hom}_R(M,F/R)$, \item $M_\mathrm{div}$ is the maximal divisible submodule of $M$. \end{itemize} \begin{prop} \label{11.4} Suppose $R$ is a discrete valuation ring, and $\boldsymbol{\epsilon} \in\mathbf{SS}_r(T)$ generates $\mathbf{m}^s\mathbf{SS}_r(T)$. Let $a := \mathrm{corank}_R(\HS{\FF^*}(K,T^*))$ and write $$ \HS{\FF^*}(K,T^*)/(\HS{\FF^*}(K,T^*))_\mathrm{div} \cong \dirsum{i>a} R/\mathbf{m}^{e_i} $$ with $e_{a+1} \ge e_{a+2} \ge \cdots$. Then $$ \partial\varphi_{\boldsymbol{\epsilon}}(t) = \begin{cases} \infty & \text{if $t < a$}, \\ s+\sum_{i>t} e_i & \text{if $t \ge a$}. \end{cases} $$ \end{prop} \begin{proof} Let $e_1 = \cdots = e_a := \infty$.
Since $$ \HS{\FF^*}(K,T^*) = \varinjlim \HS{\FF^*}(K,T^*[\mathbf{m}^k]), $$ Proposition \ref{stprops}(ii) applied to all the $T/\mathbf{m}^kT$ shows that for every $k \in \mathbf{Z}^+$ we have \begin{equation} \label{eq:n} \HS{\FF^*}(K,T^*[\mathbf{m}^k]) = \HS{\FF^*}(K,T^*)[\mathbf{m}^k] \cong \dirsum{i \ge 1} R/\mathbf{m}^{\min\{k,e_i\}}. \end{equation} For every $k \ge 0$ let $\ssyi{k}$ denote the image of $\boldsymbol{\epsilon}$ in $\mathbf{SS}_r(T/\mathbf{m}^k T,\PP_k)$. Since $\boldsymbol{\epsilon}$ generates $\mathbf{m}^s\mathbf{SS}_r(T)$, Theorem \ref{rankdvr} shows that $\ssyi{k}$ generates $\mathbf{m}^s\mathbf{SS}_r(T/\mathbf{m}^k T)$ for every $k$. Fix $t$, and choose $\mathfrak{n} \in \mathcal{N}$ with $\nu(\mathfrak{n}) = t$. Let $k$ be such that $I_\n = \mathbf{m}^k$. By \eqref{eq:n} and Proposition \ref{struclem} we have that $\epsilon^{(k)}_\mathfrak{n} = 0$ if $t < a$, and $\epsilon^{(k)}_\mathfrak{n} \in \mathbf{m}^{s+\sum_{i>t}e_i}\Y{\mathfrak{n}}$ if $t \ge a$. But $\epsilon^{(k)}_\mathfrak{n} = \epsilon_\mathfrak{n} \in \Y{\mathfrak{n}}$, so we conclude that $\partial\varphi_{\boldsymbol{\epsilon}}(t) = \infty$ if $t < a$, and $\partial\varphi_{\boldsymbol{\epsilon}}(t) \ge s+\sum_{i>t}e_i$ if $t \ge a$. Now suppose $t \ge a$, and fix $k > s+\sum_{i>t}e_i$. By Proposition \ref{struclem} we can find $\mathfrak{n}\in \mathcal{N}$ with $I_\n \subset \mathbf{m}^k$ such that $\epsilon^{(k)}_\mathfrak{n} \notin \mathbf{m}^{s+1+\sum_{i>t}e_i}\Y{\mathfrak{n}}$. Since $\epsilon^{(k)}_\mathfrak{n}$ is the image of $\epsilon_\mathfrak{n}$, we have that $\epsilon_\mathfrak{n} \notin \mathbf{m}^{s+1+\sum_{i>t}e_i}\Y{\mathfrak{n}}$. This shows that $\partial\varphi_{\boldsymbol{\epsilon}}(t) \le s+\sum_{i>t}e_i$, and the proof is complete. \end{proof} \begin{thm} \label{korank} Suppose $R$ is a discrete valuation ring, $\boldsymbol{\epsilon} \in \mathbf{SS}_r(T)$ and $\boldsymbol{\epsilon} \ne 0$.
Then: \begin{enumerate} \item the sequence $\partial\varphi_{\boldsymbol{\epsilon}}(t)$ is nonincreasing, finite for $t \ge \ordv{\boldsymbol{\epsilon}}$, and nonnegative, \item the sequence $d_{\boldsymbol{\epsilon}}(i)$ is nonincreasing, finite for $i \ge \ordv{\boldsymbol{\epsilon}}$, and nonnegative, \item $\ordv{\boldsymbol{\epsilon}}$ and the $d_{\boldsymbol{\epsilon}}(i)$ are independent of the choice of nonzero $\boldsymbol{\epsilon} \in \mathbf{SS}_r(T)$, \item $\mathrm{corank}_R(\HS{\FF^*}(K,T^*)) = \ordv{\boldsymbol{\epsilon}}$, \item $\HS{\FF^*}(K,T^*)/(\HS{\FF^*}(K,T^*))_\mathrm{div} \cong \oplus_{i\ge\ordv{\boldsymbol{\epsilon}}} R/\mathbf{m}^{d_{\boldsymbol{\epsilon}}(i)}$, \item $\mathrm{length}_R(\HS{\FF^*}(K,T^*)/(\HS{\FF^*}(K,T^*))_\mathrm{div}) = \partial\varphi_{\boldsymbol{\epsilon}}(\ordv{\boldsymbol{\epsilon}})-\partial\varphi_{\boldsymbol{\epsilon}}(\infty)$, where $\partial\varphi_{\boldsymbol{\epsilon}}(\infty) := \lim_{t \to \infty}\partial\varphi_{\boldsymbol{\epsilon}}(t)$, \item $\boldsymbol{\epsilon}$ is primitive if and only if $\partial\varphi_{\boldsymbol{\epsilon}}(\infty) = 0$, \item $\mathrm{length}(\HS{\FF^*}(K,T^*))$ is finite if and only if $\epsilon_1 \ne 0$, \item $ \mathrm{length}(\HS{\FF^*}(K,T^*)) \le \partial\varphi_{\boldsymbol{\epsilon}}(0) = \max\{s : \epsilon_1 \in \mathbf{m}^s\wedge^r\HS{\FF}(K,T)\}, $ with equality if and only if $\boldsymbol{\epsilon}$ is primitive. \end{enumerate} \end{thm} \begin{proof} The theorem follows directly from Proposition \ref{11.4}. \end{proof} \part{Kolyvagin systems} \label{part3} \section{Sheaves and monodromy} \label{sog} In this section we recall some concepts and definitions from \cite{kolysys}.
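\begin{rem} The following toy example is ours, not taken from \cite{kolysys}; we record it only to illustrate the notions of locally cyclic sheaf, hub, and monodromy introduced in Definitions \ref{sheafdef} and \ref{lcdef} below, and the reader may prefer to return to it after those definitions. Let $X$ be the $4$-cycle with vertices $a, b, c, d$ and edges $ab, bc, cd, da$, let every stalk $\mathcal{S}(v) = \mathcal{S}(e) = R/\mathbf{m}$, and let every vertex-to-edge map be the identity except $\shmap{d}{da}$, which we take to be multiplication by a unit $u \in R^\times$ whose image in $R/\mathbf{m}$ is not $1$. Every vertex-to-edge map is an isomorphism, so $\mathcal{S}$ is locally cyclic, every path is surjective, and every vertex is a hub. A global section must satisfy $\kappa_a = \kappa_b = \kappa_c = \kappa_d$ (from the edges $ab$, $bc$, $cd$) and $u\kappa_d = \kappa_a$ (from $da$), hence $(u-1)\kappa_a = 0$ and $\Gamma(\mathcal{S}) = 0$. Comparing the surjective paths $(a,b,c)$ and $(a,d)$ along the edge $cd$ shows that the monodromy is nontrivial: $\shmap{c}{cd}\circ\shmap{(a,b,c)}{}$ is the identity while $\shmap{d}{cd}\circ\shmap{(a,d)}{}$ is multiplication by $u^{-1}$. This is consistent with Proposition \ref{lcgraphprop}(i) below, since the injection $\Gamma(\mathcal{S}) \to \mathcal{S}(a)$ is not surjective. \end{rem}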
\begin{defn} \label{sheafdef} If $X$ is a graph, a sheaf $\mathcal{S}$ (of $R$-modules) on $X$ is a rule assigning: \begin{itemize} \item to each vertex $v$ of $X$, an $R$-module $\mathcal{S}(v)$ (the stalk of $\mathcal{S}$ at $v$), \item to each edge $e$ of $X$, an $R$-module $\mathcal{S}(e)$, \item to each pair $(e, v)$ where $v$ is an endpoint of the edge $e$, an $R$-module map $\shmap{v}{e} : \mathcal{S}(v) \to \mathcal{S}(e)$. \end{itemize} A global section of $\mathcal{S}$ is a collection $\{\kappa_v \in \mathcal{S}(v) : \text{$v$ a vertex of $X$}\}$ such that for every edge $e$ of $X$, if $e$ has endpoints $v, v'$ then $\shmap{v}{e}(\kappa_v) = \shmap{v'}{e}(\kappa_{v'})$ in $\mathcal{S}(e)$. We write $\Gamma(\mathcal{S})$ for the $R$-module of global sections of $\mathcal{S}$. \end{defn} \begin{defn} \label{lcdef} We say that a sheaf $\mathcal{S}$ on a graph $X$ is {\em locally cyclic} if all the $R$-modules $\mathcal{S}(v)$, $\mathcal{S}(e)$ are cyclic and all the maps $\shmap{v}{e}$ are surjective. If $\mathcal{S}$ is locally cyclic then a {\em surjective path} (relative to $\mathcal{S}$) from $v$ to $w$ is a path $(v=v_1,v_2,\ldots,v_k=w)$ in $X$ such that for each $i$, if $e_i$ is the edge joining $v_i$ and $v_{i+1}$, then $\shmap{v_{i+1}}{e_i}$ is an isomorphism. We say that the vertex $v$ is a {\em hub} of $\mathcal{S}$ if for every vertex $w$ there is an $\mathcal{S}$-surjective path from $v$ to $w$. Suppose now that the sheaf $\mathcal{S}$ is locally cyclic. If $P = (v_1,v_2,\ldots,v_k)$ is a surjective path in $X$, we can define a surjective map $\shmap{P}{} : \mathcal{S}(v_1) \to \mathcal{S}(v_k)$ by $$ \shmap{P}{} := (\shmap{v_k}{e_{k-1}})^{-1} \circ \shmap{v_{k-1}}{e_{k-1}} \circ (\shmap{v_{k-1}}{e_{k-2}})^{-1} \circ \cdots \circ (\shmap{v_2}{e_1})^{-1} \circ \shmap{v_1}{e_1} $$ since all the inverted maps are isomorphisms.
We will say that $\mathcal{S}$ has {\em trivial monodromy} if whenever $v, w, w'$ are vertices, $P, P'$ are surjective paths $(v,\ldots,w)$ and $(v,\ldots,w')$, and $w, w'$ are joined by an edge $e$, then $\shmap{w}{e}\circ\shmap{P}{} = \shmap{w'}{e}\circ\shmap{P'}{} \in \mathrm{Hom}(\mathcal{S}(v),\mathcal{S}(e))$. In particular for every pair $v, w$ of vertices and every pair $P, P'$ of surjective paths from $v$ to $w$, we require that $\shmap{P}{} = \shmap{P'}{} \in \mathrm{Hom}(\mathcal{S}(v),\mathcal{S}(w))$. \end{defn} \begin{prop} \label{lcgraphprop} Suppose $\mathcal{S}$ is locally cyclic and $v$ is a hub of $\mathcal{S}$. \begin{enumerate} \item The map $f_v : \Gamma(\mathcal{S}) \to \mathcal{S}(v)$ defined by $\boldsymbol{\kappa} \mapsto \kappa_v$ is injective, and is surjective if and only if $\mathcal{S}$ has trivial monodromy. \item If $\boldsymbol{\kappa} \in \Gamma(\mathcal{S})$, and if $u$ is a vertex such that $\kappa_u \ne 0$ and $\kappa_u$ generates $\mathbf{m}^i \mathcal{S}(u)$ for some $i \in \mathbf{Z}^+$, then $\kappa_w$ generates $\mathbf{m}^i \mathcal{S}(w)$ for every vertex $w$. \end{enumerate} \end{prop} \begin{proof} This is \cite[Proposition 3.4.4]{kolysys}. \end{proof} \begin{defn} \label{perf} A global section $\boldsymbol{\kappa} \in \Gamma(\mathcal{S})$ will be called {\em primitive} if for every vertex $v$, $\kappa_v \in \mathcal{S}(v)$ is a generator of the $R$-module $\mathcal{S}(v)$. It follows from Proposition \ref{lcgraphprop} that a locally cyclic sheaf $\mathcal{S}$ with a hub has a primitive global section if and only if $\mathcal{S}$ has trivial monodromy. \end{defn} \section{Kolyvagin systems and the Selmer sheaf} \label{ksp} Fix Selmer data $(T,\mathcal{F},\mathcal{P},r)$ as in Definition \ref{sddef}.
Recall that we have defined a Selmer structure $\mathcal{F}(\mathfrak{n})$ for every $\mathfrak{n}\in\mathcal{N}$ (Definition \ref{FFabc}) by modifying the local condition at primes dividing $\mathfrak{n}$, and that $K(\mathfrak{q})$ is the $p$-part of the ray class field of $K$ modulo $\mathfrak{q}$. \begin{defn} For every $\mathfrak{n} \in\mathcal{N}$, define $$ G_\n := \tensor{\mathfrak{q} \mid \mathfrak{n}} \mathrm{Gal}(K(\mathfrak{q})_\mathfrak{q}/K_\l). $$ Each $\mathrm{Gal}(K(\mathfrak{q})_\mathfrak{q}/K_\l)$ is cyclic with order contained in $I_\n$, so $G_\n \otimes (R/I_\n)$ is free of rank one over $R/I_\n$. If $\mathfrak{q}$ is a prime dividing $\mathfrak{n}$, then $(T/I_\n T)/(\Fr_{\l}-1)(T/I_\n T)$ is free of rank one over $R/I_\n$, so we can apply the results of \S\ref{lcg} to $H^1(K_\l,T/I_\n T)$. In particular we will write $$ \fs_{\l} : \HS{\f}(K_\l,T/I_\n T) \longrightarrow \HS{\tr}(K_\l,T/I_\n T) \otimes G_\mathfrak{q} $$ for the finite-singular isomorphism of Definition \ref{fsmap} applied to $K_\l$. If $\mathfrak{q}$ is a prime, $\mathfrak{n}\mathfrak{q} \in \mathcal{N}$, and $r \ge 1$, then we can compare $\wedge^r \HS{\FFf{\n}}(K,T/I_\n T) \otimes G_\n$ and $\wedge^r \HS{\FFf{\mathfrak{n}\mathfrak{q}}}(K,T/I_{\n\l} T) \otimes G_{\n\l}$ using the exterior algebra of Appendix \ref{exalg}. 
Namely, applying Proposition \ref{wedgemap} with the localization maps of Definition \ref{piqdef} \begin{align*} &\mathrm{loc}_\mathfrak{q}^\mathrm{f} : \HS{\FFf{\n}}(K,T/I_{\n\l} T) \longrightarrow \HS{\f}(K_\mathfrak{q},T/I_{\n\l} T) \map{\fs_{\l}} \HS{\tr}(K_\mathfrak{q},T/I_{\n\l} T) \otimes G_\mathfrak{q}, \\ &\mathrm{loc}_\mathfrak{q}^{\mathrm{tr}} : \HS{\FFf{\mathfrak{n}\mathfrak{q}}}(K,T/I_{\n\l} T) \longrightarrow \HS{\tr}(K_\mathfrak{q},T/I_{\n\l} T) \end{align*} gives the top and bottom maps, respectively, in the following diagram: \begin{equation} \label{compdiag} \hskip -50pt \raisebox{48pt} {\xymatrix@R=25pt@C=15pt{ (\wedge^r\HS{\FFf{\n}}(K,T/I_\n T)) \otimes G_\n \ar^(.6){\widehat{\mathrm{loc}_\mathfrak{q}^{\mathrm{f}}}\otimes 1}[dr] \\ & \hskip -80pt \HS{\tr}(K_\l,T/I_{\n\l} T) \otimes (\wedge^{r-1}\HS{\mathcal{F}_\mathfrak{q}(\mathfrak{n})}(K,T/I_{\n\l} T)) \otimes G_{\n\l} \hskip -80pt \\ (\wedge^r\HS{\FFf{\mathfrak{n}\mathfrak{q}}}(K,T/I_{\n\l} T)) \otimes G_{\n\l} \ar_(.6){\widehat{\mathrm{loc}_\mathfrak{q}^{\mathrm{tr}}}\otimes 1}[ur] }} \end{equation} \end{defn} \begin{defn} \label{canonicalsheaf} Define a graph $\mathcal{X} := \mathcal{X}(\mathcal{P})$ by taking the set of vertices of $\mathcal{X}$ to be $\mathcal{N} := \mathcal{N}(\mathcal{P})$ (Definition \ref{pndef}), and whenever $\mathfrak{n}, \mathfrak{n}\mathfrak{q} \in \mathcal{N}$ (with $\mathfrak{q}$ prime) we join $\mathfrak{n}$ and $\mathfrak{n}\mathfrak{q}$ by an edge. The {\em Selmer sheaf} associated to $(T,\mathcal{F},\mathcal{P},r)$ is the sheaf $\mathcal{S} = \mathcal{S}_{(T,\mathcal{F},\mathcal{P},r)}$ of $R$-modules on $\mathcal{X}$ defined as follows.
Let \begin{itemize} \item $\mathcal{S}(\mathfrak{n}) := (\wedge^r\HS{\FFf{\n}}(K,T/I_\n T)) \otimes G_\n$ for $\mathfrak{n} \in \mathcal{N}$, \end{itemize} and if $e$ is the edge joining $\mathfrak{n}$ and $\mathfrak{n}\mathfrak{q}$ define \begin{itemize} \item $\mathcal{S}(e) := \HS{\tr}(K_\l,T/I_{\n\l} T) \otimes (\wedge^{r-1}\HS{\mathcal{F}_\mathfrak{q}(\mathfrak{n})}(K,T/I_{\n\l} T)) \otimes G_{\n\l}$, \item $\shmap{\n}{e} : \mathcal{S}(\mathfrak{n}) \to \mathcal{S}(e)$ is the upper map of \eqref{compdiag}, \item $\shmap{\n\l}{e} : \mathcal{S}(\mathfrak{n}\mathfrak{q}) \to \mathcal{S}(e)$ is the lower map of \eqref{compdiag}. \end{itemize} We call $\mathcal{S}(\mathfrak{n}) = (\wedge^r\HS{\FFf{\n}}(K,T/I_\n T)) \otimes G_\n$ the {\em Selmer stalk} at $\mathfrak{n}$. \end{defn} \begin{defn} \label{ksdefn} A {\em Kolyvagin system} for $(T,\mathcal{F},\mathcal{P},r)$ (or simply a Kolyvagin system of rank $r$ for $T$, if $\mathcal{F}$ and $\mathcal{P}$ are fixed) is a global section of the Selmer sheaf $\mathcal{S}$. We write $\mathbf{KS}_r(T,\mathcal{F},\mathcal{P})$, or simply $\mathbf{KS}_r(T)$ when there is no risk of confusion, for the $R$-module of Kolyvagin systems $\Gamma(\mathcal{S})$. Concretely, a Kolyvagin system for $(T,\mathcal{F},\mathcal{P},r)$ is a collection of classes $$ \{\kappa_\mathfrak{n} \in (\wedge^r\HS{\FFf{\n}}(K,T/I_\n T)) \otimes G_\n : \mathfrak{n} \in \mathcal{N}\} $$ such that if $\mathfrak{q}$ is prime and $\mathfrak{n}\mathfrak{q} \in \mathcal{N}$, the images of $\kappa_{\mathfrak{n}}$ and $\kappa_{\mathfrak{n}\mathfrak{q}}$ coincide in the diagram \eqref{compdiag}. \end{defn} \begin{rem} The definition of Kolyvagin system given in \cite{kolysys} corresponds to the definition above with $r = 1$. \end{rem} \section{Stub Kolyvagin systems} \label{stubks} Until the final result of this section, suppose that $R$ is a principal artinian local ring of length $k$.
Fix Selmer data $(T,\mathcal{F},\mathcal{P},r)$ as in Definition \ref{sddef} such that hypotheses \ref{h.1} through \ref{h.7} of \S\ref{sscr} hold. In particular $r = \chi(T)$ is the core rank of $T$. Recall that for $\mathfrak{n} \in \mathcal{N}$ we defined $$ \lambda(\mathfrak{n}) := \mathrm{length}_R(\HS{\FFf{\n}^*}(K,T^*)) \in \mathbf{Z}_{\ge 0} \cup \{\infty\}. $$ We say that a vertex $\mathfrak{n} \in \mathcal{N}$ is a {\em core vertex} if $\lambda(\mathfrak{n}) = 0$. \begin{prop} The following are equivalent: \begin{enumerate} \item $\mathfrak{n}$ is a core vertex for $T$, \item $\HS{\FFf{\n}}(K,T)$ is free of rank $r$ over $R$, \item $\mathcal{S}(\mathfrak{n})$ is free of rank one over $R$, \item $\mathfrak{n}$ is a core vertex for $T/\mathbf{m} T$. \end{enumerate} \end{prop} \begin{proof} We have (i) $\iff$ (ii) by Corollary \ref{threefive}, and (i) $\iff$ (iv) by Proposition \ref{stprops}(ii). It is easy to see that (ii) $\iff$ (iii). \end{proof} \begin{prop} \label{eqim} If $\mathfrak{n},\mathfrak{n}\mathfrak{q} \in \mathcal{N}$ and $e$ is the edge joining them, then $$ \shmap{\n}{e}(\mathbf{m}^{\lambda(\mathfrak{n})} \mathcal{S}(\mathfrak{n})) = \shmap{\n\l}{e}(\mathbf{m}^{\lambda(\mathfrak{n}\mathfrak{q})} \mathcal{S}(\mathfrak{n}\mathfrak{q})) \subset \mathcal{S}(e). $$ \end{prop} \begin{proof} By Proposition \ref{wedgemap}(ii) and Definition \ref{canonicalsheaf} of $\shmap{\n}{e}$ and $\shmap{\n\l}{e}$, we have \begin{align*} \shmap{\n}{e}(\mathcal{S}(\mathfrak{n})) &= \fs_{\l}(\mathrm{loc}_\mathfrak{q}(\HS{\FFf{\n}}(K,T))) \otimes \wedge^{r-1}\HS{\mathcal{F}_\mathfrak{q}(\mathfrak{n})}(K,T) \otimes G_\n, \\ \shmap{\n\l}{e}(\mathcal{S}(\mathfrak{n}\mathfrak{q})) &= \mathrm{loc}_\mathfrak{q}(\HS{\FFf{\n\l}}(K,T)) \otimes \wedge^{r-1}\HS{\mathcal{F}_\mathfrak{q}(\mathfrak{n})}(K,T) \otimes G_{\n\l}. 
\end{align*} By \cite[Lemma 4.1.7]{kolysys}, global duality shows that $$ \mathbf{m}^{\lambda(\mathfrak{n})}\fs_{\l}(\mathrm{loc}_\mathfrak{q}(\HS{\FFf{\n}}(K,T))) = \mathbf{m}^{\lambda(\mathfrak{n}\mathfrak{q})}\mathrm{loc}_\mathfrak{q}(\HS{\FFf{\n\l}}(K,T)) \otimes G_\l $$ and the proposition follows. \end{proof} We define a subsheaf $\mathcal{S}'$ of the Selmer sheaf $\mathcal{S}$ as follows. \begin{defn} \label{stubsheaf} The {\em sheaf of stub Selmer modules} $\mathcal{S}' = \mathcal{S}'_{(T,\mathcal{F},\mathcal{P},r)} \subset \mathcal{S}$ is the subsheaf of $\mathcal{S}$ defined by \begin{itemize} \item $ \mathcal{S}'(\mathfrak{n}) := \mathbf{m}^{\lambda(\mathfrak{n})} \mathcal{S}(\mathfrak{n}) = \mathbf{m}^{\lambda(\mathfrak{n})} (\wedge^r\HS{\FFf{\n}}(K,T)) \otimes G_\n \subset \mathcal{S}(\mathfrak{n}) $ if $\mathfrak{n} \in \mathcal{N}$, \item $\mathcal{S}'(e)$ is the image of $\mathcal{S}'(\mathfrak{n})$ in $\mathcal{S}(e)$ under the vertex-to-edge map of $\mathcal{S}$, if $\mathfrak{n}$ is a vertex of the edge $e$ (this is well-defined by Proposition \ref{eqim}), \end{itemize} and the vertex-to-edge maps are the restrictions of those of the sheaf $\mathcal{S}$. \end{defn} \begin{defn} \label{ks'def} A {\em stub Kolyvagin system} is a global section of the sheaf $\mathcal{S}'$. We let $\mathbf{KS}'_r(T) = \mathbf{KS}_r'(T,\mathcal{F},\mathcal{P}) := \Gamma(\mathcal{S}') \subset \mathbf{KS}_r(T)$ denote the $R$-module of stub Kolyvagin systems. \end{defn} \begin{rem} It is shown in \cite[Theorem 4.4.1]{kolysys} that when the core rank $\chi(T) = 1$, we have $\mathbf{KS}'_1(T) = \mathbf{KS}_1(T)$. In other words, in that case for every Kolyvagin system $\boldsymbol{\kappa} \in \mathbf{KS}_1(T)$ and $\mathfrak{n} \in \mathcal{N}$, we have $ \kappa_\mathfrak{n} \in \mathbf{m}^{\lambda(\mathfrak{n})}\HS{\FFf{\n}}(K,T) \otimes G_\n. $ \end{rem} \begin{thm} \label{mthm0} \begin{enumerate} \item There are core vertices. 
\item Suppose $\mathfrak{n},\mathfrak{n}'$ are core vertices. Then there is a path $$ \xymatrix{ \mathfrak{n}=\mathfrak{n}_0 \ar@{-}^-{e_1}[r] & \mathfrak{n}_1 \ar@{-}^-{e_2}[r] & \cdots \ar@{-}^-{e_t}[r] & \mathfrak{n}_t=\mathfrak{n}' } $$ in $\mathcal{X}$ such that every $\mathfrak{n}_i$ is a core vertex and all of the maps $\psi_{\mathfrak{n}_i}^{e_{i+1}}$ and $\psi_{\mathfrak{n}_i}^{e_{i}}$ are isomorphisms. \item The stub subsheaf $\mathcal{S}'$ is locally cyclic, and every core vertex is a hub. For every vertex $\mathfrak{n} \in \mathcal{N}$, there is a core vertex $\mathfrak{n}' \in \mathcal{N}$ divisible by $\mathfrak{n}$. \end{enumerate} \end{thm} Theorem \ref{mthm0} will be proved in \S\ref{m0pf}. In the remainder of this section we derive some consequences of it. \begin{thm} \label{mthm3} \begin{enumerate} \item The module $\mathbf{KS}'_r(T)$ of stub Kolyvagin systems is free of rank one over $R$, and for every core vertex $\mathfrak{n}$ the specialization map $$ \mathbf{KS}'_r(T) \longrightarrow \mathcal{S}'(\mathfrak{n}) = (\wedge^r\HS{\FFf{\n}}(K,T)) \otimes G_\n $$ given by $\boldsymbol{\kappa} \mapsto \kappa_\mathfrak{n}$ is an isomorphism. \item There is a Kolyvagin system $\boldsymbol{\kappa} \in \mathbf{KS}'_r(T)$ such that $\kappa_\mathfrak{n}$ generates $\mathcal{S}'(\mathfrak{n})$ for every $\mathfrak{n} \in \mathcal{N}$. \item The locally cyclic sheaf $\mathcal{S}'$ has trivial monodromy. \end{enumerate} \end{thm} \begin{proof} This follows from Proposition \ref{lcgraphprop}, using Theorem \ref{mthm0}(i,iii). \end{proof} For the next theorem we take $R$ to be a discrete valuation ring. \begin{thm} \label{betthm} Suppose that $R$ is a discrete valuation ring, and hypotheses {\rm\ref{h.1}} through {\rm\ref{h.6}} are satisfied for the Selmer data $(T,\mathcal{F},\mathcal{P},r)$. For $k > 0$ let $\PP_k \subset \mathcal{P}$ be as in Definition \ref{pndef}. 
The natural maps $T \twoheadrightarrow T/\mathbf{m}^k$ and $\PP_k \hookrightarrow \mathcal{P}$ induce an isomorphism $$ \mathbf{KS}'_r(T,\mathcal{P}) \map{\;\sim\;} \varprojlim \mathbf{KS}'_r(T/\mathbf{m}^k T,\PP_k). $$ The $R$-module $\mathbf{KS}'_r(T,\mathcal{P})$ is free of rank one, generated by a Kolyvagin system $\boldsymbol{\kappa}$ whose image in $\mathbf{KS}'_r(T/\mathbf{m} T)$ is nonzero. The maps $\mathbf{KS}'_r(T,\mathcal{P}) \to \mathbf{KS}'_r(T/\mathbf{m}^k,\PP_k)$ are surjective. \end{thm} \begin{proof} This can be proved directly from Theorem \ref{mthm3}, as in the proofs of Proposition \ref{betprp} and Theorem \ref{rankdvr} for Stark systems. See also \cite[Proposition 5.2.9]{kolysys}. \end{proof} \begin{rem} When $r = \chi(T) > 1$, it is not generally true that $\mathbf{KS}'_r(T) = \mathbf{KS}_r(T)$. For example, suppose $R$ is principal artinian of length $k > 1$, and suppose $\mathfrak{m}\in\mathcal{N}$ is such that $\HS{\mathcal{F}(\mathfrak{m})}(K,T) \cong R^r \oplus (R/\mathbf{m})^r$, with corresponding basis $c_1,\ldots,c_r, d_1,\ldots,d_r$. Let $g_\mathfrak{m}$ be a generator of $G_\mathfrak{m}$. For every $\mathfrak{q}\in\mathcal{P}$ and every $i$, $\mathrm{loc}_\mathfrak{q}(d_i)$ is killed by $\mathbf{m}$, so it is divisible by $\mathbf{m}^{k-1}$ in the free $R$-module $H^1(K_\mathfrak{q},T)$. It follows that if we define $\boldsymbol{\kappa} := \{\kappa_\mathfrak{n}\}$ where $$ \kappa_\mathfrak{n} := \begin{cases} (d_1 \wedge \cdots \wedge d_r) \otimes g_\mathfrak{m} & \text{if $\mathfrak{n}=\mathfrak{m}$},\\ 0 & \text{if $\mathfrak{n}\ne\mathfrak{m}$}, \end{cases} $$ then $\boldsymbol{\kappa}$ is a Kolyvagin system, but $\kappa_\mathfrak{m} \notin \mathcal{S}'(\mathfrak{m})$ so $\boldsymbol{\kappa} \notin \mathbf{KS}_r'(T)$.
\end{rem} \section{Kolyvagin systems and Stark systems} \label{ksnss} Suppose that $R$ is a principal artinian ring, and fix Selmer data $(T,\mathcal{F},\mathcal{P},r)$ as in Definition \ref{sddef} such that $I_\l = 0$ for every $\mathfrak{q} \in \mathcal{P}$. Recall the $R$-module $\Y{\mathfrak{n}}$ of Definition \ref{yndef}, and let $\mathrm{loc}^\mathrm{f}_\mathfrak{q} : H^1(K,T) \to \HS{\tr}(K_\mathfrak{q},T) \otimes G_\l$ and $\mathrm{loc}^{\mathrm{tr}}_\mathfrak{q} : H^1(K,T) \to \HS{\tr}(K_\mathfrak{q},T)$ be the maps of Definition \ref{piqdef}. \begin{defn} Suppose $\mathfrak{n}\in\mathcal{N}$. By \eqref{gi2} we have an exact sequence $$ 0 \longrightarrow \HS{\FFf{\n}}(K,T) \longrightarrow \HS{\mathcal{F}^\mathfrak{n}}(K,T) \map{\oplus \mathrm{loc}_\mathfrak{q}^\mathrm{f}} \dirsum{\mathfrak{q}\mid\mathfrak{n}}\HS{\tr}(K_\mathfrak{q},T) \otimes G_\l $$ and it follows that the square \begin{equation*} \xymatrix@R=15pt{ ~\HS{\FFf{\n}}(K,T)~ \ar@{^(->}[r]\ar[d] & ~\HS{\mathcal{F}^\mathfrak{n}}(K,T)~ \ar^{\oplus \mathrm{loc}_\mathfrak{q}^\mathrm{f}}[d] \\ ~0~ \ar@{^(->}[r] & ~\dirsum{\mathfrak{q}\mid\mathfrak{n}}\HS{\tr}(K_\mathfrak{q},T)\otimes G_\l~ } \end{equation*} is cartesian. Proposition \ref{bhprop}(i,iv) attaches to this diagram a map $$ \wedge^{r+\nu(\mathfrak{n})}\HS{\mathcal{F}^\mathfrak{n}}(K,T) \otimes \wedge^{\nu(\mathfrak{n})}\mathrm{Hom}(\oplus_{\mathfrak{q}\mid\mathfrak{n}}\HS{\tr}(K_\l,T) \otimes G_\l,R) \longrightarrow \wedge^r\HS{\FFf{\n}}(K,T). $$ Tensoring both sides with $G_\n$ defines a map $$ \Pi_{\mathfrak{n}} : \Y{\mathfrak{n}} \longrightarrow \wedge^r\HS{\FFf{\n}}(K,T) \otimes G_\n. $$ \end{defn} See the proof of Proposition \ref{ss2ks} below for an explicit description of the map $\Pi_\mathfrak{n}$. Recall that if $\mathfrak{m}\mid\mathfrak{n} \in \mathcal{N}$, then $\Psi_{\mathfrak{n},\mathfrak{m}} : \Y{\mathfrak{n}} \to \Y{\mathfrak{m}}$ is the map of Definition \ref{4.3}.
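\begin{rem} For example, suppose $\mathfrak{n} = \mathfrak{q}$ is prime, so $\nu(\mathfrak{n}) = 1$. Unwinding Proposition \ref{wedgemap}(i), if $c_1,\ldots,c_{r+1} \in \HS{\mathcal{F}^\mathfrak{q}}(K,T)$, $h \in \mathrm{Hom}(\HS{\tr}(K_\mathfrak{q},T) \otimes G_\l,R)$, and $g$ is a generator of $G_\mathfrak{q}$, then $$ \Pi_\mathfrak{q}\bigl((c_1 \wedge \cdots \wedge c_{r+1}) \otimes h \otimes g\bigr) = \Bigl(\sum_{i=1}^{r+1} (-1)^{i+1}\, h(\mathrm{loc}_\mathfrak{q}^\mathrm{f}(c_i))\; c_1 \wedge \cdots \wedge c_{i-1} \wedge c_{i+1} \wedge \cdots \wedge c_{r+1}\Bigr) \otimes g, $$ which lies in $(\wedge^r\HS{\mathcal{F}(\mathfrak{q})}(K,T)) \otimes G_\mathfrak{q}$, because the exact sequence in the definition above identifies $\HS{\mathcal{F}(\mathfrak{q})}(K,T)$ with $\ker(\mathrm{loc}_\mathfrak{q}^\mathrm{f}|\HS{\mathcal{F}^\mathfrak{q}}(K,T))$. \end{rem}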
\begin{lem} \label{imlem} Suppose that hypotheses {\rm\ref{h.1}} through {\rm\ref{h.7}} of \S\ref{sscr} are satisfied, so in particular $r$ is the core rank of $T$. If $\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*) = 0$ and $\mathfrak{m}\mid\mathfrak{n}$, then $$ (\Pi_\mathfrak{m} \circ \Psi_{\mathfrak{n},\mathfrak{m}})(\Y{\mathfrak{n}}) = \mathbf{m}^{\mathrm{length}(\HS{\mathcal{F}(\mathfrak{m})^*}(K,T^*))}\mathcal{S}(\mathfrak{m}) = \mathcal{S}'(\mathfrak{m}). $$ \end{lem} \begin{proof} If $\HS{(\mathcal{F}^*)_{\mathfrak{n}}}(K,T^*) = 0$ then $\HS{\mathcal{F}^\mathfrak{n}}(K,T)$ is free of rank $r+\nu(\mathfrak{n})$ over $R$ by Corollary \ref{threefive}(ii). By \eqref{gi} and \eqref{gi2} we have $$ (\cap_{\mathfrak{q}\mid\mathfrak{m}} \ker(\mathrm{loc}_\mathfrak{q}^\mathrm{f}|\HS{\mathcal{F}^\mathfrak{n}}(K,T))) \cap (\cap_{\mathfrak{q}\mid(\mathfrak{n}/\mathfrak{m})} \ker(\mathrm{loc}_\mathfrak{q}^{\mathrm{tr}}|\HS{\mathcal{F}^\mathfrak{n}}(K,T))) = \HS{\mathcal{F}(\mathfrak{m})}(K,T). $$ Now the lemma follows from Proposition \ref{bhprop}(ii,iii) applied to the cartesian square $$ \xymatrix@R=15pt{ ~\HS{\mathcal{F}(\mathfrak{m})}(K,T)~ \ar@{^(->}[r]\ar[d] & ~\HS{\mathcal{F}^\mathfrak{n}}(K,T)~ \ar^{\oplus_{\mathfrak{q}\mid\mathfrak{m}}\mathrm{loc}_\mathfrak{q}^\mathrm{f} \oplus_{\mathfrak{q}\mid(\mathfrak{n}/\mathfrak{m})} \mathrm{loc}_\mathfrak{q}^{\mathrm{tr}}}[d] \\ ~0~ \ar@{^(->}[r] & ~\dirsum{\mathfrak{q}\mid\mathfrak{m}}(\HS{\tr}(K_\mathfrak{q},T)\otimes G_\l) \oplus \dirsum{\mathfrak{q}\mid(\mathfrak{n}/\mathfrak{m})}\HS{\tr}(K_\mathfrak{q},T)~ } $$ \end{proof} \begin{prop} \label{ss2ks} Suppose $\boldsymbol{\epsilon} = \{\epsilon_\mathfrak{n} : \mathfrak{n}\in\mathcal{N}\}$ is a Stark system of rank $r$ for $T$. Let $\Pi(\boldsymbol{\epsilon})$ denote the collection $\{(-1)^{\nu(\mathfrak{n})}\Pi_\mathfrak{n}(\epsilon_\mathfrak{n}) : \mathfrak{n}\in\mathcal{N}\}$. Then: \begin{enumerate} \item $\Pi(\boldsymbol{\epsilon}) \in \mathbf{KS}_r(T)$.
\item If hypotheses {\rm\ref{h.1}} through {\rm\ref{h.7}} of \S\ref{sscr} hold, then $\Pi(\boldsymbol{\epsilon}) \in \mathbf{KS}_r'(T)$. \end{enumerate} \end{prop} \begin{proof} By definition $\Pi_\mathfrak{n}(\epsilon_\mathfrak{n}) \in \wedge^r\HS{\FFf{\n}}(K,T) \otimes G_\n$, so we only need to check the compatibility \eqref{compdiag}. Suppose $\mathfrak{n}\mathfrak{q} \in \mathcal{N}$, with $\mathfrak{n} = \mathfrak{q}_1 \cdots \mathfrak{q}_{\nu(\mathfrak{n})}$, and for every $i$ let $h_i$ be a generator of $\mathrm{Hom}(\HS{\tr}(K_{\mathfrak{q}_i},T),R)$ and similarly for $h$ and $\mathfrak{q}$. Let \begin{align*} \varpi_{\mathfrak{q},h}^{\mathrm{tr}} &:= \widehat{h \circ \mathrm{loc}_\mathfrak{q}^{\mathrm{tr}}} : \wedge^t\HS{\mathcal{F}^{\mathfrak{n}\mathfrak{q}}}(K,T) \to \wedge^{t-1}\HS{\mathcal{F}^{\mathfrak{n}}}(K,T), \\ \varpi_{\mathfrak{q},h}^\mathrm{f} &:= \widehat{h \circ \mathrm{loc}_\mathfrak{q}^\mathrm{f}} : \wedge^t\HS{\mathcal{F}^{\mathfrak{n}\mathfrak{q}}}(K,T) \to \wedge^{t-1}\HS{\mathcal{F}^{\mathfrak{n}}(\mathfrak{q})}(K,T) \otimes G_\l \end{align*} be the maps given by Proposition \ref{wedgemap}, for $t > 0$, and similarly for $\varpi_{\mathfrak{q}_i,h_i}^{\mathrm{tr}}$ and $\varpi_{\mathfrak{q}_i,h_i}^\mathrm{f}$. Let $\epsilon_{\mathfrak{n}\mathfrak{q}} = d_{\mathfrak{n}\mathfrak{q}} \otimes (h_1 \wedge \cdots \wedge h_{\nu(\mathfrak{n})} \wedge h)$ with $d_{\mathfrak{n}\mathfrak{q}} \in \wedge^{r+\nu(\mathfrak{n}\mathfrak{q})}\HS{\mathcal{F}^{\mathfrak{n}\mathfrak{q}}}(K,T)$, and similarly $\epsilon_{\mathfrak{n}} = d_{\mathfrak{n}} \otimes (h_1 \wedge \cdots \wedge h_{\nu(\mathfrak{n})})$. By definition of $\Psi_{\mathfrak{n}\mathfrak{q},\mathfrak{n}}$ we have $d_\mathfrak{n} = \varpi_{\mathfrak{q},h}^{\mathrm{tr}}(d_{\mathfrak{n}\mathfrak{q}})$. 
If $e$ denotes the edge joining $\mathfrak{n}$ and $\mathfrak{n}\mathfrak{q}$, then \begin{align*} (h \otimes 1)(\psi_{\mathfrak{n}\mathfrak{q}}^e(\Pi_{\mathfrak{n}\mathfrak{q}}(\epsilon_{\mathfrak{n}\mathfrak{q}})) ) &= \varpi_{\mathfrak{q},h}^{\mathrm{tr}}((\varpi_{\mathfrak{q}_{1},h_{1}}^{\mathrm{f}} \circ \cdots \circ \varpi_{\mathfrak{q}_{\nu(\mathfrak{n})},h_{\nu(\mathfrak{n})}}^{\mathrm{f}} \circ \varpi_{\mathfrak{q},h}^\mathrm{f})(d_{\mathfrak{n}\mathfrak{q}})) \\ &= (-1)^{\nu(\mathfrak{n})+1}(\varpi_{\mathfrak{q}_1,h_1}^{\mathrm{f}} \circ \cdots \circ \varpi_{\mathfrak{q}_{\nu(\mathfrak{n})},h_{\nu(\mathfrak{n})}}^{\mathrm{f}} \circ \varpi_{\mathfrak{q},h}^\mathrm{f} \circ \varpi_{\mathfrak{q},h}^{\mathrm{tr}})(d_{\mathfrak{n}\mathfrak{q}}) \\ &= (-1)^{\nu(\mathfrak{n})+1}(\varpi_{\mathfrak{q}_1,h_1}^{\mathrm{f}} \circ \cdots \circ \varpi_{\mathfrak{q}_{\nu(\mathfrak{n})},h_{\nu(\mathfrak{n})}}^{\mathrm{f}} \circ \varpi_{\mathfrak{q},h}^\mathrm{f})(d_\mathfrak{n}) \\ &= -\varpi_{\mathfrak{q},h}^\mathrm{f} ((\varpi_{\mathfrak{q}_1,h_1}^{\mathrm{f}} \circ \cdots \circ \varpi_{\mathfrak{q}_{\nu(\mathfrak{n})},h_{\nu(\mathfrak{n})}}^{\mathrm{f}})(d_\mathfrak{n})) \\ &= -(h \otimes 1)(\psi_\mathfrak{n}^e(\Pi_\mathfrak{n}(\epsilon_\mathfrak{n}))). \end{align*} Since $h$ is an isomorphism, it follows that $\psi_{\mathfrak{n}\mathfrak{q}}^e(\Pi_{\mathfrak{n}\mathfrak{q}}(\epsilon_{\mathfrak{n}\mathfrak{q}})) = -\psi_\mathfrak{n}^e(\Pi_\mathfrak{n}(\epsilon_\mathfrak{n}))$, so the collection $\{(-1)^{\nu(\mathfrak{n})}\Pi_\mathfrak{n}(\epsilon_\mathfrak{n})\}$ is a Kolyvagin system. This proves (i), and (ii) follows from Lemma \ref{imlem} (using Lemma \ref{ylem}(ii)). \end{proof} \begin{thm} \label{mthm3'} If hypotheses {\rm\ref{h.1}} through {\rm\ref{h.7}} of \S\ref{sscr} hold, then the $R$-module map $\Pi : \mathbf{SS}_r(T) \to \mathbf{KS}'_r(T)$ of Proposition \ref{ss2ks} is an isomorphism. 
\end{thm} \begin{proof} By Lemma \ref{imlem} and Theorem \ref{magic}, for every $\mathfrak{n}$ the composition $$ \mathbf{SS}_r(T) \map{\;\Pi\;} \mathbf{KS}'_r(T) \longrightarrow \mathcal{S}'(\mathfrak{n}) $$ is surjective. Since $\mathbf{SS}_r(T)$ and $\mathbf{KS}'_r(T)$ are both free of rank one over $R$ (Theorems \ref{magic} and \ref{mthm3}(i)), it follows that $\Pi$ is an isomorphism. \end{proof} \section{Stub Kolyvagin systems and the dual Selmer group} \label{sksds} Suppose for this section that $R$ is either a principal artinian local ring or a discrete valuation ring. We let $k := \mathrm{length}(R)$, so $k$ is finite in the artinian case and $k = \infty$ in the discrete valuation ring case. Fix Selmer data $(T,\mathcal{F},\mathcal{P},r)$ satisfying hypotheses \ref{h.1} through \ref{h.6}, and if $R$ is artinian satisfying \ref{h.7} as well. In this section we prove analogues for stub Kolyvagin systems of the results of \S\ref{extra} for Stark systems. We will say that a stub Kolyvagin system $\boldsymbol{\kappa}$ is primitive if it is primitive as a global section of the stub Selmer sheaf $\mathcal{S}'$ (Definition \ref{perf}), i.e., if $\boldsymbol{\kappa}$ generates the $R$-module $\mathbf{KS}'_r(T)$, or equivalently, if $\kappa_\mathfrak{n}$ generates $\mathbf{m}^{\lambda(\mathfrak{n})}(\wedge^r\HS{\FFf{\n}}(K,T))\otimes G_\n$ for every $\mathfrak{n} \in \mathcal{N}$. \begin{cor} \label{lowerbound} Suppose $R$ is a principal artinian ring of length $k$, and $\boldsymbol{\kappa} \in \mathbf{KS}'_r(T)$. \begin{enumerate} \item If $\kappa_1 \ne 0$ then $$ \mathrm{length}(\HS{\FF^*}(K,T^*)) \le k - \mathrm{length}(R\kappa_1) = \max\{i : \kappa_1 \in \mathbf{m}^i \wedge^r\HS{\FF}(K,T)\}.$$ \item If $\boldsymbol{\kappa}$ is primitive and $\kappa_1 \ne 0$, then equality holds in {\rm(i)}. \item If $\boldsymbol{\kappa}$ is primitive and $\kappa_1 = 0$, then $\mathrm{length}(\HS{\FF^*}(K,T^*)) \ge k$.
\end{enumerate} \end{cor} \begin{proof} By Corollary \ref{threefive}(iii), $\mathcal{S}'(1) = \mathbf{m}^{\lambda(1)}\wedge^r\HS{\FF}(K,T)$ is a cyclic $R$-module of length $\max\{0,k-\mathrm{length}(\HS{\FF^*}(K,T^*))\}$. Since $\kappa_1 \in \mathcal{S}'(1)$ by definition, (i) follows. If $\boldsymbol{\kappa}$ is primitive, then $\kappa_1$ generates $\mathcal{S}'(1)$, which proves (ii) and (iii). \end{proof} The following definition is the analogue for Kolyvagin systems of Definitions \ref{allde} and \ref{allder} for Stark systems. \begin{defn} \label{allderk} Suppose $\boldsymbol{\kappa} \in \mathbf{KS}_r(T)$ is a Kolyvagin system. Define $\varphi_{\boldsymbol{\kappa}} \in {\mathrm{Maps}}(\mathcal{N},\mathbf{Z}_{\ge 0} \cup \{\infty\})$ by $\varphi_{\boldsymbol{\kappa}}(\mathfrak{n}) := \max\{j : \kappa_\mathfrak{n} \in \mathbf{m}^j(\wedge^r\HS{\FFf{\n}}(K,T)) \otimes G_\n\}$. The {\em order of vanishing} of $\boldsymbol{\kappa}$ is $$ \ordv{\boldsymbol{\kappa}} := \min\{\nu(\mathfrak{n}) : \mathfrak{n} \in \mathcal{N}, \kappa_\mathfrak{n} \ne 0\} = \min\{i : \partial\varphi_{\boldsymbol{\kappa}}(i) \ne \infty\}. $$ We also define the sequence of {\em elementary divisors} $$ d_{\boldsymbol{\kappa}}(i) := \partial\varphi_{\boldsymbol{\kappa}}(i) - \partial\varphi_{\boldsymbol{\kappa}}(i+1), \quad i \ge \ordv{\boldsymbol{\kappa}}. $$ \end{defn} \begin{prop} \label{prekork} Suppose that $\boldsymbol{\kappa} \in \mathbf{KS}'_r(T)$, $\boldsymbol{\epsilon} \in \mathbf{SS}_r(T)$, and $\boldsymbol{\kappa} = \Pi(\boldsymbol{\epsilon})$. Then $\ordv{\boldsymbol{\kappa}} = \ordv{\boldsymbol{\epsilon}}$, $\partial\varphi_{\boldsymbol{\kappa}}(i) = \partial\varphi_{\boldsymbol{\epsilon}}(i)$ for every $i$, and $d_{\boldsymbol{\kappa}}(i) = d_{\boldsymbol{\epsilon}}(i)$ for every $i$. \end{prop} \begin{proof} Suppose first that $R$ is artinian of length $k$.
Since $\Pi$ is an isomorphism (Theorem \ref{mthm3'}), we may assume without loss of generality that $\boldsymbol{\kappa}$ and $\boldsymbol{\epsilon}$ generate $\mathbf{KS}'_r(T)$ and $\mathbf{SS}_r(T)$, respectively. Recall that $\mu(\mathfrak{n}) := \mathrm{length}(\HS{(\mathcal{F}^*)_\mathfrak{n}}(K,T^*))$. For every $\mathfrak{n}\in\mathcal{N}$, Theorem \ref{mthm3}(ii) shows that $\kappa_\mathfrak{n}$ generates $\mathbf{m}^{\lambda(\mathfrak{n})}\mathcal{S}(\mathfrak{n})$, and Theorem \ref{magic} shows that $\epsilon_\mathfrak{n}$ generates $\mathbf{m}^{\mu(\mathfrak{n})}\Y{\mathfrak{n}}$. Thus $$ \partial\varphi_{\boldsymbol{\kappa}}(i) = \begin{cases} \partial\lambda(i) & \text{if $\partial\lambda(i) < k$}, \\ \infty & \text{if $\partial\lambda(i) \ge k$}, \end{cases} \qquad \partial\varphi_{\boldsymbol{\epsilon}}(i) = \begin{cases} \partial\mu(i) & \text{if $\partial\mu(i) < k$}, \\ \infty & \text{if $\partial\mu(i) \ge k$}. \end{cases} $$ By Proposition \ref{6.12}, $\partial\lambda(i) = \partial\mu(i)$ for every $i$, and all the equalities of the Proposition follow. The case where $R$ is a discrete valuation ring follows from the artinian case as in the proof of Proposition \ref{11.4}. \end{proof} \begin{thm} \label{korankk} Suppose $R$ is a discrete valuation ring, $\boldsymbol{\kappa} \in \mathbf{KS}'_r(T)$ and $\boldsymbol{\kappa} \ne 0$. 
Then: \begin{enumerate} \item the sequence $\partial\varphi_{\boldsymbol{\kappa}}(t)$ is nonincreasing, and finite for $t \ge \ordv{\boldsymbol{\kappa}}$, \item the sequence $d_{\boldsymbol{\kappa}}(i)$ is nonincreasing, nonnegative, and finite for $i \ge \ordv{\boldsymbol{\kappa}}$, \item $\ordv{\boldsymbol{\kappa}}$ and the $d_{\boldsymbol{\kappa}}(i)$ are independent of the choice of nonzero $\boldsymbol{\kappa} \in \mathbf{KS}'_r(T)$, \item $\mathrm{corank}_R(\HS{\FF^*}(K,T^*)) = \ordv{\boldsymbol{\kappa}}$, \item $\HS{\FF^*}(K,T^*)/(\HS{\FF^*}(K,T^*))_\mathrm{div} \cong \oplus_{i\ge\ordv{\boldsymbol{\kappa}}} R/\mathbf{m}^{d_{\boldsymbol{\kappa}}(i)}$, \item $\mathrm{length}_R(\HS{\FF^*}(K,T^*)/(\HS{\FF^*}(K,T^*))_\mathrm{div}) = \partial\varphi_{\boldsymbol{\kappa}}(\ordv{\boldsymbol{\kappa}})-\partial\varphi_{\boldsymbol{\kappa}}(\infty)$, where $\partial\varphi_{\boldsymbol{\kappa}}(\infty) := \lim_{t \to \infty}\partial\varphi_{\boldsymbol{\kappa}}(t)$, \item $\boldsymbol{\kappa}$ is primitive if and only if $\partial\varphi_{\boldsymbol{\kappa}}(\infty) = 0$, \item $\mathrm{length}(\HS{\FF^*}(K,T^*))$ is finite if and only if $\kappa_1 \ne 0$, \item $ \mathrm{length}(\HS{\FF^*}(K,T^*)) \le \partial\varphi_{\boldsymbol{\kappa}}(0) = \max\{s : \kappa_1 \in \mathbf{m}^s\wedge^r\HS{\FF}(K,T)\}, $ with equality if and only if $\boldsymbol{\kappa}$ is primitive. \end{enumerate} \end{thm} \begin{proof} By Theorem \ref{mthm3'}, there is a (unique) $\boldsymbol{\epsilon} \in \mathbf{SS}_r(T)$ such that $\Pi(\boldsymbol{\epsilon}) = \boldsymbol{\kappa}$. By Proposition \ref{prekork}, all the invariants of Definition \ref{allderk} attached to $\boldsymbol{\kappa}$ are equal to the corresponding invariants of $\boldsymbol{\epsilon}$. Now the theorem follows from Theorem \ref{korank}.
\end{proof} \section{Proof of Theorem \ref{mthm0}} \label{m0pf} Keep the notation of \S\ref{stubks}, so $R$ is principal and artinian of length $k$, and hypotheses \ref{h.1} through \ref{h.7} hold. In particular we assume that $r = \chi(T)$, the core rank of $T$. \begin{lem} \label{9.1} The sheaf $\mathcal{S}'$ is locally cyclic. \end{lem} \begin{proof} By Corollary \ref{threefive}(iii), for every $\mathfrak{n}\in\mathcal{N}$ the stalk $\mathcal{S}'(\mathfrak{n})$ is a cyclic $R$-module. By Definition \ref{stubsheaf} and Proposition \ref{eqim} the vertex-to-edge maps $\shmap{\n}{e}$ are all surjective, and so the edge stalks $\mathcal{S}'(e)$ are all cyclic as well. \end{proof} \begin{lem} \label{9.2} Suppose $\mathfrak{n}$ is a core vertex, and $\mathfrak{q} \in \mathcal{P}$ does not divide $\mathfrak{n}$. Let $e$ denote the edge joining $\mathfrak{n}$ and $\mathfrak{n}\mathfrak{q}$. Then the following are equivalent: \begin{enumerate} \item $\mathrm{loc}_\mathfrak{q} : \HS{\FFf{\n}}(K,T)[\mathbf{m}] \to \HS{\f}(K_\mathfrak{q},T)$ is nonzero, \item $\mathfrak{n}\mathfrak{q}$ is a core vertex and both maps $\shmap{\n}{e} : \mathcal{S}(\mathfrak{n}) \to \mathcal{S}(e)$, $\shmap{\n\l}{e} : \mathcal{S}(\mathfrak{n}\mathfrak{q}) \to \mathcal{S}(e)$ are isomorphisms. \end{enumerate} \end{lem} \begin{proof} Suppose that (i) holds. Since $I_\l = 0$ by \ref{h.7}, Lemma \ref{isofunct}(ii) shows that $\HS{\f}(K_\mathfrak{q},T)$ is free of rank one over $R$. Since $\mathfrak{n}$ is a core vertex, $\HS{\FFf{\n}}(K,T)$ is a free $R$-module of rank $r$. In particular $\HS{\FFf{\n}}(K,T)[\mathbf{m}] = \mathbf{m}^{k-1}\HS{\FFf{\n}}(K,T)$, and it follows that the localization map $\mathrm{loc}_\mathfrak{q} : \HS{\FFf{\n}}(K,T) \to \HS{\f}(K_\mathfrak{q},T)$ is surjective. By Proposition \ref{wedgemap}, it follows that $\shmap{\n}{e}$ is an isomorphism.
Further, since $\mathrm{loc}_\mathfrak{q} : \HS{\FFf{\n}}(K,T) \to \HS{\f}(K_\mathfrak{q},T)$ is surjective, and $\HS{\tr}(K_\mathfrak{q},T)$ is free of rank one over $R$, and $\HS{\FFf{\n}^*}(K,T^*) = 0$, \cite[Lemma 4.1.6]{kolysys} shows that $\mathfrak{n}\mathfrak{q}$ is a core vertex and $\mathrm{loc}_\mathfrak{q} : \HS{\FFf{\n\l}}(K,T) \to \HS{\tr}(K_\mathfrak{q},T)$ is surjective. Now Proposition \ref{wedgemap} shows that $\shmap{\n\l}{e}$ is an isomorphism. Thus (ii) holds. Conversely, if $\shmap{\n}{e}$ is an isomorphism then Proposition \ref{wedgemap} shows that the map $\mathrm{loc}_\mathfrak{q} : \HS{\FFf{\n}}(K,T) \to \HS{\f}(K_\mathfrak{q},T)$ is surjective, and since $\HS{\f}(K_\mathfrak{q},T)$ is free of rank one over $R$ it follows that $\mathrm{loc}_\mathfrak{q}$ is not identically zero on $\HS{\FFf{\n}}(K,T)[\mathbf{m}]$. Thus (ii) implies (i). \end{proof} Recall that ${\bar{T}} := T/\mathbf{m} T$, and let $\bar\lambda(\mathfrak{n}) := \dim_\Bbbk\HS{\FFf{\n}^*}(K,{\bar{T}}^*)$. By Proposition \ref{stprops}(ii), we have $\lambda(\mathfrak{n}) = 0$ if and only if $\bar\lambda(\mathfrak{n}) = 0$. \begin{prop} \label{9.4} Suppose $\mathfrak{n} \in \mathcal{N}$ and $\bar\lambda(\mathfrak{n}) > 0$. Then there is a $\mathfrak{q} \in \mathcal{P}$ prime to $\mathfrak{n}$ such that $\bar\lambda(\mathfrak{n}\mathfrak{q}) < \bar\lambda(\mathfrak{n})$ and $\shmap{\n}{e} : \mathcal{S}'(\mathfrak{n}) \to \mathcal{S}'(e)$ is an isomorphism, where $e$ is the edge joining $\mathfrak{n}$ and $\mathfrak{n}\mathfrak{q}$. \end{prop} \begin{proof} By \cite[Proposition 3.6.1]{kolysys} we can use the Cebotarev theorem to choose a prime $\mathfrak{q} \in \mathcal{P}$ such that the localization maps $$ \mathbf{m}^{k-1}\HS{\FFf{\mathfrak{n}}}(K,T) \to \HS{\f}(K_\mathfrak{q},T), \quad \HS{\FFf{\mathfrak{n}}^*}(K,T^*)[\mathbf{m}] \to \HS{\f}(K_\mathfrak{q},T^*) $$ are both nonzero.
(Note that $\mathbf{m}^{k-1}\HS{\FFf{\n}}(K,T) \ne 0$ by Corollary \ref{threefive}(iii).) Then by Poitou-Tate global duality (see for example \cite[Lemma 4.1.7(iv)]{kolysys}), we have $\bar\lambda(\mathfrak{n}\mathfrak{q}) < \bar\lambda(\mathfrak{n})$. Further, since $\HS{\f}(K_\mathfrak{q},T)$ is free of rank one over $R$, our choice of $\mathfrak{q}$ guarantees that the localization map $\HS{\FFf{\mathfrak{n}}}(K,T) \to \HS{\f}(K_\mathfrak{q},T)$ is surjective, so by Proposition \ref{wedgemap}(ii) $$ \widehat{\mathrm{loc}_\mathfrak{q}} : \wedge^r \HS{\FFf{\mathfrak{n}}}(K,T) \longrightarrow \HS{\f}(K_\mathfrak{q},T) \otimes (\wedge^{r-1} \HS{\mathcal{F}_\mathfrak{q}(\mathfrak{n})}(K,T)) $$ is surjective as well. Since $\mathcal{S}'(e)$ is defined to be the image of $$ \mathcal{S}'(\mathfrak{n}) := \mathbf{m}^{\lambda(\mathfrak{n})}(\wedge^r \HS{\FFf{\mathfrak{n}}}(K,T)) \otimes G_\n $$ under the upper maps of \eqref{compdiag}, we deduce that $$ \mathcal{S}'(e) = \mathbf{m}^{\lambda(\mathfrak{n})} \HS{\tr}(K_\mathfrak{q},T) \otimes (\wedge^{r-1} \HS{\mathcal{F}_\mathfrak{q}(\mathfrak{n})}(K,T)) \otimes G_{\n\l}. $$ Thus $$ \mathrm{length}_R(\mathcal{S}'(e)) \ge k - \lambda(\mathfrak{n}) = \mathrm{length}_R(\mathcal{S}'(\mathfrak{n})), $$ where the equality holds by Corollary \ref{threefive}(iii). Since the map $\mathcal{S}'(\mathfrak{n}) \to \mathcal{S}'(e)$ is surjective by definition, it must be an isomorphism. \end{proof} \begin{thm} \label{14.4} Suppose $\mathfrak{n},\mathfrak{n}'$ are core vertices. Then there is a path $$ \xymatrix{ \mathfrak{n}=\mathfrak{n}_0 \ar@{-}^-{e_1}[r] & \mathfrak{n}_1 \ar@{-}^-{e_2}[r] & \cdots \ar@{-}^-{e_t}[r] & \mathfrak{n}_t=\mathfrak{n}' } $$ in $\mathcal{X}$ such that every $\mathfrak{n}_i$ is a core vertex and all of the maps $\psi_{\mathfrak{n}_i}^{e_{i+1}}$ and $\psi_{\mathfrak{n}_i}^{e_{i}}$ are isomorphisms. \end{thm} \begin{proof} When $\chi(T)=1$, this is \cite[Theorem 4.3.12]{kolysys}. The general case can be proved in the same way, but instead we will prove it here by induction on $r := \chi(T)$.
Denote by $\bar\mathcal{F}$ the induced Selmer structure on ${\bar{T}}$. By Proposition \ref{stprops} and the definition of core vertices we see that the Selmer sheaves $\mathcal{S}_{(T,\mathcal{F},\mathcal{P})}$ and $\mathcal{S}_{({\bar{T}},\bar\mathcal{F},\mathcal{P})}$ have the same core vertices and the same core rank $r$ (see also \cite[Theorem 4.1.3]{kolysys}). Since $r > 0$, we can fix nonzero classes $c \in \HS{\bar\mathcal{F}(\mathfrak{n})}(K,{\bar{T}})$ and $c' \in \HS{\bar\mathcal{F}(\mathfrak{n}')}(K,{\bar{T}})$. By \cite[Proposition 3.6.1]{kolysys}, we can use the Cebotarev theorem to choose $\mathfrak{q} \in \mathcal{P}$, not dividing $\mathfrak{n}\mathfrak{n}'$, such that the localizations $c_\mathfrak{q}$ and $c'_\mathfrak{q}$ are both nonzero. Note that the Selmer triple $({\bar{T}},\bar\mathcal{F}_\mathfrak{q},\mathcal{P}-\{\mathfrak{q}\})$ also satisfies hypotheses \ref{h.1} through \ref{h.6} (the only one of those conditions that depends on the Selmer structure is \ref{h.5}, and \ref{h.5} is vacuous when we work over $R/\mathbf{m}$). By our choice of $\mathfrak{q}$, both localization maps $$ \mathrm{loc}_\mathfrak{q} : \HS{\bar\mathcal{F}(\mathfrak{n})}(K,{\bar{T}}) \to \HS{\f}(K_\mathfrak{q},{\bar{T}}), \qquad \mathrm{loc}_\mathfrak{q} : \HS{\bar\mathcal{F}(\mathfrak{n}')}(K,{\bar{T}}) \to \HS{\f}(K_\mathfrak{q},{\bar{T}}) $$ are nonzero, and $\HS{\f}(K_\mathfrak{q},{\bar{T}})$ is a one-dimensional $R/\mathbf{m}$-vector space, so both maps are surjective.
Since $\mathfrak{n}$ and $\mathfrak{n}'$ are core vertices for $({\bar{T}},\bar\mathcal{F})$, it follows that $$ \dim_{R/\mathbf{m}}\HS{\bar\mathcal{F}_\mathfrak{q}(\mathfrak{n})}(K,{\bar{T}}) = \dim_{R/\mathbf{m}}\HS{\bar\mathcal{F}_\mathfrak{q}(\mathfrak{n}')}(K,{\bar{T}}) = r-1 $$ and (by Poitou-Tate global duality, see for example \cite[Theorem 2.3.4]{kolysys}) that $\HS{\bar\mathcal{F}_\mathfrak{q}(\mathfrak{n})^*}(K,{\bar{T}}^*) = \HS{\bar\mathcal{F}_\mathfrak{q}(\mathfrak{n}')^*}(K,{\bar{T}}^*) = 0.$ In particular we deduce that $\chi({\bar{T}},\bar\mathcal{F}_\mathfrak{q}) = r-1$, and that $\mathfrak{n}, \mathfrak{n}'$ are core vertices for the sheaf $\mathcal{S}_{{\bar{T}},\bar\mathcal{F}_\mathfrak{q}}$. By our induction hypothesis we conclude that there is a path $\mathfrak{n}=\mathfrak{n}_0,\mathfrak{n}_1,\ldots,\mathfrak{n}_t=\mathfrak{n}'$ from $\mathfrak{n}$ to $\mathfrak{n}'$ in $\mathcal{X}$ such that every $\mathfrak{n}_i$ is prime to $\mathfrak{q}$, every $\mathfrak{n}_i$ is a core vertex for $\mathcal{S}_{{\bar{T}},\bar\mathcal{F}_\mathfrak{q}}$, and every vertex-to-edge map (for $\mathcal{S}_{{\bar{T}},\bar\mathcal{F}_\mathfrak{q}}$) along the path is an isomorphism. We will show that every $\mathfrak{n}_i$ is a core vertex for $\mathcal{S}_{T,\mathcal{F}}$, and every vertex-to-edge map (for $\mathcal{S}_{T,\mathcal{F}}$) along the path is an isomorphism. This will prove the theorem. Fix $i$, $0 \le i \le t$. The exact sequence $$ 0 \longrightarrow \HS{\bar\mathcal{F}_\mathfrak{q}(\mathfrak{n}_i)}(K,{\bar{T}}) \longrightarrow \HS{\bar\mathcal{F}(\mathfrak{n}_i)}(K,{\bar{T}}) \map{\mathrm{loc}_\mathfrak{q}} \HS{\f}(K_\mathfrak{q},{\bar{T}}) $$ shows that $\dim_{R/\mathbf{m}}\HS{\bar\mathcal{F}(\mathfrak{n}_i)}(K,{\bar{T}}) \le r$.
Then Corollary \ref{threefive}(i) (applied to ${\bar{T}}$, $\bar\mathcal{F}$, and $R/\mathbf{m}$) shows that $\mathfrak{n}_i$ is a core vertex of $\mathcal{S}_{{\bar{T}},\bar\mathcal{F}}$, and hence is a core vertex of $\mathcal{S}_{T,\mathcal{F}}$. Further, suppose $\mathfrak{l}$ is a prime such that $\mathfrak{n}_{i \pm 1} = \mathfrak{n}_i\mathfrak{l}$, and let $e$ be the edge joining those two vertices. By assumption, the maps $\mathcal{S}_{{\bar{T}},\bar\mathcal{F}_\mathfrak{q}}(\mathfrak{n}_i) \to \mathcal{S}_{{\bar{T}},\bar\mathcal{F}_\mathfrak{q}}(e)$ and $\mathcal{S}_{{\bar{T}},\bar\mathcal{F}_\mathfrak{q}}(\mathfrak{n}_i\mathfrak{l}) \to \mathcal{S}_{{\bar{T}},\bar\mathcal{F}_\mathfrak{q}}(e)$ are isomorphisms, so by Lemma \ref{9.2} the localization map $\HS{\bar\mathcal{F}_\mathfrak{q}(\mathfrak{n}_i)}(K,{\bar{T}}) \to \HS{\f}(K_\mathfrak{l},{\bar{T}})$ is nonzero. But $$ \HS{\bar\mathcal{F}_\mathfrak{q}(\mathfrak{n}_i)}(K,{\bar{T}}) \subset \HS{\bar\mathcal{F}(\mathfrak{n}_i)}(K,{\bar{T}}) = \HS{\mathcal{F}(\mathfrak{n}_i)}(K,T)[\mathbf{m}], $$ so $\mathrm{loc}_\mathfrak{l} : \HS{\mathcal{F}(\mathfrak{n}_i)}(K,T)[\mathbf{m}] \to \HS{\f}(K_\mathfrak{l},T)$ is nonzero, so by Lemma \ref{9.2} both of the maps $\psi_{\mathfrak{n}_i}^e$ and $\psi_{\mathfrak{n}_{i\pm 1}}^e$ are isomorphisms. This completes the proof. \end{proof} \begin{cor} \label{9.5} There are core vertices. More precisely: \begin{enumerate} \item for every $\mathfrak{n}\in\mathcal{N}$ there is an $\mathfrak{n}'\in\mathcal{N}$ prime to $\mathfrak{n}$, with $\nu(\mathfrak{n}') = \bar\lambda(\mathfrak{n})$, such that $\mathfrak{n}\mathfrak{n}'$ is a core vertex, \item $\min\{\nu(\mathfrak{n}) : \text{$\mathfrak{n}$ is a core vertex}\} = \dim_{R/\mathbf{m}}\HS{\FF^*}(K,T^*)[\mathbf{m}].$ \end{enumerate} \end{cor} \begin{proof} Choose $\mathfrak{n} \in \mathcal{N}$.
For every $\mathfrak{n}' \in\mathcal{N}$ prime to $\mathfrak{n}$, global duality (see for example \cite[Lemma 4.1.7(i)]{kolysys}) shows that \begin{equation} \label{nts} \bar\lambda(\mathfrak{n}\mathfrak{n}') \ge \bar\lambda(\mathfrak{n})-\nu(\mathfrak{n}'). \end{equation} Applying Proposition \ref{9.4}, we can construct $\mathfrak{n}=\mathfrak{n}_0, \mathfrak{n}_1, \mathfrak{n}_2, \ldots \in \mathcal{N}$ inductively, with $\mathfrak{n}_{i+1} = \mathfrak{n}_i\mathfrak{q}_i$ for some prime $\mathfrak{q}_i \in \mathcal{P}$ and $\bar\lambda(\mathfrak{n}_{i+1}) < \bar\lambda(\mathfrak{n}_i)$, until we reach $\mathfrak{n}_d \in\mathcal{N}$ with $\bar\lambda(\mathfrak{n}_d) = 0$. Then $\HS{\mathcal{F}(\mathfrak{n}_d)^*}(K,T^*)[\mathbf{m}] = \HS{\mathcal{F}(\mathfrak{n}_d)^*}(K,{\bar{T}}^*) = 0$, so $\mathfrak{n}_d$ is a core vertex. Setting $\mathfrak{n}' := \mathfrak{n}_d/\mathfrak{n}$ we have $$ \nu(\mathfrak{n}') = d \le \bar\lambda(\mathfrak{n}) = \dim_{R/\mathbf{m}}\HS{\FFf{\n}^*}(K,T^*)[\mathbf{m}]. $$ By \eqref{nts}, since $\bar\lambda(\mathfrak{n}\mathfrak{n}') = 0$ we have $\nu(\mathfrak{n}') \ge \bar\lambda(\mathfrak{n})$, and so $\nu(\mathfrak{n}') = \bar\lambda(\mathfrak{n})$. This proves (i), and applying (i) with $\mathfrak{n} = 1$ and \eqref{nts} proves (ii). \end{proof} \begin{proof}[Proof of Theorem \ref{mthm0}] Theorem \ref{mthm0}(i) is Corollary \ref{9.5}, and Theorem \ref{mthm0}(ii) is Theorem \ref{14.4}. Lemma \ref{9.1} says that $\mathcal{S}'$ is locally cyclic. To complete the proof of Theorem \ref{mthm0} we need only show that every core vertex is a hub of $\mathcal{S}'$. Fix a core vertex $\mathfrak{n}_0$, and let $\mathfrak{n}\in\mathcal{N}$ be any other vertex. We will show by induction on $\bar\lambda(\mathfrak{n})$ that there is an $\mathcal{S}'$-surjective path from $\mathfrak{n}_0$ to $\mathfrak{n}$. If $\bar\lambda(\mathfrak{n}) = 0$, then $\mathfrak{n}$ is also a core vertex and the desired surjective path exists by Theorem \ref{14.4}.
Now suppose $\bar\lambda(\mathfrak{n}) > 0$. Use Proposition \ref{9.4} to find $\mathfrak{q} \in \mathcal{P}$ not dividing $\mathfrak{n}$ such that $\bar\lambda(\mathfrak{n}\mathfrak{q}) < \bar\lambda(\mathfrak{n})$ and $\shmap{\n}{e} : \mathcal{S}'(\mathfrak{n}) \to \mathcal{S}'(e)$ is an isomorphism, where $e$ is the edge joining $\mathfrak{n}$ and $\mathfrak{n}\mathfrak{q}$. By induction there is an $\mathcal{S}'$-surjective path from $\mathfrak{n}_0$ to $\mathfrak{n}\mathfrak{q}$, and if we adjoin to that path the edge $e$, we get an $\mathcal{S}'$-surjective path from $\mathfrak{n}_0$ to $\mathfrak{n}$. \end{proof} \appendix \section{Some exterior algebra} \label{exalg} Suppose for this appendix that $R$ is a local principal ideal ring with maximal ideal $\mathbf{m}$. \begin{prop} \label{wedgemap} Suppose $0 \to N \to M \map{\psi} C$ is an exact sequence of finitely-generated $R$-modules, with $C$ free of rank one, and $r \ge 1$. Then there is a unique map $$ \hat\psi : \wedge^r M \longrightarrow C \otimes \wedge^{r-1}N $$ such that \begin{enumerate} \item the composition $\wedge^r M \map{\hat\psi} C \otimes \wedge^{r-1}N \to C \otimes \wedge^{r-1}M$ is given by $$ m_1\wedge\cdots\wedge m_r \mapsto \sum_{i=1}^r (-1)^{i+1} \psi(m_i) \otimes (m_1\wedge\cdots\wedge m_{i-1}\wedge m_{i+1}\cdots\wedge m_r), $$ \item the image of $\hat\psi$ is the image of $\psi(M) \otimes \wedge^{r-1}N \to C \otimes \wedge^{r-1}N$. \end{enumerate} If $M$ is free of rank $r$ over $R$, then $\hat\psi$ is an isomorphism if and only if $\psi$ is surjective. \end{prop} \begin{proof} Since $R$ is principal, we can ``diagonalize'' $\psi$ and write $M = Rm \oplus N_0$ and $N = Im \oplus N_0$ where $N_0 \subset N$, $m \in M$ is such that $\psi(m)$ generates $\psi(M)$, and $I$ is an ideal of $R$. In particular we have $0 = \psi(N) = I\psi(M)$. The formula of (i) gives a well-defined $R$-module homomorphism $\hat\psi_0 : \wedge^r M \to \psi(M) \otimes \wedge^{r-1}M$. 
Consider the diagram $$ \xymatrix{ \wedge^r M \ar^-{\hat\psi_0}[r] & \psi(M) \otimes \wedge^{r-1}M \ar[r] &C \otimes \wedge^{r-1}M \\ & \psi(M) \otimes \wedge^{r-1}N \ar^-{\eta_2}[r]\ar_{\eta_1}[u] & C \otimes \wedge^{r-1}N \ar[u] } $$ with maps induced by the inclusions $\psi(M) \hookrightarrow C$ and $N \hookrightarrow M$. We will show that $\mathrm{image}(\hat\psi_0) \subset \mathrm{image}(\eta_1)$ and $\ker(\eta_1) \subset \ker(\eta_2)$. Then $\hat\psi := \eta_2 \circ \eta_1^{-1} \circ \hat\psi_0$ is well defined and satisfies (i) and (ii). Since $M = Rm \oplus N_0$, we have that the image $\hat\psi_0(\wedge^r M)$ is generated by monomials $\psi(m) \otimes n_1 \wedge \cdots \wedge n_{r-1}$ with $n_i \in N_0$, so $\mathrm{image}(\hat\psi_0) \subset \mathrm{image}(\eta_1)$. We also have \begin{align*} \wedge^{r-1}N &= (Im \otimes \wedge^{r-2}N_0) \oplus \wedge^{r-1}N_0, \\ \wedge^{r-1}M &= (Rm \otimes \wedge^{r-2}N_0) \oplus \wedge^{r-1}N_0. \end{align*} Therefore, since $I\psi(M) = 0$, \begin{multline*} \ker(\eta_1) = \ker(\psi(M) \otimes Im \otimes \wedge^{r-2}N_0 \to \psi(M) \otimes Rm \otimes \wedge^{r-2}N_0) \\ \hskip2.75in= \psi(M) \otimes Im \otimes \wedge^{r-2}N_0. \end{multline*} We further have \begin{equation} \label{(a.1)} \eta_2(\psi(M) \otimes Im \otimes \wedge^{r-2}N_0) = 0. \end{equation} Thus $\ker(\eta_1) \subset \ker(\eta_2)$, so $\hat{\psi}$ is well-defined and has properties (i) and (ii). Uniqueness follows from the fact that by \eqref{(a.1)} $$ \eta_2(\psi(M) \otimes \wedge^{r-1}N) = \eta_2(\psi(M) \otimes \wedge^{r-1}N_0) $$ injects into $C \otimes \wedge^{r-1}M$. The final assertion follows easily from the definition of $\hat\psi$ above. \end{proof} If $M$ is an $R$-module, let $\dual{M} := \mathrm{Hom}(M,R)$.
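\begin{rem} For a concrete instance of Proposition \ref{wedgemap}, take $R = \mathbf{Z}/4\mathbf{Z}$, $M = Re_1 \oplus Re_2$, $C = R$, $r = 2$, and $\psi(ae_1+be_2) := a$, so $N = \ker(\psi) = Re_2$. The formula of (i) gives $$ \hat\psi(e_1 \wedge e_2) = \psi(e_1) \otimes e_2 - \psi(e_2) \otimes e_1 = 1 \otimes e_2, $$ so $\hat\psi : \wedge^2 M \to C \otimes \wedge^{1}N$ is an isomorphism, as the final assertion of the proposition predicts since $\psi$ is surjective. If instead $\psi(ae_1+be_2) := 2a$, then $N = 2Re_1 \oplus Re_2$, $\hat\psi(e_1 \wedge e_2) = 2 \otimes e_2$, and the image of $\hat\psi$ is the image of $\psi(M) \otimes \wedge^{1}N = 2R \otimes N$ in $C \otimes \wedge^{1}N$, as in (ii). \end{rem}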
\begin{prop} \label{bhprop} Suppose $R$ is artinian and there is a cartesian diagram of $R$-modules $$ \xymatrix@R=15pt{ M_1 \ar@{^(->}[r]\ar[d] & M_2 \ar^{h}[d] \\ C_1 \ar@{^(->}[r] & C_2 } $$ where $C_1$ and $C_2$ are free $R$-modules of finite rank, and the horizontal maps are injective. \begin{enumerate} \item Suppose $r \ge 0$ and $s_i = \mathrm{rank}_R(C_i)$. There is a canonical $R$-module homomorphism $$ \wedge^{r+s_2}M_2 \otimes \wedge^{s_2} \dual{C}_2 \longrightarrow \wedge^{r+s_1}M_1 \otimes \wedge^{s_1} \dual{C}_1 $$ defined as follows. If $m \in \wedge^{r+s_2}M_2$, $\psi_1,\ldots, \psi_{s_2}$ is a basis of $\dual{C}_2$ such that $\psi_{s_1+1},\ldots, \psi_{s_2}$ is a basis of $\dual{(C_2/C_1)}$, and $h_i = \psi_i \circ h$, then $$ m \otimes (\psi_{1}\wedge\cdots\wedge\psi_{s_2}) \mapsto (\hat{h}_{s_1+1} \circ \cdots \circ \hat{h}_{s_2})(m) \otimes (\psi_{1}\wedge\cdots\wedge\psi_{s_1}) $$ with $\hat{h}_i$ as in Proposition \ref{wedgemap}. This is independent of the choice of the $\psi_i$. \item If $M_2$ is free of rank $r+s_2$ over $R$, then the image of the map of (i) is $$\mathbf{m}^{\mathrm{length}(M_1) - (r+s_1)\mathrm{length}(R)}\wedge^{r+s_1}M_1 \otimes \wedge^{s_1} \dual{C}_1.$$ \item If $$ \xymatrix@R=15pt{ M_2 \ar@{^(->}[r]\ar[d] & M_3 \ar[d] \\ C_2 \ar@{^(->}[r] & C_3 } $$ is another such cartesian square, then the triangle $$ \xymatrix@C=8pt{ \wedge^{r+s_3}M_3 \otimes \wedge^{s_3} \dual{C}_3 \ar[rr]\ar[dr] && \wedge^{r+s_1}M_1 \otimes \wedge^{s_1} \dual{C}_1 \\ & \wedge^{r+s_2}M_2 \otimes \wedge^{s_2} \dual{C}_2 \ar[ur] } $$ induced by the maps of (i) commutes. \item Suppose there is an exact sequence $0 \to M_1 \to M_2 \to C$, where $C$ is free of rank $s$ over $R$. Then for every $r \ge 0$, the map of (i) (with $C_1 = 0$ and $C_2 = C$) is a canonical map $\wedge^{r+s}M_2 \otimes \wedge^s\dual{C} \to \wedge^r M_1$. 
\end{enumerate} \end{prop} \begin{proof} Since the square is cartesian, and by our choice of the $\psi_i$, we have \begin{equation} \label{(14)} \ker(\oplus_{i > s_1} h_i) = h^{-1}(C_1) = M_1. \end{equation} Applying Proposition \ref{wedgemap} repeatedly shows that the map defined in (i) takes values in $\wedge^{r+s_1}M_1 \otimes \wedge^{s_1} \dual{C}_1$. It is straightforward to check that this map is independent of the choice of the $\psi_i$. This proves (i), and (iv) is just a special case of (i). Suppose now that $M_2$ is free of rank $r+s_2$, and let $s := s_2-s_1$. Choose an $R$-basis $\eta_1,\ldots,\eta_{r+s_2}$ of $\dual{M_2}$ such that the span of $\eta_{1},\ldots,\eta_{s}$ contains $h_{s_1+1},\ldots,h_{s_2}$, i.e., there is an $s \times s$ matrix $A = [a_{ij}]$ with $a_{ij} \in R$ such that $h_{s_1+j} = \sum_{i}a_{ij}\eta_i$. Let $N := \cap_{i=1}^{s} \ker(\eta_i)$. Then $N$ is free over $R$ of rank $r+s_1$, and we have a split exact sequence of free modules $$ 0 \longrightarrow N \longrightarrow M_2 \map{\oplus_{i\le s} \eta_i} R^{s} \longrightarrow 0. $$ It follows that the composition $\hat\eta_{1}\circ\cdots\circ\hat\eta_{s} : \wedge^{r+s_2}M_2 \to \wedge^{r+s_1}N$ of maps given by Proposition \ref{wedgemap} is an isomorphism. We also have $$ \hat{h}_{s_1+1} \circ \cdots \circ \hat{h}_{s_2} = \det(A) \; \hat{\eta}_1 \circ \cdots \circ \hat{\eta}_s, $$ and $N \subset M_1$ by \eqref{(14)}. Since $N$ is free, there is a noncanonical splitting $$ M_1 \cong N \oplus M_1/N, $$ so the map $$ \mathbf{m}^{\mathrm{length}(M_1/N)}\wedge^{r+s_1}N \longrightarrow \mathbf{m}^{\mathrm{length}(M_1/N)}\wedge^{r+s_1}M_1 $$ induced by the inclusion $N \hookrightarrow M_1$ is surjective. Finally, $$ \det(A)R = \mathbf{m}^{\mathrm{length}(M_1/N)} = \mathbf{m}^{\mathrm{length}(M_1)-(r+s_1)\mathrm{length}(R)}, $$ and combining these facts proves (ii). Assertion (iii) follows from the independence of the choice of the $\psi_i$. 
Choose a basis $\psi_1,\ldots, \psi_{s_3}$ of $\dual{C}_3$ such that $\psi_{s_1+1},\ldots, \psi_{s_3}$ is a basis of $\dual{(C_3/C_1)}$ and $\psi_{s_2+1},\ldots, \psi_{s_3}$ is a basis of $\dual{(C_3/C_2)}$. Then $\psi_{s_1+1}|_{C_2},\ldots, \psi_{s_2}|_{C_2}$ is a basis of $\dual{(C_2/C_1)}$, and (iii) just reduces to the statement that $$ (\hat{\psi}_{s_1+1} \circ \cdots \circ \hat{\psi}_{s_2}) \circ (\hat{\psi}_{s_2+1} \circ \cdots \circ \hat{\psi}_{s_3}) = (\hat{\psi}_{s_1+1} \circ \cdots \circ \hat{\psi}_{s_3}). $$ \end{proof} \subsection*{\em Erratum to \cite{kolysys}} We thank Cl\'ement Gomez for pointing out an error in the statement of \cite[Lemma 2.1.4]{kolysys}. The correct statement (which is all that was used elsewhere in \cite{kolysys}) should be: \begin{mlem} If $(T/\mathfrak{m} T)^{G_\mathbf{Q}} = 0$ then $(T/I T)^{G_\mathbf{Q}} = 0$ for every ideal $I$ of $R$. \end{mlem} \end{document}
Journal of Cheminformatics Proteochemometric modelling coupled to in silico target prediction: an integrated approach for the simultaneous prediction of polypharmacology and binding affinity/potency of small molecules Shardul Paricharak1,2, Isidro Cortés-Ciriano3, Adriaan P IJzerman2, Thérèse E Malliavin3 & Andreas Bender1 Journal of Cheminformatics volume 7, Article number: 15 (2015) The rampant increase of public bioactivity databases has fostered the development of computational chemogenomics methodologies to evaluate potential ligand-target interactions (polypharmacology) both in a qualitative and quantitative way. Bayesian target prediction algorithms predict the probability of an interaction between a compound and a panel of targets, thus assessing compound polypharmacology qualitatively, whereas structure-activity relationship techniques are able to provide quantitative bioactivity predictions. We propose an integrated drug discovery pipeline combining in silico target prediction and proteochemometric modelling (PCM) for the respective prediction of compound polypharmacology and potency/affinity. The proposed pipeline was evaluated on the retrospective discovery of Plasmodium falciparum DHFR inhibitors. The qualitative in silico target prediction model comprised 553,084 ligand-target associations (a total of 262,174 compounds), covering 3,481 protein targets and used protein domain annotations to extrapolate predictions across species. The prediction of bioactivities for plasmodial DHFR led to a recall value of 79% and a precision of 100%, where the latter high value arises from the structural similarity of plasmodial DHFR inhibitors and T. gondii DHFR inhibitors in the training set. Quantitative PCM models were then trained on a dataset comprising 20 eukaryotic, protozoan and bacterial DHFR sequences, and 1,505 distinct compounds (in total 3,099 data points). 
The most predictive PCM model exhibited $R^2_{0,test}$ and $RMSE_{test}$ values of 0.79 and 0.59 pIC50 units respectively, which was shown to outperform models based exclusively on compound ($R^2_{0,test}$/$RMSE_{test}$ = 0.63/0.78) and target information ($R^2_{0,test}$/$RMSE_{test}$ = 0.09/1.22), as well as inductive transfer of knowledge between targets, with respective $R^2_{0,test}$ and $RMSE_{test}$ values of 0.76 and 0.63 pIC50 units. Finally, both methods were integrated to predict the protein targets and the potency on plasmodial DHFR for the GSK TCAMS dataset, which comprises 13,533 compounds displaying strong anti-malarial activity. 534 of those compounds were identified as DHFR inhibitors by the target prediction algorithm, while the PCM algorithm identified 25 compounds, and 23 compounds (predicted pIC50 > 7) were identified by both methods. Overall, this integrated approach simultaneously provides target and potency/affinity predictions for small molecules. In recent years it has been demonstrated that drugs exert their therapeutic effect by modulating more than one target, in fact six on average [1]. Therefore, the early evaluation of the bioactivity profiles of lead compounds is essential for success in developing new drugs, although efficacy is sometimes attained by the inhibition of single targets, e.g. viral proteins. Similarly, understanding drug polypharmacology can help in anticipating drug adverse effects [2]. In parallel, the availability of public bioactivity databases has enabled the application of large-scale chemogenomics techniques to, among others, predict protein targets for small molecules, and to predict their affinity on therapeutically interesting targets [3]. These techniques capitalize on bioactivity data to infer relationships between the compounds, encoded with numerical descriptors, and their targets, which can be represented as labels in a classification model or explicitly encoded by e.g. 
protein or amino acid descriptors [4]. In silico target prediction algorithms assess potential compound polypharmacology through the computational evaluation of the (functionally unrelated) targets modulated by a given compound, or its selectivity to species-specific targets, as they predict the probability of interaction of that compound with a panel of targets [5]. Initially, target prediction models were developed using Laplacian-modified Naïve Bayesian classifiers [6] and the Winnow algorithm [7]. Later, Keiser et al. [8] developed a model which related biological targets based on ligand similarities and ranked the significance of the resulting similarity scores using the Similarity Ensemble Approach (SEA), followed by Wale and Karypis [9] who applied SVM and ranking perceptron algorithms to rank targets for a given compound. More recently, Koutsoukas et al. [10] compared the performance of both the Naïve Bayesian and Parzen-Rosenblatt Window classifiers, concluding that the overall performance of both methods is comparable though differences were found for certain target classes. The ligand-target prediction methods described above generally predict the likelihood of interaction with a target, and they do not predict compound affinity or potency (e.g. Ki or IC50). On the other hand, quantitative bioactivity prediction techniques, e.g. proteochemometric modelling (PCM) [3], predict the potency or affinity for compound-target pairs, normally in the form of pIC50 or pKi values. PCM combines information from compounds and related targets, e.g. orthologs, in a single machine learning model [3,11], which enables the simultaneous modelling of chemical and biological information, and thus the prediction of compound affinity and selectivity across a panel of targets. 
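The PCM idea described above can be illustrated with a minimal sketch: compound descriptors and target (e.g. binding-site) descriptors are concatenated into one feature vector per compound-target pair, and a single regressor is fit over the joint space. Everything below is synthetic, including the descriptor sizes; a plain least-squares fit stands in for the GBM/RF/SVM/GP models actually used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 120 compound-target pairs,
# 16 compound descriptors and 9 binding-site descriptors per pair.
compound_desc = rng.normal(size=(120, 16))
target_desc = rng.normal(size=(120, 9))
pic50 = rng.uniform(4, 9, size=120)  # pIC50 labels

# PCM: one model trained on the joint compound x target descriptor space.
X = np.hstack([compound_desc, target_desc, np.ones((120, 1))])  # bias column
coef, *_ = np.linalg.lstsq(X, pic50, rcond=None)

# The same model predicts a new compound on any target in the panel
# simply by swapping in that target's descriptor block.
new_pair = np.hstack([rng.normal(size=16), target_desc[0], [1.0]])
pred = new_pair @ coef
print(round(float(pred), 2))
```

This is what distinguishes PCM from per-target QSAR: selectivity across the panel falls out of one model rather than twenty separate ones.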
Nonetheless, the effects of a compound at the cellular or the organism level are poorly understood in this case, as these methods cannot account for the interactions of a compound with other unrelated targets, which are not captured in the PCM model. Given the limitations of both purely qualitative and purely quantitative bioactivity modelling approaches, in the current work, we propose an integrated drug discovery approach, combining in silico target prediction for the qualitative large-scale evaluation of compound bioactivity, and PCM for the quantitative prediction of compound potency. The proposed approach was evaluated on the discovery of DHFR inhibitors for Plasmodium falciparum (P. falciparum), the causative agent of the most dangerous form of malaria [12]. Whilst there are multiple anti-malarial drugs on the market, resistance to anti-malarial drugs is on the rise [13,14], and there are only 21 compounds in clinical or pre-clinical trials [15]. In order to combat the lack of novel drugs for malaria, big pharmaceutical companies have generated a wealth of phenotypic data, namely the GlaxoSmithKline (GSK) TCAMS dataset, as well as the Novartis-GNF Malaria Box [16,17]. Both datasets contain phenotypic readouts, describing how effective the compounds present in the datasets are in inhibiting the growth of P. falciparum. Nonetheless, none of them contain annotations about the P. falciparum target(s) involved, making it a challenge to elucidate the mode of action (MoA) of the compounds in the dataset, and hence, making the dataset difficult to interpret. This renders these datasets a very suitable case study for the algorithms we are presenting in this work. In the context of malaria drug discovery, previous studies have applied machine learning algorithms to predict whether plasmodial proteins are secretory proteins based on their residue composition [18], and to predict the bioactivities of compounds against particular plasmodial targets [19,20]. 
These approaches, though, did not account for the polypharmacology of anti-malarial compounds. To overcome the limitations of these methods, we now integrate both in silico target prediction and PCM in a unified drug discovery approach. As illustrated in Figure 1, the target prediction algorithm used in this study, trained on 553,084 bioactivity data points spanning 3,481 targets, used a domain-based similarity metric between targets to extrapolate target predictions from one species to another. Non-plasmodial targets were then extrapolated to plasmodial targets. In addition, the PCM model was trained on a dataset composed of 20 eukaryotic, protozoan and bacterial DHFR sequences and 1,505 distinct DHFR inhibitors, for a total of 3,099 data points. To exploit the complementarity of the two prediction methods, in silico target prediction was used to predict MoA hypotheses for the anti-malarial compounds in the GSK TCAMS phenotypic dataset, whereas PCM was employed to quantify compound potency (pIC50). Schematic overview of in silico target prediction and domain-based extrapolation workflow. The conventional in silico target prediction approach [10] is extended in this study by using protein domain annotations to extrapolate from non-plasmodial target predictions to protein target predictions in P. falciparum. This concept is generally applicable across organisms, in particular to those for which little bioactivity data is currently available. Exploratory principal component analysis (PCA) of PCM and target prediction datasets A PCA was performed for compounds contained in the PCM dataset, as well as for those annotated on P. falciparum and T. gondii in the target prediction dataset. Spearman's rank correlation coefficient was calculated for all pairs of compound descriptors, based on both physicochemical descriptors and Morgan fingerprints, thus defining a square correlation matrix. 
The PCA analysis was performed on this matrix in order to avoid the direct application of PCA on binary descriptors, i.e. Morgan fingerprints. Visualization was performed using R and Vortex [21]. Target prediction Training dataset Bioactivity data were extracted from ChEMBL16 [22] according to the protocol described by Koutsoukas et al. [10]. The extracted data contained approximately 4 million bioactivities covering approximately 8,000 biomolecular targets, of which approximately 4,000 targets were proteins [22,23]. Compound-target pairs were selected according to the following criteria: (i) Ki, Kd, IC50 or EC50 bioactivity values equal to or lower than 10 μM, and (ii) targets annotated with a confidence score of 8 (homologous single protein target assigned) or 9 (direct single protein target assigned). Subsequently, ligand structures were processed with the ChemAxon standardizer version 5.12.0 [24], with the following options: "Remove fragment", "Neutralize", "Aromatize", "Clean2D", "Tautomerize" and "Remove explicit hydrogens". After standardization, the entries with ligands annotated against multiple targets were detected based on their canonical SMILES and removed using custom Perl scripts, resulting in a training set of 553,084 instances (262,174 compounds) covering 3,481 protein targets (Additional file 1: Supplementary Information SI 1). The bioactivity data of P. falciparum (1,513 instances – 1,379 compounds covering 41 protein targets) was omitted from this dataset for training purposes. InterPro [25] domain annotations were retrieved for all protein targets using the Uniprot database [26]. P. falciparum dataset The P. falciparum dataset was built using the same criteria as described above, resulting in a set comprising 41 P. falciparum targets and 1,379 compounds. In addition, all annotated and reviewed P. falciparum targets from Uniprot were downloaded, resulting in a total of 148 P. falciparum protein targets. 
Finally, InterPro domain annotations were retrieved for all protein targets using the Uniprot database (Additional file 2: Supplementary Information SI 2). GSK TCAMS dataset Approximately 2 million compounds present in GSK's screening collection were screened in vitro by GSK for inhibition of P. falciparum's intraerythrocytic cycle based on growth inhibition assays [17]. Briefly, assays were performed on both the reference laboratory strain 3D7, as well as on the multidrug resistant strain Dd2, where parasite growth was evaluated using LDH activity [17]. 19,451 compounds were identified as primary hits inhibiting the 3D7 strain growth by more than 80% at 2 μM concentration, of which 13,533 compounds displayed 80% or higher inhibition of parasite growth in at least 2 of the 3 assay runs in independent follow-up experiments. Hence, these 13,533 compounds were considered as confirmed inhibitors (confirmation rate > 70%) and used in the present study. A circular fingerprint implementation, Molprint2D [27,28], was used for encoding molecular structures, since this method has previously been shown to capture structural aspects related to bioactivity better than most other descriptors in comparative studies [29]. This descriptor is based on count vectors of heavy atoms present at a topological distance from each heavy atom of a molecule [28]. For the present study, the pybel implementation was used [30]. Target prediction algorithm A multiclass Laplacian-modified Naïve Bayesian classifier, as described by Nigsch et al. [7] and later implemented by Koutsoukas et al. [10], was trained on the bioactivity dataset to predict targets for novel compounds. 
For the query molecule $x$, consisting of a set of $n$ Molprint2D features $f_i$, the likelihood to be active against a protein target $\omega_\alpha$ was calculated using the following equation: $$ S_{\omega_\alpha}(x) = \sum_{i=1}^{n} \log\left(\frac{N_{i,\omega_\alpha}+1}{N_i \times p(\omega_\alpha)+1}\right) + \log\left(\frac{\prod_{i=1}^{n} p(f_i)}{p(x)}\right) $$ where $S_{\omega_\alpha}(x)$ is the logarithmic likelihood score (proportional to the likelihood of bioactivity), $N_{i,\omega_\alpha}$ is the total number of occurrences of feature $f_i$ in protein class $\omega_\alpha$ and $N_i$ is the total number of occurrences of feature $f_i$ in the entire training set. Furthermore, $p(\omega_\alpha)$ is the prior probability of protein class $\omega_\alpha$. The prior probability quantifies how likely a compound is active against protein target $\omega_\alpha$ in the absence of any feature information. It can be calculated as follows: $$ p(\omega_\alpha) = \frac{N_{\omega_\alpha}}{N} $$ where $N_{\omega_\alpha}$ is the number of instances (i.e. bioactivities) in class $\omega_\alpha$ and $N$ is the total number of instances. The predictive performance of this model was assessed in terms of average class-specific recall and precision. Only target classes with 20 or more data points in the P. falciparum dataset were considered as suitable for testing, resulting in a total of 16 target classes. Domain-based extrapolation to P. falciparum targets For each analyzed compound, the top n ranked predicted targets were compared to all 148 P. falciparum targets in terms of their InterPro domain composition. P. falciparum targets with an InterPro domain Tanimoto similarity above a variable cut-off were considered as predicted, but were not ranked. 
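The class-dependent part of the Laplacian-modified score above is straightforward to compute; the second logarithm does not depend on the class and can be dropped when the goal is only to rank targets. A minimal sketch with toy feature counts (the target names and counts are invented for illustration):

```python
import math
from collections import Counter

def nb_scores(query_features, feature_counts, class_sizes):
    """Class-dependent part of the Laplacian-modified Naive Bayes score.

    feature_counts[c] : Counter of feature occurrences in class c (N_{i,c})
    class_sizes[c]    : number of training instances in class c (N_c)
    """
    n_total = sum(class_sizes.values())  # N
    # N_i: occurrences of each feature over the whole training set
    total_counts = Counter()
    for counts in feature_counts.values():
        total_counts.update(counts)

    scores = {}
    for c, counts in feature_counts.items():
        prior = class_sizes[c] / n_total  # p(omega_alpha)
        scores[c] = sum(
            math.log((counts[f] + 1) / (total_counts[f] * prior + 1))
            for f in query_features
        )
    return scores

# Toy example: two targets, a query with two Molprint2D-like features.
feature_counts = {"DHFR": Counter(a=8, b=1), "HDAC": Counter(a=1, b=6)}
class_sizes = {"DHFR": 10, "HDAC": 10}
s = nb_scores(["a", "b"], feature_counts, class_sizes)
print(max(s, key=s.get))  # prints DHFR
```

The Laplacian correction (the +1 terms) keeps features unseen in a class from driving the score to minus infinity.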
The cut-off value varied between 0.5 and 1, where 1 means that only orthologous proteins are considered. The target prediction and domain-based extrapolation pipeline are illustrated in Figure 1. The domain extrapolation extends the target prediction approach [10,31] by using InterPro protein domain annotations to extrapolate from predicted non-plasmodial targets to P. falciparum targets. This is conceptually similar to a previously reported study for extrapolating bioactivities between species [32], and its application to M. tuberculosis [33]. The inclusion of plasmodial DHFR (CHEMBL1939) bioactivity data was expected to drastically improve the performance, and this was tested in the following way. A 2-fold cross validation (CV) was performed: the instances annotated on plasmodial DHFR were split into 2 half subsets, where one subset was added to the training set and the other half was used as a test set (and vice versa). Proteochemometric modelling IC50 values with a confidence score of 8 or 9 for 20 DHFR sequences (Table S3) were retrieved from ChEMBL16 [22] and this initial dataset comprised 5,827 data points. In the cases where a compound-target combination had more than one annotated bioactivity value, the set of bioactivities was replaced by its mean value. This procedure is robust, because the standard deviation of the differences was smaller than 0.1 pIC50 unit in more than 90% of the cases (Additional file 3: Figure S1). This resulted in a dataset including 3,099 distinct compound-target combinations. The matrix completeness of the dataset, calculated as the number of compound-target combinations present in the dataset over the total number of possible compound-target combinations, was 10.3%. Compounds included in the PCM dataset were not present in the target prediction dataset. 
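The InterPro domain comparison used for the cross-species extrapolation above amounts to a Tanimoto (Jaccard) similarity between domain sets. A minimal sketch, with made-up domain identifiers:

```python
def domain_tanimoto(domains_a, domains_b):
    """Tanimoto similarity between two sets of InterPro domain annotations."""
    a, b = set(domains_a), set(domains_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Illustrative (invented) domain IDs for a predicted non-plasmodial
# target and a candidate P. falciparum target.
pred_target = ["IPR_A", "IPR_B"]
pf_target = ["IPR_A", "IPR_B"]

sim = domain_tanimoto(pred_target, pf_target)
# A cut-off of 1, as ultimately chosen in the study, keeps only targets
# whose domain composition matches exactly.
print(sim >= 1.0)  # True
```

At a cut-off below 1, targets sharing only part of their domain composition would also be considered predicted.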
Chemical structures were standardized and cleaned with the function StandardiseMolecules of the R package camb using the default parameters [34]. PaDEL descriptors (1-D and 2-D) and Morgan fingerprints were calculated in the same environment. The function AA_Descs was used to calculate amino acid descriptors (3 Z-scales). To describe the target space, the residues in the binding site of human DHFR (PDB ID: 1OHJ [35]) within a sphere of 10 Å centered around the ligand were selected. The corresponding residues for the other 19 proteins were obtained from a sequence alignment performed with Clustal Omega [36]. The dataset is available in Additional file 4 (Supplementary Information SI 3). All descriptor values were centered to zero mean and scaled to unit variance. The dataset was split into six subsets, five of which were used to train models, and the sixth, the test set, was withheld to assess the predictive ability of the models [37]. The hyperparameter values for all PCM models were optimized by 5-fold cross validation [38]. To assess both model predictive ability and performance, the pIC50 values for the test set were predicted, thus providing the external validation by calculating $RMSE_{test}$ and $R^2_{0,test}$ between the observed and the predicted pIC50 values: $$ R^2_{0,test} = 1 - \frac{\sum_{i=1}^{N}\left(y_i - \hat{y}_i^{r0}\right)^2}{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2} $$ $$ RMSE = \sqrt{\frac{\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}{N}} $$ where $N$ represents the size of the test set, $y_i$ the observed, $\hat{y}_i$ the predicted, and $\bar{y}$ the average pIC50 values of those datapoints included in the test set, and $\hat{y}_i^{r0} = s\hat{y}_i$, with $s = \sum y_i\hat{y}_i / \sum \hat{y}_i^2$. Both internal ($RMSE_{int}$ and $R^2_{int}$) and external validation ($RMSE_{test}$ and $R^2_{0,ext}$) were assessed according to the criteria proposed by Tropsha et al. 
[39,40] and calculated using the Validation function of the R package camb [34]. In order to assess whether the combination of compound and target information in a single PCM model constitutes an advantage with respect to one-space (ligand space and target space) models, two validation scenarios were explored. Firstly, a Family QSAR model [41] was trained exclusively on compound descriptors. High performance of this model is expected in cases where the bioactivities of the same compound on different targets are highly correlated. Secondly, the Family QSAM [41] model was trained on target descriptors only. In this case, high performance would indicate that the activities of a diverse set of compounds are correlated on a panel of targets. Thus, compound activities would largely depend on the target, and to a much lesser extent on the ligand structures. Additionally, an inductive transfer PCM model (PCM IT) was trained to assess whether the performance of PCM models arises from explicit learning (EL), where the knowledge is extracted from target descriptors, or inductive transfer (IT). In IT the knowledge acquired when predicting compound bioactivities on a given target is exploited to predict the bioactivity of those compounds on another target [41]. In the PCM IT model, targets were described with identity fingerprints (IFP), which are calculated as follows: $$ IFP(i,j) = \delta(i-j) \qquad (i,j \in \{1,\dots,N_{targets}\}) $$ where $\delta$ is the Kronecker delta function and $N_{targets}$ the number of distinct targets. The performance of the models was assessed on a per target basis by training ten PCM models, each on a different subset of the whole dataset. Subsequently, $RMSE_{test}$ and $R^2_{0,test}$ values were calculated on subsets of the test set grouped by target. 
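The external-validation metrics defined above can be computed directly from observed and predicted values. A short sketch using toy pIC50 vectors (not the study's data):

```python
import numpy as np

def rmse(y, y_hat):
    """Root-mean-square error between observed and predicted values."""
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def r2_0(y, y_hat):
    """R^2 through the origin: predictions rescaled by s = sum(y*yhat)/sum(yhat^2)."""
    s = np.sum(y * y_hat) / np.sum(y_hat ** 2)
    ss_res = np.sum((y - s * y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1 - ss_res / ss_tot)

y = np.array([6.1, 7.3, 5.0, 8.2, 6.8])      # observed pIC50 (toy)
y_hat = np.array([6.0, 7.1, 5.4, 7.9, 6.5])  # predicted pIC50 (toy)

print(round(rmse(y, y_hat), 2))  # 0.28
print(round(r2_0(y, y_hat), 2))  # 0.95
```

The rescaling slope $s$ is what distinguishes $R^2_0$ from the ordinary coefficient of determination: it forces the regression of observed on predicted values through the origin before measuring explained variance.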
Machine learning implementation Support Vector Machines (SVM) [42], Gradient Boosting Machines (GBM) [43], Gaussian Processes (GP) [44], and Random Forest (RF) [45] models were built with the R package camb [34,46]. The target prediction algorithm was implemented in Perl. Exploratory analysis of PCM and target prediction datasets A PCA (Figure 2) was performed for the compounds annotated to be active against plasmodial DHFR and those active against T. gondii DHFR. The first two principal components explain 72.73% of the variance. In the two dimensions visualized for the descriptor space used here, the plasmodial inhibitors cover a substantial portion of the chemical space occupied by the T. gondii DHFR inhibitors. However, there are still a number of clusters of T. gondii DHFR inhibitors that occupy novel space not covered by plasmodial inhibitors. Compounds from these clusters contain bicyclic ring systems (shown in red boxes in Figure 2). On the other hand, there are also clusters of plasmodial inhibitors that occupy space not covered by T. gondii inhibitors: these plasmodial inhibitors do not contain bicyclic rings, but instead contain unfused rings (5 scaffolds identified shown in green boxes in Figure 2). In addition to the previous analysis, a PCA was also performed for the compounds present in the PCM dataset (Additional file 3: Figure S2), where the first two principal components explained 51.77% of the variance. Clusters contain compounds whose bioactivities on several targets are included in the dataset, thus indicating that compounds are overall structurally similar across the 20 DHFR sequences considered. PCA of the compounds annotated as actives against plasmodial DHFR (green) as well as T. gondii DHFR (red). Overall, plasmodial DHFR inhibitors cover a substantial portion of the chemical space occupied by T. gondii DHFR inhibitors. However, some clusters of T. 
gondii DHFR inhibitors are located in additional chemical space not covered by the plasmodial inhibitors (red boxes). These clusters contain compounds with bicyclic ring systems. By contrast, plasmodial inhibitors only contain unfused rings (green boxes). These observations explain why recall is low (~35%) when plasmodial DHFR inhibitors are excluded from the training set: T. gondii inhibitors do not cover all relevant chemical space, particularly the space occupied by compounds with unfused ring systems. Application of target prediction for MoA prediction The performance of the target prediction algorithm was assessed for varying values of n, which represents the top number of non-plasmodial predictions considered for extrapolation (Additional file 3: Figure S3). It can be seen that performance varies widely across target classes: for most targets, including all aminopeptidases, calcium-dependent protein kinase 1, protein kinase Pfmrk, glucose-6-phosphate-1-dehydrogenase, dihydroorotate dehydrogenase, dUTP pyrophosphatase and enoyl-acyl-carrier protein reductase, performance is low, with both recall and precision values below 30%. For a small number of targets, however, the performance is much higher, with recall values up to ~60% and precision values up to 100%. Further investigation revealed that the targets for which the prediction algorithm performed well (plasmepsin 1 and 2, histone deacetylase, DHFR and to a lesser extent, falcipain 2) were plasmodial orthologs of non-plasmodial protein targets. This finding is in agreement with previous studies, which have used orthologous proteins to extrapolate the prediction of bioactivities between target classes across species such as P. falciparum and M. tuberculosis [47,48]. However, these previous studies have not combined target prediction with PCM for MoA analysis, which is precisely the novelty of the approach presented here. 
Target prediction performance for plasmodial DHFR The predictive performance of the target prediction algorithm was further investigated for the plasmodial target DHFR, where all 145 instances annotated on plasmodial DHFR were used as a test set. The top n predicted non-plasmodial targets were considered (n varied in the 1–12 range), after which these targets were extrapolated to plasmodial targets (section "Domain-based extrapolation to P. falciparum targets" in Materials and Methods). For n in the 1–3 range, the recall values are 0%, 2.8% and 14.5%, respectively, whereas for n in the 4–7 range, the recall values are around 35%. The 2-fold CV resulted in a recall value of 79%. These results indicate that despite the fact that the training set did not contain any plasmodial bioactivity data, the model is still able to predict compounds active against plasmodial DHFR with 100% precision, based on bioactivity data for orthologous proteins across other species. The high precision value arises from the structural similarity of plasmodial DHFR inhibitors and T. gondii DHFR inhibitors in the training set (the average MOLPRINT2D pairwise similarity between the T. gondii inhibitors and the plasmodial inhibitors was 16%, whereas the average pairwise similarity within the plasmodial dataset and the T. gondii dataset was 19% and 18% respectively). These results show the added benefit of incorporating domain-based extrapolation for target prediction purposes. In addition, we found that varying the domain Tanimoto similarity cut-off between 0.5 and 1 did not alter the performance. Hence, in order to maintain high precision, a stringent domain Tanimoto similarity cut-off of 1 (i.e. requiring a 100% overlap in domain presence and absence between two proteins) was chosen and the top n predicted non-plasmodial targets considered was set to 4 for further analysis. 
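The recall and precision figures quoted above follow the standard confusion-count definitions. As a sketch, the toy counts below are chosen so that they reproduce the reported 79%/100% figures; they are not the study's raw output:

```python
def recall_precision(tp, fp, fn):
    """Class-specific recall and precision from confusion counts."""
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return recall, precision

# Illustrative counts: of 145 known plasmodial DHFR actives, suppose 114
# are retrieved (tp) with 31 missed (fn) and no false positives (fp).
r, p = recall_precision(tp=114, fp=0, fn=31)
print(f"recall={r:.0%} precision={p:.0%}")  # recall=79% precision=100%
```

With zero false positives, precision is trivially 100% regardless of recall, which is why the two metrics must be read together.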
Further investigation of the extrapolation from non-plasmodial targets to plasmodial targets revealed that only one protein class (T. gondii DHFR) was responsible for the extrapolation of predicted activities to plasmodial DHFR. As described earlier, there are clusters of T. gondii DHFR inhibitors that do not contain any plasmodial DHFR inhibitors (scaffolds identified in these clusters are shown in red boxes in Figure 2) and clusters of plasmodial inhibitors that occupy space not covered by T. gondii inhibitors (5 scaffolds identified shown in green boxes in Figure 2). Hence, for these clusters there is no overlap in scaffolds between both datasets. These observations explain the low recall of the model at this stage: plasmodial DHFR inhibitors located outside the space covered by T. gondii DHFR inhibitors are not retrieved by the model, thereby increasing the number of false negatives, whereas the plasmodial DHFR inhibitors that are present in the chemical space shared by inhibitors from both species are predicted with very high precision. Adding plasmodial DHFR data to the training set drastically increased performance, more than doubling recall values to 79%, whereas precision values remained at 100% (Figure 3 – 2-fold CV). This improvement arises from the fact that the chemical space of the plasmodial DHFR inhibitors adds additional information corresponding to 5 new scaffolds (as highlighted in green boxes in Figure 2) to the model. However, despite the very high precision value achieved (100%), there is a drawback: given that recall only increases when novel scaffolds are added to the dataset, the model is only able to correctly predict bioactivities for compounds whose scaffolds are already present in the training data. Hence, a diverse set of molecules is required in the training set in order to optimize recall values of the model. 
Given the benefit of both domain-based extrapolation and using plasmodial DHFR bioactivity data for model training, all plasmodial DHFR data were included in the training set for further MoA prediction of the GSK TCAMS phenotypic dataset in order to optimize recall values. Performance of the DHFR target prediction model compared across a number of parameters. 145 data points annotated against plasmodial DHFR were used as a test set to assess the performance of the target prediction model. The top n predicted non-plasmodial targets were considered (n was varied for values between 1 and 12), after which these targets were extrapolated to plasmodial targets. When n increases, recall values rise up to 36% (with recall values of ~35% for n = 3 and n = 4). On the other hand, precision values are 100% for n ≥ 2. The high precision values are likely to be explained by the fact that plasmodial DHFR inhibitors and T. gondii DHFR inhibitors occupy the same chemical space. In addition to varying the parameter n, we performed a 2-fold cross validation (averaged over 20 randomizations), which resulted in a drastic improvement as a recall value of 79% was achieved (with a standard deviation of 10.1%, which is shown as an error bar). These results show that domain-based extrapolations have added value to the prediction algorithm (correct predictions are made even when bioactivity data on plasmodial DHFR is not present in the training set) and that including plasmodial DHFR bioactivity data in the training set can drastically improve recall values. PCM model validation The four algorithms used in this study (GBM, GP, RF and SVM) displayed similar performance on this dataset, as the ranges of $RMSE_{test}$ and $R^2_{0,test}$ differences are 0.04 pIC50 units and 0.02, respectively. The GBM model exhibited the highest predictive ability with $R^2_{0,test}$ and $RMSE_{test}$ values of 0.79 and 0.59 pIC50 units respectively. Both internal and external validation metrics are given in Table 1. 
Table 1: PCM, Family QSAR and Family QSAM performance on the PCM dataset.
To ensure that the model's predictive ability was not the consequence of spurious correlations in the data, we trained ten GBM models with an increasingly higher percentage of the pIC50 values randomized. Additional file 3: Figure S4 shows the performance of the ten models, quantified by the RMSE_test and R^2_0,test values, as a function of the level of randomization of the bioactivity values. The intercept was zero or negative when ~40% of the response variable was randomized (Additional file 3: Figure S4A). Therefore, the relationship established by the PCM models between the descriptor space and the bioactivity values is not a consequence of chance correlations [49].
PCM outperforms one-space models and inductive transfer (IT) on this dataset
The Family QSAM model, trained on target descriptors only, displayed poor predictive ability, with RMSE_test and R^2_0,test values of 1.22 pIC50 units and 0.09, respectively (Table 1). By contrast, the Family QSAR model, trained on compound descriptors only, displayed satisfactory values for the statistical metrics according to our validation criteria, as the model exhibited RMSE_test and R^2_0,test values of 0.78 pIC50 units and 0.63, respectively (Table 1). Hence, compound descriptors explain a large proportion of the variance, which may stem from the high correlation of the bioactivities of identical compounds against orthologs. Indeed, Additional file 3: Figure S5 depicts the correlation (RMSE: 0.95 pIC50 units; R^2_0: 0.46) between the pIC50 values of the same compounds on different orthologs. Furthermore, better performance is obtained for the GBM PCM model trained on amino acid descriptors and compound fingerprints than for the GBM model trained on target identity fingerprints and compound fingerprints, with RMSE_test values of 0.59 vs. 0.63 pIC50 units, respectively.
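The y-scrambling check described above (progressively randomizing the pIC50 response) can be sketched as below. For brevity no model is trained; the sketch only shows the scrambling step and how the scrambled response decorrelates from the true one as the randomized fraction grows, which is why model performance must collapse under scrambling if it is not driven by chance correlations:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def scramble(y, fraction, rng):
    """Permute a random `fraction` of the response values among themselves."""
    y = list(y)
    idx = rng.sample(range(len(y)), int(fraction * len(y)))
    vals = [y[i] for i in idx]
    rng.shuffle(vals)
    for i, v in zip(idx, vals):
        y[i] = v
    return y

rng = random.Random(42)
y = [rng.uniform(4.0, 9.0) for _ in range(500)]   # synthetic pIC50 values
corr = {f: pearson(y, scramble(y, f, rng)) for f in (0.0, 0.4, 1.0)}
# correlation with the unscrambled response decays toward 0 as f grows
```

In the actual validation, each scrambled response vector would be used to retrain a GBM model, and RMSE_test / R^2_0,test would be recorded per scrambling level, as in Additional file 3: Figure S4.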
This indicates that our selection of amino acid descriptors captured the binding site information of the different orthologs and thus allows explicit learning on this dataset (Table 1). Overall, these data suggest that the explicit inclusion of target information improves bioactivity prediction.
Several high-affinity DHFR inhibitors are identified by both target prediction and PCM
The targets for which the target prediction model had a class-specific F-measure higher than 40% were selected, leading to a shortlist of 5 proteins, namely: plasmepsin 1 and 2, histone deacetylase, DHFR and falcipain 2 (Additional file 3: Figure S6). Overall, a total of 1,291 plasmodial predictions were made for 1,017 compounds. DHFR is the most commonly predicted target, accounting for 534 (41%) of the total predictions, followed by plasmepsin 1 (280 predictions – 22%), plasmepsin 2 (273 predictions – 21%), histone deacetylase (184 predictions – 14%) and falcipain 2 (20 predictions – 2%). Plasmodial DHFR has previously been proposed as a candidate target against resistant plasmodial strains [50]. In addition, the plasmepsins (1 and 2) and falcipain have previously been proposed as potential targets for anti-malarial therapy [51], due to their involvement in the hemoglobin catabolism that occurs during the erythrocytic stage of the malarial parasite life cycle (plasmepsin and falcipain proteins), and to their involvement in erythrocyte invasion and erythrocyte rupture (falcipain proteins) [52]. Finally, plasmodial histone deacetylase has been proposed as a promising target for anti-malarial therapy due to its key role in regulating gene transcription, and it has been shown that histone deacetylase inhibitors are potent inhibitors of the growth of P. falciparum [53]. Hence, there is sufficient evidence that all 5 predicted proteins are potential targets.
In total, 534 compounds of the GSK TCAMS dataset were predicted to interact with DHFR, representing 3.95% of the total number of compounds in this dataset. Out of these 534 compounds, the pIC50 values predicted using PCM were 7 or greater for 25 compounds, between 6 and 7 for 92 compounds, and between 5 and 6 for 420 compounds. None of the 534 compounds was predicted to be inactive on DHFR (Figure 4). Given that many of the compounds in ChEMBL are active in the low micromolar range, it is not surprising to obtain most of the predictions in this range [54].
Complementarity between in silico target prediction and PCM. The target prediction algorithm predicted 534 compounds of the GSK TCAMS dataset to interact with DHFR, representing 3.95% of the total number of compounds in this dataset. Out of these 534 compounds, the PCM model predicted 23 compounds to have a pIC50 value of 7 or greater. Therefore, the combination of both methods permits the assessment of compound polypharmacology and provides quantitative bioactivity predictions.
Interestingly, 23 of the 25 compounds with a predicted pIC50 value higher than 7 were already predicted to interact with DHFR by the target prediction algorithm (Figure 4), to the exclusion of any other target. The analysis of chemical scaffolds in the 25 compounds shows that only 2 scaffolds were identified, as 22 out of the 25 compounds (Figure 5 – excluding compounds 137850, 123550 and 125380) share a common scaffold, namely a 5-methylpyrido[2,3-d]pyrimidine-2,4-diamine ring with an aryl substituent in the 6-position. A methyl group or an amine group in the 7-position is also present in some compounds, such as 137637 and 138061, respectively. In all compounds with the common scaffold, the aryl substituent is a phenyl ring with different substituents in the 3,4,5-positions, e.g. methoxy, hydroxy and carboxamide, except for compound 137642, which has 2-methylthiophene as the aryl substituent.
Compounds predicted to interact with DHFR by the target prediction algorithm, and predicted by the PCM model to have a pIC50 value higher than 7. Compound IDs correspond to the TCMDC identifiers given in the original dataset. The 23 compounds whose IDs are accompanied by an upward-pointing arrow were identified by both methods. The two compounds predicted to have a pIC50 value higher than 7 by the PCM model, but not predicted to interact with DHFR by the target prediction algorithm, are accompanied by a downward-pointing arrow. The 23 compounds predicted to be high-affinity DHFR inhibitors (upward-pointing arrows) share a common scaffold: a 5-methylpyrido[2,3-d]pyrimidine-2,4-diamine ring with an aryl substituent in the 6-position. Overall, these data indicate a high agreement between the target prediction algorithm and the PCM model in identifying high-affinity DHFR inhibitors.
Two additional compounds, 123550 and 125380 (Figure 5), predicted by PCM to display pIC50 values of 7.11 and 7.07, respectively, represent new scaffolds. Remarkably, these two scaffolds were present neither in the PCM nor in the target prediction training set. Taken together, our results indicate a high agreement between the target prediction algorithm and the PCM model in identifying high-affinity DHFR inhibitors. Using both methods simultaneously, it is possible to give higher priority to the compounds that are identified by both.
In this study, the complementarity of in silico target predictions and proteochemometric modelling (PCM) was evaluated for the retrospective identification of P. falciparum DHFR inhibitors. The target prediction algorithm exhibited respective recall and precision values of 79% and 100% for plasmodial DHFR. The high precision value is explained by the structural similarity of plasmodial and the T.
gondii DHFR inhibitors, which were part of the training set and were found to be relevant for extrapolation (the average MOLPRINT2D pairwise similarity between the T. gondii inhibitors and the plasmodial inhibitors was 16%, whereas the average pairwise similarity within the plasmodial dataset and within the T. gondii dataset was 19% and 18%, respectively). We showed that high-affinity inhibitors from the GSK TCAMS phenotypic dataset are independently identified by both methods: 534 compounds from the GSK TCAMS dataset were identified as DHFR inhibitors by the target prediction algorithm, whereas the PCM algorithm identified 25 high-affinity compounds, 23 of which were already identified by the target prediction algorithm. The combination of both methods permits the assessment of compound polypharmacology and provides insight into the potency/affinity of small molecules. We presented an approach that can potentially be extended to other human, bacterial or plasmodial targets. The inherent capability of PCM to combine bioactivity data for related targets, even for targets spanning distant phyla, is likely to improve the mining of currently available multi-target bioactivity databases. Similarly, domain-based extrapolation permits in silico target predictions to be extended to non-mammalian orthologous proteins for which less bioactivity data are usually available.
References
Jalencas X, Mestres J. On the origins of drug polypharmacology. Med Chem Comm. 2013;4:80.
Lounkine E, Keiser MJ, Whitebread S, Mikhailov D, Hamon J, Jenkins JL, et al. Large-scale prediction and testing of drug activity on side-effect targets. Nature. 2012;486:361–7.
Cortes-Ciriano I, Ain QU, Subramanian V, Lenselink EB, Mendez-Lucio O, IJzerman AP, et al. Polypharmacology modelling using proteochemometrics: recent developments and future prospects. Med Chem Comm. 2015;6:24–50. doi: 10.1039/C4MD00216D.
Van Westen GJ, Swier RF, Cortes-Ciriano I, Wegner JK, Overington JP, IJzerman AP, et al.
Benchmarking of protein descriptor sets in proteochemometric modeling (part 2): modeling performance of 13 amino acid descriptor sets. J Cheminform. 2013;5:42.
Poroikov V, Filimonov D, Lagunin A, Gloriozova T, Zakharov A. PASS: identification of probable targets and mechanisms of toxicity. SAR QSAR Environ Res. 2007;18:101–10.
Nidhi, Glick M, Davies JW, Jenkins JL. Prediction of biological targets for compounds using multiple-category Bayesian models trained on chemogenomics databases. J Chem Inf Model. 2006;46:1124–33.
Nigsch F, Bender A, Jenkins JL, Mitchell JBO. Ligand-Target Prediction Using Winnow and Naive Bayesian Algorithms and the Implications of Overall Performance Statistics. J Chem Inf Model. 2008;48:2313–25.
Keiser MJ, Roth BL, Armbruster BN, Ernsberger P, Irwin JJ, Shoichet BK. Relating protein pharmacology by ligand chemistry. Nat Biotechnol. 2007;25:197–206.
Wale N, Karypis G. Target Fishing for Chemical Compounds using Target-Ligand Activity data and Ranking based Methods. J Chem Inf Model. 2010;49:2190–201.
Koutsoukas A, Lowe R, KalantarMotamedi Y, Mussa HY, Klaffke W, Mitchell JBO, et al. In Silico Target Predictions: Defining a Benchmarking Data Set and Comparison of Performance of the Multiclass Naïve Bayes and Parzen-Rosenblatt Window. J Chem Inf Model. 2013;53:1957–66.
Van Westen GJP, Wegner JKJ, IJzerman AP, van Vlijmen HWT, Bender A. Proteochemometric modeling as a tool to design selective compounds and for extrapolating to novel targets. Med Chem Comm. 2011;2:16–30.
Perlmann P, Troye-Blomberg M. Malaria blood-stage infection and its control by the immune system. Folia Biol (Praha). 2000;46:210–8.
Olliaro P. Mode of action and mechanisms of resistance for antimalarial drugs. Pharmacol Ther. 2001;89:207–19.
Hecht D, Fogel GB. Modeling the evolution of drug resistance in malaria. J Comput Aided Mol Des. 2012;26:1343–53.
Moran M, Guzman J, Ropars A-L. The malaria product pipeline: planning for the future.
In: The George Institute for International Health. 2007.
ChEMBL - Neglected Tropical Disease. http://www.ebi.ac.uk/chemblntd
Gamo F-J, Sanz LM, Vidal J, de Cozar C, Alvarez E, Lavandera J-L, et al. Thousands of chemical starting points for antimalarial lead identification. Nature. 2010;465:305–10.
Verma R, Tiwari A, Kaur S, Varshney GC, Raghava GPS. Identification of proteins secreted by malaria parasite into erythrocyte using SVM and PSSM profiles. BMC Bioinformatics. 2008;9:201.
Jamal S, Periwal V, Scaria V. Predictive modeling of anti-malarial molecules inhibiting apicoplast formation. BMC Bioinformatics. 2013;14:2105–14.
Subramaniam S, Mehrotra M, Gupta D. Support Vector Machine Based Prediction of P. falciparum Proteasome Inhibitors and Development of Focused Library by Molecular Docking. Comb Chem High Throughput Screen. 2011;14:898–907.
Vortex D: v2013.03.20719. 2013.
Gaulton A, Bellis LJ, Bento AP, Chambers J, Davies M, Hersey A, et al. ChEMBL: a large-scale bioactivity database for drug discovery. Nucleic Acids Res. 2012;40(Database issue):D1100–7.
Bender A. Databases: Compound bioactivities go public. Nat Chem Biol. 2010;6:309.
ChemAxon. Standardizer. 2013.
Hunter S, Jones P, Mitchell A, Apweiler R, Attwood TK, Bateman A, et al. InterPro in 2011: new developments in the family and domain prediction database. Nucleic Acids Res. 2012;40(Database issue):D306–12.
The UniProt Consortium. Update on activities at the Universal Protein Resource (UniProt) in 2013. Nucleic Acids Res. 2013;41:D43–7.
Bender A, Mussa HY, Glen RC. Molecular Similarity Searching Using Atom Environments, Information-Based Feature Selection, and a Naïve Bayesian Classifier. J Chem Inf Model. 2004;44:170–8.
Bender A, Mussa HY, Glen RC. Similarity Searching of Chemical Databases Using Atom Environment Descriptors (MOLPRINT 2D): Evaluation of Performance. J Chem Inf Model. 2004;44:1708–18.
Sastry M, Lowrie JF, Dixon SL, Sherman W.
Large-scale systematic analysis of 2D fingerprint methods and parameters to improve virtual screening enrichments. J Chem Inf Model. 2010;50:771–84.
O'Boyle NM, Morley C, Hutchison GR. Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit. Chem Cent J. 2008;2:5–11.
Crisman TJ, Parker CN, Jenkins JL, Scheiber J, Thoma M, Kang Z, et al. Understanding false positives in reporter gene assays: in silico chemogenomics approaches to prioritize cell-based HTS data. J Chem Inf Model. 2007;47:1319–27.
Bender A, Mikhailov D, Glick M, Scheiber J, Davies JW, Cleaver S, et al. Use of Ligand Based Models for Protein Domains To Predict Novel Molecular Targets and Applications To Triage Affinity Chromatography Data. J Proteome Res. 2009;8:2575–85.
Prathipati P, Ma NL, Manjunatha UH, Bender A. Fishing the Target of Antitubercular Compounds: In Silico Target Deconvolution Model Development and Validation. J Proteome Res. 2009;8:2788–98.
Murrell DS, Cortes-Ciriano I, van Westen GJP, Stott IP, Malliavin T, Bender A, et al. Chemistry Aware Model Builder (camb): an R Package for Predictive Bioactivity Modeling. 2014. http://github.com/cambDI/camb.
Cody V, Galitsky N, Luft JR, Pangborn W, Rosowsky A, Blakley RL. Comparison of two independent crystal structures of human dihydrofolate reductase ternary complexes reduced with nicotinamide adenine dinucleotide phosphate and the very tight-binding inhibitor PT523. Biochemistry. 1997;36:13897–903.
Sievers F, Wilm A, Dineen D, Gibson TJ, Karplus K, Li W, et al. Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega. Mol Syst Biol. 2011;7:539.
Consonni V, Ballabio D, Todeschini R. Evaluation of model predictive ability by external validation techniques. J Chemometr. 2010;24:194–201.
Hawkins DM, Basak SC, Mills D. Assessing Model Fit by Cross-Validation. J Chem Inf Comput Sci. 2003;43:579–86.
Tropsha A, Gramatica P, Gombar V.
The Importance of Being Earnest: Validation is the Absolute Essential for Successful Application and Interpretation of QSPR Models. QSAR Comb Sci. 2003;22:69–77.
Golbraikh A, Tropsha A. Beware of q2! J Mol Graphics Modell. 2002;20:269–76.
Brown JB, Okuno Y, Marcou G, Varnek A, Horvath D. Computational chemogenomics: Is it more than inductive transfer? J Comput Aided Mol Des. 2014;28(6):597–618.
Ben-Hur A, Ong C. Support vector machines and kernels for computational biology. PLoS Comput Biol. 2008;4:e1000173.
Friedman JH. Greedy function approximation: A gradient boosting machine. Ann Stat. 2001;29:1189–232.
Rasmussen CE, Williams CKI. Gaussian Processes for Machine Learning. MIT Press; 2006. ISBN 026218253X.
Breiman L. Random Forests. Mach Learn. 2001;45:5–32.
Kuhn M. Building Predictive Models in R Using the caret Package. J Stat Softw. 2008;28:1–26.
Spitzmüller A, Mestres J. Prediction of the P. falciparum target space relevant to malaria drug discovery. PLoS Comput Biol. 2013;9:e1003257.
Martínez-Jiménez F, Papadatos G, Yang L, Wallace IM, Kumar V, Pieper U, et al. Target prediction for an open access set of compounds active against Mycobacterium tuberculosis. PLoS Comput Biol. 2013;9:e1003253.
Clark R, Fox P. Statistical variation in progressive scrambling. J Comput Aided Mol Des. 2004;18:563–76.
Yuthavong Y, Tarnchompoo B, Vilaivan T, Chitnumsub P, Kamchonwongpaisan S, Charman SA, et al. Malarial dihydrofolate reductase as a paradigm for drug development against a resistance-compromised target. Proc Natl Acad Sci U S A. 2012;109:16823–8.
Ersmark K, Samuelsson B, Hallberg A. Plasmepsins as Potential Targets for New Antimalarial Therapy. Med Res Rev. 2006;26:626–66.
Marco M, Coterón JM. Falcipain inhibition as a promising antimalarial target. Curr Top Med Chem. 2012;12:408–44.
Andrews KT, Tran TN, Wheatley NC, Fairlie DP. Targeting histone deacetylase inhibitors for anti-malarial therapy. Curr Top Med Chem.
2009;9:292–308.
Cortes-Ciriano I, Koutsoukas A, Abian O, Glen RC, Velazquez-Campoy A, Bender A. Experimental validation of in silico target predictions on synergistic protein targets. Med Chem Comm. 2013;4:278–88.
Acknowledgements. ICC thanks the Paris-Pasteur International PhD Programme for funding. ICC and TM thank CNRS and Institut Pasteur for funding. SP and APIJ thank the Netherlands Organisation for Scientific Research (NWO, grant number NWO-017.009-065) and the Prins Bernhard Cultuurfonds for funding. AB thanks Unilever and the European Research Commission (Starting Grant ERC-2013-StG 336159 MIXTURE) for funding.
Author affiliations:
Department of Chemistry, Centre for Molecular Science Informatics, University of Cambridge, Lensfield Road, CB2 1EW, Cambridge, UK: Shardul Paricharak & Andreas Bender
Division of Medicinal Chemistry, Leiden Academic Centre for Drug Research, Leiden University, P.O. Box 9502, 2300 RA, Leiden, The Netherlands: & Adriaan P IJzerman
Unité de Bioinformatique Structurale, Institut Pasteur and CNRS UMR 3825, Structural Biology and Chemistry Department, 25-28, rue du Dr. Roux, 75 724, Paris, France: Isidro Cortés-Ciriano & Thérèse E Malliavin
Correspondence to Thérèse E Malliavin or Andreas Bender. The authors declare no competing interests.
Author contributions. SP and ICC designed research. SP and ICC gathered the data, trained the models, and prepared the figures. SP, ICC, APIJ, TM and AB analyzed the results and wrote the paper. All authors read and approved the final manuscript. Shardul Paricharak and Isidro Cortés-Ciriano contributed equally to this work.
Additional files. Training set for the target prediction algorithm. InterPro domain annotations for 148 Plasmodium falciparum targets. Supplementary Figures and Supplementary Table S3.
Paricharak, S., Cortés-Ciriano, I., IJzerman, A.P. et al.
Proteochemometric modelling coupled to in silico target prediction: an integrated approach for the simultaneous prediction of polypharmacology and binding affinity/potency of small molecules. J Cheminform 7, 15 (2015). doi:10.1186/s13321-015-0063-9
Keywords: chemogenomics, proteochemometrics, QSAR, Plasmodium falciparum
\begin{definition}[Definition:Uniform Absolute Convergence of Product/General Definition/Definition 2] Let $X$ be a set. Let $\struct {\mathbb K, \norm {\, \cdot \,} }$ be a valued field. Let $\sequence {f_n}$ be a sequence of bounded mappings $f_n: X \to \mathbb K$. The infinite product $\ds \prod_{n \mathop = 1}^\infty \paren {1 + f_n}$ '''converges uniformly absolutely''' {{iff}} the series $\ds \sum_{n \mathop = 1}^\infty f_n$ converges uniformly absolutely. Category:Definitions/Uniform Convergence Category:Definitions/Infinite Products \end{definition}
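As a worked illustration (an editorial example, not part of the ProofWiki source): let $X$ be the closed unit disk in $\C$ and $\map {f_n} z = \dfrac {z^n} {n^2}$. Each $f_n$ is bounded on $X$ with $\ds \sup_{z \mathop \in X} \size {\map {f_n} z} = \frac 1 {n^2}$, so:

```latex
\sum_{n \mathop = 1}^\infty \sup_{z \mathop \in X} \size {\map {f_n} z}
  = \sum_{n \mathop = 1}^\infty \frac 1 {n^2}
  < \infty
```

By the Weierstrass M-test, $\ds \sum_{n \mathop = 1}^\infty f_n$ converges uniformly absolutely on $X$, and hence by the definition above the product $\ds \prod_{n \mathop = 1}^\infty \paren {1 + \frac {z^n} {n^2} }$ converges uniformly absolutely on $X$.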
ProofWiki
When are modules and representations not the same thing?
I've been trying for a while to get a real concrete handle on the relationship between representations and modules. To frame the question, I'll put here the standard situation I have in mind: A ring $R$ lives in the category Ab of Abelian groups as an internal monoid $(\mu_R, \eta_R)$. A module is then just an Abelian group $A$ and a map $m : R \otimes A \rightarrow A$ that commutes with the monoid structure in the way you'd expect. Alternatively, take an Abelian group $A$ and look at its group of endomorphisms $[A,A]$. This has an internal monoid $(\mu_A, \eta_A)$ just taking composition and identity. Then a representation is just a monoid homomorphism $(R, \mu_R, \eta_R) \rightarrow ([A,A], \mu_A, \eta_A)$ in Ab, i.e. a ring homomorphism. But then, Ab is monoidal closed, so these are the same concept under the iso $$\hom(R\otimes A, A) \cong \hom(R, [A,A])$$ This idea seems to work for any closed category where one wants to relate a multiplication to composition. So, my question is, since these things are isomorphic in such a general context, why are they taught as two separate concepts? Is it merely pedagogical, or are there useful examples where modules and representations are distinct?
rt.representation-theory modules ct.category-theory ra.rings-and-algebras
Aleks Kissinger
$\begingroup$ I have never seen the words "module" and "representation" clearly distinguished. My internal stylist says "module" when $R$ is acting on an abelian group, and "representation" when the action, especially by something not a ring, but rather a group, etc., is acting on a vector space. Eventually you run into trouble with some definitions. For example, it's very hard to say when a map $G\to\operatorname{hom}(V,V)$ is algebraic if $G$ is an algebraic group and $V$ is $\infty$-dimensional. So there the only valid definition of "module" is a map $V\to k[G]\otimes V$.
$\endgroup$ – Theo Johnson-Freyd Jul 20 '10 at 17:44 $\begingroup$ Theo, you are correct: "rational representations" of algebraic groups are, in fact, modules. However, the situation in Lie theory is the opposite, i.e. there is a good theory of representations (with some intricate points), but while some modules have been considered, they do not entirely capture it, let alone define it. $\endgroup$ – Victor Protsak Jul 20 '10 at 19:30 $\begingroup$ These and other answers/comments are good reminders of the linguistic subtleties here. There is a related usage issue: I tend to use irreducible and completely reducible for representations, simple and semisimple for modules. But people do mix the terms, and it's hard to say what is "correct". $\endgroup$ – Jim Humphreys Jul 20 '10 at 19:55 Here is my representation theorist's perspective: the key difference between representations and modules is that representations are "non-linear", whereas modules are "linear". I'll concentrate on the case of groups as the most familiar, but this applies more generally. As Greg has already mentioned, in the most general sense, a representation is a homomorphism $f:G\to H,$ and usually there is no linear (or additive) structure on $H$, i.e. the set $f(g)$ need not be closed under sums; in fact, if $H$ is a non-abelian group, e.g. the symmetric group, the notion of sum doesn't even make sense (if $H=GL(V)$ then we may view its elements as endomorphisms of $V$ and add them, but this is unnatural since, by definition, $f$ is compatible with multiplicative structure). By contrast, a module involves a linear action $G\times V\to V,$ which is then "completed" by allowing arbitrary linear combinations, leading to certain technical advantages. Here is an example of a construction that is very useful and makes perfect sense module-theoretically, but not representation-theoretically: change of scalars. 
Given a module $M$ over a group ring $R[G]$ and a commutative ring homomorphism $R\to S,$ one gets a module $S\otimes_R M$ over the group ring $S[G]$. Common examples involve extensions of scalars (e.g. from $\mathbb{R}$ to $\mathbb{C}$, from a field $K$ of definition to the splitting field, from $\mathbb{Z}$ to $\mathbb{Z}_p$) and, more to the point, reductions (e.g. from $\mathbb{Z}$ or $\mathbb{Z}_p$ to $\mathbb{Z}/p\mathbb{Z}$). The module language is, predictably, also very useful in providing categorical descriptions of various operations on representations, such as functors of induction and restriction, $$Ind_H^G: H\text{-mod}\to G\text{-mod}\ \text{ and }\ Res_H^G: G\text{-mod}\to H\text{-mod},$$ where $H$ is a subgroup of $G,$ or the monoidal structure on $G$-mod. Finally, here are two illustrations of the complementary nature of the two approaches besides the group case, in linear algebra. A single linear transformation $T:V\to V$ on a finite-dimensional vector space $V$ over $K$ is most naturally viewed as a representation (no additive structure); in this case, it's a representation of the quiver with a single vertex and a single loop. From this point of view, classification up to isomorphism is a problem about conjugacy classes of linear transformations, $$T\to gTg^{-1},\ g\in GL(V).$$ By contrast, in the module style description we associate with $T$ a module over the ring $K[x]$ of polynomials in one variable over $K$ and classification problem reduces to the structure of modules over $K[x]$, which is a PID, with all the usual consequences. (Here the module picture is more illuminating.) If we consider a linear operator $S:V\to W$ between two different vector spaces, $$S\to hSg^{-1},\ g\in GL(V),\ h\in GL(W),$$ and a classification up to isomorphism is accomplished by row and column reduction. The corresponding quiver $\circ\to\circ$ is a single arrow connecting two distinct vertices, but its path algebra is less familiar. 
(Here the representation theory picture is more illuminating.)
Victor Protsak
To expand on Tom's answer, the word "representation" is a 19th century word that originally meant "group homomorphism". If $f:G \to H$ is a homomorphism from a group $G$ to a group $H$, then $f(g)$ "represents" the element $g$. $H$ is taken to be a "familiar" or "explicit" group, usually a matrix group but also sometimes a permutation group. The word "module" is a 20th century word (I think) that means "generalized vector space". As has been pointed out, a representation of a group $G$ is equivalent to a $k[G]$-module. These days the terms are largely interchangeable; you can also talk about a representation of an algebra instead of a group. Certainly you can add topology to the conditions, for instance by using the group $C^*$-algebra of a locally compact topological group. To the extent that there is still a useful distinction, there is a difference in emphasis. If a ring $R$ (or a group or whatever) acts on an abelian group $A$, and you consider its action to be a low-level structure, analogous to multiplying a vector by a scalar, then you should call $A$ an $R$-module. On the other hand, if you think of the action as a high-level geometric effect, analogous to a group acting on a manifold, then you should call it a representation. If you don't care, then you can use either term or both and it's all cool AFAIK :-). Possibly the word "module" is slowly supplanting the word "representation", because it's a shorter word as well as more modern and more general.
Greg Kuperberg
$\begingroup$ I like the explanation that the key difference in these two terms is emphasis. How far does this analogy go? Would it be fair to say for instance, modules of the enveloping algebra of sl(2) are a bit like vector spaces, but with some pretty beefy scalars? The primary concern here are the symmetries put on the space by the action of sl(2). I would say maybe yes.
In plain old vector spaces, the scalar action is an important symmetry, so much so that we like to talk about identities "up to a scalar", etc. $\endgroup$ – Aleks Kissinger Jul 20 '10 at 15:59 $\begingroup$ A purely historical/etymological remark, possibly wrong (maybe someone can confirm, correct, or deny this): I believe that long before anybody talked about $R$-modules (like maybe late 19th century) certain abelian groups were being called modules. Maybe what we now would call a lattice in a real vector space $V$ -- something which has an $\mathbb R$-basis of $V$ as a $\mathbb Z$-basis? Perhaps with $V=\mathbb C$? Is the term "module" in algebra distantly related to the terms "modular group" and "modular form" by this route? $\endgroup$ – Tom Goodwillie Jul 20 '10 at 16:24 $\begingroup$ @Aleks: Sure, that's a valid viewpoint. The sort of thing that blurs the distinction between beefy scalars and puny scalars, even if you are conservative, is a module over (say) a polynomial ring. And even for a simple Lie algebra, the action of the Cartan subalgebra is very module-like, since in the integrable case it amounts to a graded vector space structure. $\endgroup$ – Greg Kuperberg Jul 20 '10 at 17:34 $\begingroup$ @Tom: I looked in early papers in JSTOR and Google Books. In English, a few authors up to the 1930s used the word "module" on an ad hoc basis to sort-of mean an abelian group, for instance E.T. Bell and Harald Bohr. Suddenly in 1938 there are algebra papers by Nakayama that read like papers written today, with left and right modules of algebras, quotient modules, etc. Nakayama had read a German algebra book. It appears that the whole package of definitions came from the Gottingen school. $\endgroup$ – Greg Kuperberg Jul 20 '10 at 17:48 $\begingroup$ Aleks: 1. In the interest of sanity, the word "scalar" should never be applied to non-commuting quantities. 2. 
The representation-module language usage for Lie algebras mirrors the case of groups: a representation of $sl_2$ is a module over its universal enveloping algebra $U(sl_2).$ From the category theory point of view, a better formulation is that there is an equivalence of categories between {representations of $\mathfrak{g}$} and {$U(\mathfrak{g})$-modules}. $\endgroup$ – Victor Protsak Jul 20 '10 at 17:55
It is certainly true that the category of representations of a group $G$ over a field $k$ is equivalent to the category of modules for the group ring $k[G]$, and it is often productive to rephrase questions about representations as questions about modules. Below, I give some examples of structure which is easier to discuss in terms of representations. But, as I will indicate, it is usually possible to rephrase in terms of modules with enough effort.
Tensor products: If $V$ and $W$ are two representations of $G$, then $V \otimes W$ has a natural structure as a $G$-representation. For $k[G]$-modules, this is not true; the tensor product has to be added as additional structure on the category $k[G]$-rep. Here is an explicit example: Let $G=\mathbb{Z}/4$ and let $H = \mathbb{Z}/2 \times \mathbb{Z}/2$. Then $\mathbb{C}[G]$ and $\mathbb{C}[H]$ are isomorphic rings, but the tensor structures on $\mathbb{C}[G]$-modules and $\mathbb{C}[H]$-modules are inequivalent. The same issue exists with duals. People who like rings better than groups would say that the issue is that I am talking about the algebra structure of $k[G]$ when I should be talking about the Hopf algebra structure.
Topology: Suppose that $G$ is a topological group (maybe a Lie group) and $k$ a topological field (maybe $\mathbb{R}$). Then a continuous representation of $G$ is a map $G \times V \to V$ which is a group action, $k$-linear, and continuous.
I imagine there is a way to put a topology on $k[G]$ so that a continuous representation is a $k[G]$-module such that $k[G] \times V \to V$ is continuous, but I haven't seen it. And this will get worse with adjectives like smooth, algebraic, ...
David E Speyer
$\begingroup$ Concerning your last sentence ('this will get worse with adjectives like ... algebraic'), "algebraic" representations of an affine group scheme $G$ over a field $k$ are actually co-modules for the Hopf algebra of regular functions $k[G]$. This means that the action of $G$ on the representation space $V$ is defined by a mapping $V \to k[G] \otimes V$ satisfying some natural diagrams. When $G$ is smooth and $k$ is alg. closed, one can of course view $V$ as a module for the group algebra of the "abstract" group $G(k)$, but as you point out, it isn't clear this is useful. $\endgroup$ – George McNinch Jul 20 '10 at 15:48
$\begingroup$ TOPOLOGY: Representations of top. groups is probably a good foil for the mod/rep correspondence. For one thing Top fails to be cartesian closed, so the first crack at comparing these things by chasing the iso hom(GxV,V) \cong hom(G,[V,V]) fails unless V and G are nice spaces (compact Hausdorff or something). TENSORS: It seems the natural representation of G on V (x) W already (implicitly) uses the comultiplication from the Hopf algebra structure of k[G], i.e. the induced representation is "(psi (x) phi) o delta" ...where delta is the linear map that copies the basis elements of k[G]. $\endgroup$ – Aleks Kissinger Jul 20 '10 at 15:48
$\begingroup$ Note: positive-dimensional vector spaces over non-discrete locally compact fields are never compact. There are some genuine difficulties in coming up with the correct notion of a group algebra in the topological setting that will faithfully capture the representation-theoretic picture (see Kirillov's "Elements of representation theory"). Many of them concern a good category to which it should belong.
The reduced $C^*$-algebra of an infinite discrete group is a good illustration. This cannot be fully computed even in easy cases. $\endgroup$ – Victor Protsak Jul 20 '10 at 17:30 I would teach that an $R$-module is an abelian group $A$ plus a map of sets $R\times A\to A$ satisfying certain identities. I would probably also point out that this is the same thing as an abelian group $A$ plus a ring homomorphism $R\to End(A)$. I would also point out that "vector space" is the traditional term for "module" when $R$ is a field. Similarly, I would teach that an action of a group $G$ on a set $X$ is a map of sets $G\times X\to X$ satisfying certain identities, and I would probably also point out that that is the same as a group homomorphism $G\to Aut(X)$; and if it seemed appropriate for those students I would also say that this second point of view is useful for generalizing the idea so as to make groups act on things other than sets. An action of a group $G$ on a $k$-module is the same as a module for the group ring $kG$. You can also call this a representation of $G$ over $k$. This is not traditionally called a representation of $kG$. The fact that there are overlapping definitions is just historical accident. The word "module" was in use in special cases long before there was category theory, even before there was abstract ring theory as we know it. So was the word "representation". The fact that the two terms are both still used is not because somebody decided on a good reason to keep them both; they just survived, as words do. Tom GoodwillieTom Goodwillie $\begingroup$ "The word "module" was in use in special cases long before there was category theory, even before there was abstract ring theory as we know it" - do you have a reference in support of that? Note that van der Waerden's Modern algebra used "abelian group with operators" (I personally think that "module" is a great improvement in terminology). 
I am only aware of unrelated uses of the word "module" (actually, modulus, pl. moduli) prior to the modern era. $\endgroup$ – Victor Protsak Jul 20 '10 at 17:39 $\begingroup$ I meant essentially what Greg says in his comment after my comment in the thread of his answer to this question. Namely, "module" was a word sometimes used for an abelian group in some contexts. I don't have any evidence. I wonder how far back it goes. I heard Serre use the expression "group with operators" in the 70s, but he wasn't talking about abelian groups. $\endgroup$ – Tom Goodwillie Jul 20 '10 at 18:30 $\begingroup$ OK, this seems consistent with what I know, but note that the gap between Nakayama 1938 and Eilenberg-MacLane 1945 is less than 10 years. An interesting bit about Serre! French algebra nomenclature diverges from the English one in quite a few places. $\endgroup$ – Victor Protsak Jul 20 '10 at 19:38 $\begingroup$ Serre was lecturing to undergraduates on the Jordan-Hölder theorem, and mentioning a generalization. $\endgroup$ – Tom Goodwillie Jul 20 '10 at 20:35
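David Speyer's $\mathbb{Z}/4$ vs. $\mathbb{Z}/2\times\mathbb{Z}/2$ example can be checked computationally. The sketch below (my own illustration, not from the thread; needs Python 3.9+ for `math.lcm`) uses the fact that the one-dimensional characters of a finite abelian group form, under tensor product (pointwise multiplication), the dual group, which is isomorphic to the group itself. $\mathbb{Z}/4$ has a character of order 4 while every character of $\mathbb{Z}/2\times\mathbb{Z}/2$ has order at most 2, so the two tensor structures cannot be equivalent even though both group algebras are isomorphic to $\mathbb{C}^4$ as rings:

```python
from math import gcd, lcm
from itertools import product

def character_orders(cyclic_factors):
    """Orders of the 1-dimensional characters of Z/n1 x ... x Z/nk
    under tensor product. The dual of a finite abelian group is
    isomorphic to the group itself, so these are the element orders."""
    orders = []
    for elem in product(*[range(n) for n in cyclic_factors]):
        # order of a in Z/n is n / gcd(a, n); order of a tuple is the lcm
        orders.append(lcm(*[n // gcd(a, n) for a, n in zip(elem, cyclic_factors)]))
    return sorted(orders)

print(character_orders([4]))     # [1, 2, 4, 4]: a character of order 4 exists
print(character_orders([2, 2]))  # [1, 2, 2, 2]: every character has order <= 2
```

Since an equivalence of tensor categories would carry the character group of one to the character group of the other, the differing element orders rule it out.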
What would the effects be on Earth if Jupiter was turned into a star? Maelish In Clarke's book 2010, the monolith and its brethren turned Jupiter into the small star nicknamed Lucifer. Ignoring the reality that we won't have any magical monoliths appearing in our future, what would the effects be on Earth if Jupiter was turned into a star? At its closest and furthest: How bright would the "back-side" of the earth be with light from Lucifer? How much heat would the small star generate on earth? How many days or months would we actually have night when we circled away behind the sun? How much brighter would the sun-side of earth be when Lucifer and the sun both shine on the same side of the planet? HDE 226868 7 years ago While this is an interesting question, I don't know if there's a proper way to answer it. Jupiter's mass is far less than that of the smallest brown dwarfs, also dubbed "failed stars". Brown dwarfs don't have enough mass to sustain hydrogen fusion, and don't emit a whole lot of light. I don't think that there's any way that you could realistically do the calculations for a Jupiter-star scenario, because of the impossibility of it beginning hydrogen fusion. Still, it's an interesting idea. Okay, I relent. +1 for an interesting idea. Incnis Mrsi 5 years ago One who looks for a good physics expert's opinion about that, look here. Note: Ī̲ don't advertise that site as a whole, only this particular posting. A. C. A. C. 4 years ago Jupiter can burn as brightly as you want it to depending on how much mass you add to it. If you somehow put a very massive core at the center of Jupiter, the total mass of the system would determine how much fusion can take place. It can probably range from a supernova if you put in a neutron star just below the Chandrasekhar limit inside to a very weak red dwarf if you just add enough mass to make fusion start. Jack R. Woods 3 years ago How do you know we won't have any "magical" monoliths appearing in the future? 
It's as good of a scenario of first contact as any. Maelish 7 years ago It's an interesting answer! It sounds as though there would be very little effect in regards to extra light or temperature. In answer to the edit you made to your comment: Yep. Not a big difference. At least, not on Earth. An interesting follow-up would be to see if it could indeed cause conditions on Europa to change in favor of life. @HDE 226868 Just for fun, did you think anymore about what it would take to make Europa habitable for the aliens (I know, it depends on the alien)? Jupiter couldn't get "too hot" obviously. I love A. C. Clarke, but he did need to ignore science for the sake of the story sometimes (i.e., humans wouldn't survive in Jupiter's orbit due to the magnetic field). HDE 226868 Correct answer: Before I start, I'll admit that I've criticized the question based on its improbability; however, I've been persuaded otherwise. I'm going to try to do the calculations based on completely different formulas than I think have been used; I hope you'll stay with me as I work it out. Let's imagine that Lucifer becomes a main-sequence star - in fact, let's call it a low-mass red dwarf. Main-sequence stars follow the mass-luminosity relation: $$\frac{L}{L_\odot} = \left(\frac{M}{M_\odot}\right)^a$$ Where $L$ and $M$ are the star's luminosity and mass, and $L_\odot$ and $M_\odot$ are the luminosity and mass of the Sun. For stars with $M < 0.43M_\odot$, $a$ takes the value of 2.3. 
Now we can plug in Jupiter's mass ($1.8986 \times 10 ^{27}$ kg) into the formula, as well as the Sun's mass ($1.98855 \times 10 ^ {30}$ kg) and luminosity ($3.846 \times 10 ^ {26}$ watts), and we get $$\frac{L}{3.846 \times 10 ^ {26}} = \left(\frac{1.8986 \times 10 ^ {27}}{1.98855 \times 10 ^ {30}}\right)^{2.3}$$ This becomes $$L = \left(\frac{1.8986 \times 10 ^ {27}}{1.98855 \times 10 ^ {30}}\right)^{2.3} \times 3.846 \times 10 ^ {26}$$ which then becomes $$L = 4.35 \times 10 ^ {19}$$ watts. Now we can work out the apparent brightness of Lucifer, as seen from Earth. For that, we need the formula $$m = m_\odot - 2.5 \log \left(\frac {L}{L_\odot}\left(\frac {d_\odot}{d}\right) ^ 2\right)$$ where $m$ is the apparent magnitude of the star, $m_\odot$ is the apparent magnitude of the Sun, $d_\odot$ is the distance to the Sun, and $d$ is the distance to the star. Now, $m_\odot = -26.73$ and $d_\odot$ is 1 (in astronomical units). $d$ varies. Jupiter is about 5.2 AU from the Sun, so at its closest distance to Earth, it would be ~4.2 AU away. We plug these numbers into the formula, and find $$m = -6.25$$ which is much dimmer than the Sun. Now, when Jupiter is farthest from Earth, it is ~6.2 AU away. We plug that into the formula, and find $$m = -5.40$$ which is dimmer still - although, of course, Jupiter would be completely blocked by the Sun. Still, for finding the apparent magnitude of Jupiter at some distance from Earth, we can change the above formula to $$m = -26.73 - 2.5 \log \left(\frac {4.35 \times 10 ^ {19}}{3.846 \times 10 ^ {26}}\left(\frac {1}{d}\right) ^ 2\right)$$ By comparison, the Moon can have an average apparent magnitude of -12.74 at full moon - much brighter than Lucifer. The apparent magnitude of both bodies can, of course, change - Jupiter by transits of its moons, for example - but these are the optimal values. While the above calculations really don't answer most parts of your question, I hope it helps a bit. 
And please, correct me if I made a mistake somewhere. LaTeX is by no means my native language, and I could have gotten something wrong. The combined brightness of Lucifer and the Sun would depend on the angle of the Sun's rays and Lucifer's rays. Remember how we have different seasons because of the tilt of the Earth's axis? Well, the added heat would have to do with the tilt of Earth's and Lucifer's axes relative to one another. I can't give you a numerical result, but I can add that I hope it wouldn't be too much hotter than it is now, as I'm writing this! Like I said in a comment somewhere on this page, the mass-luminosity relation really only works for main-sequence stars. If Lucifer was not on the main sequence... well, then none of my calculations would be right.

Tags: star, the-sun, light, jupiter, heat
Source: Astronomy - Stack Exchange Network
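The arithmetic in the answer can be reproduced in a few lines (my own sketch, using the same mass-luminosity relation and magnitude formula, with the constants as given):

```python
import math

M_jup = 1.8986e27      # Jupiter's mass, kg
M_sun = 1.98855e30     # Sun's mass, kg
L_sun = 3.846e26       # Sun's luminosity, W
m_sun = -26.73         # Sun's apparent magnitude from Earth

# Mass-luminosity relation for M < 0.43 M_sun: L/L_sun = (M/M_sun)^2.3
L_lucifer = (M_jup / M_sun) ** 2.3 * L_sun

def apparent_magnitude(L, d_au):
    """Apparent magnitude of a source of luminosity L (W) at d_au AU,
    calibrated against the Sun (magnitude m_sun at 1 AU)."""
    return m_sun - 2.5 * math.log10((L / L_sun) * (1.0 / d_au) ** 2)

print(f"L = {L_lucifer:.3g} W")                                    # ~4.35e19 W
print(f"m at 4.2 AU = {apparent_magnitude(L_lucifer, 4.2):.2f}")   # ~ -6.25
print(f"m at 6.2 AU = {apparent_magnitude(L_lucifer, 6.2):.2f}")   # ~ -5.40
```

Running this confirms the -6.25 figure at closest approach and fills in the farthest-distance value of about -5.40.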
The eigenvalue problem for a class of degenerate operators related to the normalized $p$-Laplacian

Fang Liu
Department of Mathematics, School of Science, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu, China
doi: 10.3934/dcdsb.2021155
Received October 2020  Revised March 2021  Early access June 2021
Fund Project: This work was supported by National Natural Science Foundation of China (No. 11501292)

In this paper, we investigate a weighted Dirichlet eigenvalue problem for a class of degenerate operators related to the $h$-degree homogeneous $p$-Laplacian
$$\left\{ \begin{array}{ll} |Du|^{h-1}\Delta_p^N u+ \lambda a(x)|u|^{h-1}u = 0, & \text{in } \Omega, \\ u = 0, & \text{on } \partial\Omega, \end{array}\right.$$
where $a(x)$ is a positive continuous bounded function on the closure of $\Omega\subset \mathbb{R}^n$ ($n\geq 2$), $h>1$, $2< p<\infty$, and $\Delta_p^N u = \frac{1}{p}|Du|^{2-p} \operatorname{div}\left(|Du|^{p-2}Du\right)$ is the normalized version of the $p$-Laplacian arising from a stochastic game named Tug-of-War with noise. We prove the existence of the principal eigenvalue $\lambda_\Omega$, which is positive and has a corresponding positive eigenfunction for $p>n$. 
The method is based on the maximum principle and approximation analysis of the weighted eigenvalue problem. When a parameter $\lambda<\lambda_\Omega$, we establish some existence and uniqueness results for this problem. In the course of the proofs, we also obtain some regularity estimates, including Hölder continuity and a Harnack inequality.

Keywords: normalized $p$-Laplacian, maximum principle, principal eigenvalue, regularity, viscosity solution.
Mathematics Subject Classification: Primary 35J60; 35J70; 35P30.
Citation: Fang Liu. The eigenvalue problem for a class of degenerate operators related to the normalized $p$-Laplacian. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021155
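As a quick consistency check on the operator's definition (my own note, not from the paper): for $p = 2$ the gradient factors cancel and the normalized $p$-Laplacian reduces to half the classical Laplacian,

```latex
% p = 2: the factor 1/p = 1/2 and |Du|^{2-p} = |Du|^{p-2} = 1, so
\Delta_2^N u
  = \frac{1}{2}\,|Du|^{0}\,\operatorname{div}\!\left(|Du|^{0}\,Du\right)
  = \frac{1}{2}\,\operatorname{div}(Du)
  = \frac{1}{2}\,\Delta u .
```

With $h = 1$ as well, the eigenvalue problem becomes the classical weighted Dirichlet problem $\tfrac12\Delta u + \lambda a(x)u = 0$ in $\Omega$, $u = 0$ on $\partial\Omega$, so the degenerate problem studied here is a genuine generalization of the linear case.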
Application of Eigenvalues and Eigenvectors in Statistics

An eigenvector of a transformation is a direction that the transformation leaves unchanged, and the eigenvalue is the scale of the stretch: 1 means no change, 2 means doubling in length, −1 means pointing backwards along the eigenvector's direction.

The eigenvalues and eigenvectors of a matrix are often used in the analysis of financial data and are integral in extracting useful information from the raw data. One special extension is multiple correspondence analysis, which may be seen as the counterpart of principal component analysis for categorical data[45]; for example, many quantitative variables may have been measured on plants. One application of eigenvalues and eigenvectors is in the analysis of vibration problems (vibrational modes and frequencies). Another is spike sorting, an important procedure in neuroscience because extracellular recording techniques often pick up signals from more than one neuron; since the leading eigenvectors of the spike-triggered covariance are the directions in which varying the stimulus led to a spike, they are often good approximations of the sought-after relevant stimulus features.

In PCA, the cumulative energy content $g$ for the $j$th eigenvector is the sum of the energy content across all of the eigenvalues from 1 through $j$. In practical implementations, especially with high-dimensional data (large $p$), the naive covariance method is rarely used because it is not efficient, due to the high computational and memory costs of explicitly determining the covariance matrix.

For a $2\times 2$ symmetric matrix $A$ with rank-one spectral components $A_1$ and $A_2$, note that $A$ is of rank two exactly when both eigenvalues are non-zero, and $trace(A_1)=\lambda_1$ and $trace(A_2)=\lambda_2$. When the eigenvalues are distinct, the vector solution to $(A-\lambda_i\,I)Z_i=0$ is unique except for an arbitrary scale factor and sign. 
PCA has also been used to quantify the distance between two or more classes, by calculating the center of mass for each class in principal component space and reporting the Euclidean distance between the centers.[13] To find the axes of the ellipsoid described by the data, we must first subtract the mean of each variable from the dataset to center the data around the origin. Furthermore, the eigenvectors are mutually orthogonal ($Z_i'Z_j=0$ when $i\ne j$). MPCA has been applied to face recognition, gait recognition, etc. In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data. However, not all the principal components need to be kept: the values in the remaining dimensions tend to be small and may be dropped with minimal loss of information. The optimality of PCA is also preserved if the noise is iid.

In order to work with eigenvalues and eigenvectors, one must know how to do linear transformations and matrix operations such as row reduction, dot product, and subtraction.

Example: Let the matrix $A=\begin{bmatrix}10&3\\3 & 8\end{bmatrix}$. Its larger eigenvalue is $\lambda_1 = 12.16228$. Solving $(A-\lambda_1\,I)Z_1=0$ for the elements of $Z_1$: \begin{align*}(A-12.16228\,I)\begin{bmatrix}Z_{11}\\Z_{21}\end{bmatrix} &=0\\\left(\begin{bmatrix}10&3\\3&8\end{bmatrix}-\begin{bmatrix}12.16228&0\\0&12.16228\end{bmatrix}\right)\begin{bmatrix}Z_{11}\\Z_{21}\end{bmatrix}&=0\\\begin{bmatrix}-2.16228 & 3\\ 3 & -4.16228\end{bmatrix}\begin{bmatrix}Z_{11}\\Z_{21}\end{bmatrix}&=0\end{align*}
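The eigenstructure of the example matrix, the trace property, and the orthogonality of the eigenvectors can all be checked numerically (an illustration using NumPy, not part of the original text):

```python
import numpy as np

A = np.array([[10.0, 3.0],
              [3.0, 8.0]])

# For a symmetric matrix, eigh returns eigenvalues in ascending order
eigvals, eigvecs = np.linalg.eigh(A)
lam2, lam1 = eigvals                    # lam1 ~ 12.16228, lam2 ~ 5.83772
Z2, Z1 = eigvecs[:, 0], eigvecs[:, 1]

# Spectral decomposition into rank-one components A = A1 + A2
A1 = lam1 * np.outer(Z1, Z1)
A2 = lam2 * np.outer(Z2, Z2)

assert np.allclose(A1 + A2, A)          # A is recovered exactly
assert np.isclose(np.trace(A1), lam1)   # trace of each rank-one part
assert np.isclose(np.trace(A2), lam2)   #   equals its eigenvalue
assert np.isclose(Z1 @ Z2, 0.0)         # eigenvectors are orthogonal
```

The exact eigenvalues are $9 \pm \sqrt{10}$, i.e. 12.16228 and 5.83772 to five decimals, matching the hand computation above.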
Efficient algorithms exist to calculate the SVD of X without having to form the matrix XTX, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix[citation needed], unless only a handful of components are required. given a total of The well-known examples are geometric transformations of 2D and … x 7.4Applications of Eigenvalues and Eigenvectors Model population growth using an age transition matrix and an age distribution vector, and find a stable age distribution vector. The statistical implication of this property is that the last few PCs are not simply unstructured left-overs after removing the important PCs. {\displaystyle \mathbf {n} } 46, No. Another popular generalization is kernel PCA, which corresponds to PCA performed in a reproducing kernel Hilbert space associated with a positive definite kernel. "Engineering Statistics Handbook Section 6.5.5.2", MATLAB PCA-based Face recognition software, "Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension", The Pricing and Hedging of Interest Rate Derivatives: A Practical Guide to Swaps, Principal Component Analysis for Stock Portfolio Management, Confirmatory Factor Analysis for Applied Research Methodology in the social sciences, "Spectral Relaxation for K-means Clustering", "K-means Clustering via Principal Component Analysis", "Clustering large graphs via the singular value decomposition", Journal of Computational and Graphical Statistics, "A Direct Formulation for Sparse PCA Using Semidefinite Programming", "Generalized Power Method for Sparse Principal Component Analysis", "Spectral Bounds for Sparse PCA: Exact and Greedy Algorithms", "Sparse Probabilistic Principal Component Analysis", Journal of Machine Learning Research Workshop and Conference Proceedings, "A Selective Overview of Sparse Principal Component Analysis", "ViDaExpert – Multidimensional Data Visualization Tool", Principal Manifolds for Data Visualisation 
and Dimension Reduction, "A Survey of Multilinear Subspace Learning for Tensor Data", "Network component analysis: Reconstruction of regulatory signals in biological systems", Multiple Factor Analysis by Example Using R, A Tutorial on Principal Component Analysis, Multivariate adaptive regression splines (MARS), Autoregressive conditional heteroskedasticity (ARCH), https://en.wikipedia.org/w/index.php?title=Principal_component_analysis&oldid=991744951, Wikipedia articles needing page number citations from November 2020, Articles with unsourced statements from August 2014, Wikipedia articles needing clarification from March 2011, Articles with unsourced statements from March 2011, Articles with unsourced statements from November 2019, Creative Commons Attribution-ShareAlike License, data matrix, consisting of the set of all data vectors, one vector per row, the number of row vectors in the data set, the number of elements in each row vector (dimension). Let X be a d-dimensional random vector expressed as column vector. In order to extract these features, the experimenter calculates the covariance matrix of the spike-triggered ensemble, the set of all stimuli (defined and discretized over a finite time window, typically on the order of 100 ms) that immediately preceded a spike. ( There are three special kinds of matrices which we can use to simplify the process of finding eigenvalues and eigenvectors. The following is a detailed description of PCA using the covariance method (see also here) as opposed to the correlation method.[26]. The PCA transformation can be helpful as a pre-processing step before clustering. , The trace of each of the component rank $-1$ matrix is equal to its eigenvalue. Le Borgne, and G. Bontempi. Consider an Solving this equation gives the $n$ values of $\lambda$, which are not necessarily distinct. In that case the eigenvector is "the direction that doesn't change direction" ! 
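The covariance method of PCA (center the data, form the covariance matrix, eigendecompose, project) can be sketched in a few lines; this is my own illustration on synthetic data, not a reference implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 200 observations of 3 correlated variables
X = rng.normal(size=(200, 3)) @ np.array([[3.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.5, 0.2, 0.1]])

Xc = X - X.mean(axis=0)              # 1. center each variable
C = Xc.T @ Xc / (len(Xc) - 1)        # 2. sample covariance matrix
eigvals, W = np.linalg.eigh(C)       # 3. eigendecomposition (C symmetric)
order = np.argsort(eigvals)[::-1]    # 4. sort by decreasing variance
eigvals, W = eigvals[order], W[:, order]
T = Xc @ W                           # 5. principal component scores

# The scores are uncorrelated, and their variances are the eigenvalues
assert np.allclose(np.cov(T, rowvar=False), np.diag(eigvals), atol=1e-8)
```

The final assertion is exactly the statement that the eigenvectors diagonalize the covariance matrix: $W^T C W = \operatorname{diag}(\lambda_1,\dots,\lambda_p)$.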
One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence using the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. This choice of basis will transform our covariance matrix into a diagonalised form, with the diagonal elements representing the variance of each axis. (Similarly, for a transition matrix $P$ at steady state, multiplying by $P^N$ no longer makes any difference.) Implemented, for example, in LOBPCG, efficient blocking eliminates the accumulation of the errors, allows using high-level BLAS matrix-matrix product functions, and typically leads to faster convergence compared to the single-vector one-by-one technique. PCA essentially rotates the set of points around their mean in order to align with the principal components. The principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of X. As a side result, when trying to reproduce the on-diagonal terms of the covariance matrix, PCA also tends to fit relatively well the off-diagonal correlations. PCA was invented in 1901 by Karl Pearson,[7] as an analogue of the principal axis theorem in mechanics; it was later independently developed and named by Harold Hotelling in the 1930s. Spectral graph theory explores and exploits the eigenvalues and eigenvectors of graphs. To find the eigenvectors for a given eigenvalue, we solve $(A-\lambda I)v = 0$, typically by row-reducing the matrix.
PCA as a dimension reduction technique is particularly suited to detect coordinated activities of large neuronal ensembles. There are non-zero solution to $(A-\lambda_i\,I)=0$ only if the matrix ($A-\lambda_i\,I$) is less than full rank (only if the determinant of $(A-\lambda_i\,I)$ is zero). Comparing to the other modulo, students will see applications of some advance topics. The eigenvalues of $A$ can be found by $|A-\lambda\,I|=0$. 5. Eigenvalues and Eigenvectors are important in the study of covariance matrix structure in statistics. Thus the weight vectors are eigenvectors of XTX. MPCA is solved by performing PCA in each mode of the tensor iteratively. {\displaystyle P} The concept of eigenvalues and eigenvectors is used in many practical applications. A. Miranda, Y. When analyzing the results, it is natural to connect the principal components to the qualitative variable species. where W is a p-by-p matrix of weights whose columns are the eigenvectors of XTX. The application we will be looking at is structural analysis and in particular the 1940 Tacoma Narrows bridge collapse. It is therefore common practice to remove outliers before computing PCA. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: Pearson Product-Moment Correlation). If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of T will also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrix W, which can be thought of as a high-dimensional rotation of the co-ordinate axes). = X This also shows one quick application of eigenvalues and eigenvectors in environmental science. Any lack in the prerequisites should be m… {\displaystyle \mathbf {s} } Enter your email address to subscribe to https://itfeature.com and receive notifications of new posts by email. are iid), but the information-bearing signal Y. 
Hua, Y. Xiang, T. Chen, K. Abed-Meraim and Y. Miao, "A new look at the power method for fast subspace tracking," Digital Signal Processing, Vol. The latter approach in the block power method replaces single-vectors r and s with block-vectors, matrices R and S. Every column of R approximates one of the leading principal components, while all columns are iterated simultaneously. … P Applications. k λ [21] The residual fractional eigenvalue plots, that is, See also the elastic map algorithm and principal geodesic analysis. of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector (where as a function of component number [38] Trading multiple swap instruments which are usually a function of 30–500 other market quotable swap instruments is sought to be reduced to usually 3 or 4 principal components, representing the path of interest rates on a macro basis. p The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset. {\displaystyle k} p are often thought of as superpositions of eigenvectors in the appropriate function space. 4, pp. [17] The FRV curves for NMF is decreasing continuously [21] when the NMF components are constructed sequentially,[20] indicating the continuous capturing of quasi-static noise; then converge to higher levels than PCA,[21] indicating the less over-fitting property of NMF. The transformation T = X W maps a data vector x(i) from an original space of p variables to a new space of p variables which are uncorrelated over the dataset. The rotation has no eigenevector[except the case of 180-degree rotation]. Use a matrix equation to solve a system of first-order linear differential equations. 
E ) Gate Cse Syllabus, Samsung Gas Oven Won't Heat Past 150, Simple Mushroom Soup Recipe, Zinus Armita Box Spring Assembly, Hard Work Is The Key To Success Essay 100 Words, Plant Glossary With Pictures Pdf,
CommonCrawl
% Minimal preamble added so that this fragment compiles on its own; the original
% class and package choices are not known and may differ.
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newcommand{\R}{\mathbb{R}}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator*{\minz}{minimize}
\DeclareMathOperator{\supp}{supp}
\newtheorem{theorem}{Theorem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}

\begin{document}
\title{Non-local control in the conduction coefficients: \\ well posedness and convergence to the local limit}
\author{Anton Evgrafov\textsuperscript{1,*} and Jos\'e C. Bellido\textsuperscript{2} \\ \small 1 Department of Mechanical Engineering, Technical University of Denmark \\ \small 2 Department of Mathematics, University of Castilla-La Mancha \\ \small * [email protected]}
\maketitle
\noindent \paragraph*{Abstract:} We consider a problem of optimal distribution of conductivities in a system governed by a non-local diffusion law. The problem stems from applications in optimal design and, more specifically, topology optimization. We propose a novel parametrization of non-local material properties. With this parametrization the non-local diffusion law in the limit of vanishing non-local interaction horizons converges to the famous and ubiquitously used generalized Laplacian with the SIMP (Solid Isotropic Material with Penalization) material model. The optimal control problem for the limiting local model is typically ill-posed and does not attain its infimum without additional regularization. Surprisingly, its non-local counterpart attains its global minima in many practical situations, as we demonstrate in this work. In spite of this qualitatively different behaviour, we are able to partially characterize the relationship between the non-local and the local optimal control problems. We also complement our theoretical findings with numerical examples, which illustrate the viability of our approach to optimal design practitioners.
\noindent \paragraph*{Keywords:} Nonlocal optimal design, nonlocal optimal control in the coefficients, convergence to local problems, numerical approximation of nonlocal problems.
\section{Introduction}
\label{sec:intro}
Nonlocal problems currently receive a great deal of attention owing to the wide range of applications they have in a variety of contexts.
In particular, we mention micromechanics~\cite{Rogers91}, image processing~\cite{Gios08}, phase transitions~\cite{AlBe98}, pattern formation~\cite{Fife03}, population dispersal~\cite{CoCoElMa07}, optimal design~\cite{andres2015nonlocal} and shape optimization~\cite{bonder}, optimal control~\cite{Delia1} and inverse problems~\cite{Delia2}; see also the very recent monograph on the subject~\cite{Du_book}. Among nonlocal problems, fractional or nonlocal diffusion plays a central role. It has attracted enormous interest, and a great deal of work has been done over the past twenty years. The literature on the subject and on the connections of nonlocal diffusion with remarkable applications is vast. We refer the interested readers to the monographs~\cite{Mazon_book, Valdinoci_book} and the references therein. In the context of continuum mechanics, non-local modelling goes back at least to Eringen's model~\cite{eringen,MMS_2019}, and more recently has focused on peridynamic modelling~\cite{kunin1975,silling2000reformulation,Peridynamics_book,Handbook,EmWe07, MengeshaDu}. These latter models refrain from using the gradients of the state fields with the goal of a unified description of singular phenomena, such as fracture or dislocation. When considering scalar equations, such as for example the steady state heat equation, peridynamics equations may essentially be seen as nonlocal diffusion equations on bounded domains~\cite{AkMe10,andres2015type}. In this paper we consider a prototypical optimal design problem for diffusion phenomena, in which one has to determine the best way of distributing conducting materials inside a given computational domain. For models governed by local diffusion such problems have been studied for a long time and are quite well understood, see for example~\cite{allaire2012shape,bendose2003topology,Pablo_book} and references therein.
As a very rough summary we can say that these problems are typically ill-posed and do not attain their infimum. One possible way of dealing with this issue is to bring the limits of minimizing sequences into consideration and interpret them as composite materials obtained from mixing the original materials in the sense described by the theory of homogenization. Another option is to restrict the set of considered material distributions by introducing constraints or penalty functions with a regularizing effect. An extremely successful example of the latter approach, which is nowadays widely utilized in engineering practice, is the SIMP (Solid Isotropic Material with Penalization) material parametrization model combined with additional regularization techniques~\cite{bendose2003topology,bendsoe1988generating,bourdin2001filters,sigmund1997design}. An interesting recent study, which can be interpreted in the light of comparing the two outlined approaches in the context of steady state heat conduction, is~\cite{Suna}. Research in optimal design of systems described by non-local governing equations is in its infancy. Our present model is inspired by the recent studies~\cite{andres2015nonlocal, andres2017convergence}, where a very similar optimal design problem for a nonlocal diffusion state law was analyzed. The main novelty of our work is the way in which the material properties (conductivities) enter the model. We choose a nonlocal interpolation of material properties that allows us to establish a natural link between the nonlocal optimal design problem and a local one in which material properties are interpolated by the SIMP scheme. Our main objective is to analyze this \emph{nonlocal} optimal design problem and to characterize its relationship with the local optimal design problem as the nonlocal parameter (the \emph{interaction horizon}) goes to zero.
We have documented our preliminary findings in a brief note~\cite{eb2019sensitivity}, where we have been primarily concerned with the qualitative relation between our non-local model and a particular heuristic regularization method (``sensitivity filtering'', see~\cite{sigmund1997design}) for the local model with SIMP. This work includes the technical results, proofs, and numerical experiments, which have been omitted from~\cite{eb2019sensitivity} owing to space constraints. The outline of the paper is as follows. In Section~\ref{sec:model} we formulate the non-local state equations and discuss their well-posedness. We also state the nonlocal optimal design problem, which is going to be the main subject of our study, as well as its local counterpart. Section~\ref{sec:model} also includes Theorem~\ref{thm:dsgn2state}, an interesting result illustrating the continuity of the conductivity-to-state map for the non-local problem. In Section~\ref{sec:exist} we include several existence results for the nonlocal optimal design problem depending on the SIMP penalization parameter. In Section~\ref{sec:gamma} we address the convergence of the nonlocal problems to the local one and the connection of our proposed model with SIMP in the local case. Finally, in Section~\ref{sec:num}, a numerical approach to the nonlocal optimal design problems is described and several numerical examples are presented.

\section{Problem formulation and preliminaries}
\label{sec:model}
\subsection{Local control in the conduction coefficients}
\label{subsec:locprob}
We begin our discussion with a well understood optimal control problem in the conduction coefficients of the generalized Laplace equation, or topology optimization through material distribution, see for example~\cite{cea1970example,allaire2012shape,bendose2003topology} and references therein. This problem will serve us both as a reference physical model and as a limiting problem later on.
Let \(\Omega \subset \R^n\), \(n\ge 2\), be an open, bounded, and connected domain. In this domain we consider the generalized Laplace equation with a volumetric source \(f \in L^2(\Omega)\), homogeneous Dirichlet boundary conditions, and a spatially heterogeneous diffusion (conduction) coefficient \(\kappa^{\text{loc}}\). Its weak solution \(u \in \mathcal{U}_0 = W^{1,2}_0(\Omega)\) satisfies the variational statement \begin{equation}\label{eq:w_loc} a_{\kappa^{\text{loc}}}(u,v) = \ell(v), \qquad \forall \: v \in \mathcal{U}_0, \end{equation} where the parametric bilinear form \(a_{\kappa^{\text{loc}}}\) and the linear functional \(\ell\) are given by \begin{equation}\label{eq:al_loc} \begin{aligned} a_{\kappa^{\text{loc}}}(u,v) &= \int_{\Omega} \kappa^{\text{loc}}(x) \nabla u(x)\cdot \nabla v(x)\,\mathrm{d}x,\quad \text{and}\\ \ell(v) &= \int_{\Omega} f(x)v(x)\,\mathrm{d}x. \end{aligned} \end{equation} We recall that \begin{equation}\label{eq:loc_dir_pple} u \text{\ solves~\eqref{eq:w_loc}} \quad\iff\quad u = \argmin_{v \in \mathcal{U}_0} I_{\kappa^{\text{loc}}}(v) := \bigg[\frac{1}{2} a_{\kappa^{\text{loc}}}(v,v) - \ell(v)\bigg], \end{equation} where \(I_{\kappa^{\text{loc}}}\) is an associated quadratic energy functional. We will utilize the shorthand notation \(\mathcal{S}^{\text{loc}}(\kappa^{\text{loc}}) := \argmin_{v \in \mathcal{U}_0} I_{\kappa^{\text{loc}}}(v)\) for the control coefficient-to-state operator for this system. In connection with this governing equation we consider a problem of optimal distribution of conductive material in \(\Omega\) under simple constraints. 
Specifically, we will define the following convex set of admissible material distributions: \begin{equation}\label{eq:adm_loc} \mathcal{A} = \bigg\{\, \rho \in L^\infty(\Omega) \mid \underline{\rho}\le \rho(\cdot)\le \overline{\rho}, \text{a.e.\ in \(\Omega\)}, \int_{\Omega} \rho(x)\,\mathrm{d}x \le \gamma|\Omega| \,\bigg\}, \end{equation} where \(\gamma \in ]\underline{\rho},\overline{\rho}[\) is a given volume fraction, and \(0 < \underline{\rho} < \overline{\rho} < \infty\) are given bounds.\footnote{ Our setup can be easily generalized to the situation when \(\underline{\rho}, \overline{\rho} \in L^\infty(\Omega)\).} We assume that the local material conductivity \(\kappa^{\text{loc}}\) is related to the control parameter \(\rho \in \mathcal{A}\) through the so-called SIMP (Solid Isotropic Material with Penalization) model, see~\cite{bendose2003topology,allaire2012shape}: \(\kappa^{\text{loc}}(x) = \rho^p(x)\), where \(p \ge 1\) plays the role of a penalty parameter in certain optimal design problems. Each \(\rho\in \mathcal{A}\) is therefore mapped into \(\kappa^{\text{loc}}\) satisfying the bounds \(\underline{\kappa} = \underline{\rho}^p \le \kappa^{\text{loc}} \le \overline{\kappa} = \overline{\rho}^p\). For a given performance functional \(J: \mathcal{A} \times \mathcal{U}_0 \to \R\) we consider the following optimization problem: \begin{equation}\label{eq:min_loc} \minz_{\rho \in \mathcal{A}} J(\rho,\mathcal{S}^{\text{loc}}(\rho^{p})). \end{equation} Note that the use of ``\(\minz\)'' instead of ``\(\inf\)'' does not automatically mean that the infimum is attained in~\eqref{eq:min_loc}. In fact, in most interesting situations this is not the case without additional regularization of the problem~\eqref{eq:min_loc}. We will return to this issue in Section~\ref{sec:exist}. Of particular interest to us will be the case of \(J(\rho,u) = \ell(u)\), corresponding to \emph{compliance minimization}.
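To make the local setup concrete, the following self-contained sketch (our own illustration, not taken from this paper) discretizes a one-dimensional analogue of~\eqref{eq:w_loc} with the SIMP conductivity \(\kappa^{\text{loc}} = \rho^p\) by finite differences and evaluates the compliance \(\ell(u)\); the grid, the solver, and all numerical parameters are illustrative assumptions.

```python
import numpy as np

def compliance_local(rho, p=3.0, f=1.0):
    """Illustrative 1D finite-difference sketch (our own assumption, not the
    scheme of the paper): solve -(rho(x)^p u'(x))' = f on (0, 1) with
    u(0) = u(1) = 0 and return the compliance ell(u) = int f u dx.
    rho holds densities on the m intervals of a uniform grid."""
    m = rho.size
    h = 1.0 / m
    kappa = rho ** p                    # SIMP material model: kappa_loc = rho^p
    n = m - 1                           # number of interior nodes
    A = np.zeros((n, n))
    for i in range(n):
        # node i sits between grid intervals i and i + 1
        A[i, i] = (kappa[i] + kappa[i + 1]) / h**2
        if i + 1 < n:
            A[i, i + 1] = A[i + 1, i] = -kappa[i + 1] / h**2
    u = np.linalg.solve(A, np.full(n, f))
    return f * h * u.sum()              # quadrature for ell(u) = int f u dx
```

For \(\rho \equiv 1\) the discrete solution reproduces \(u(x) = x(1-x)/2\) and the compliance \(1/12\); lowering the density anywhere increases the compliance, which is the monotonicity underlying compliance minimization.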
Note that in this case the \emph{reduced} compliance can be expressed in a variety of ways: \begin{equation} \mathfrak{c}(\rho) := \ell(\mathcal{S}^{\text{loc}}(\rho^{p})) = a_{\rho^p}(\mathcal{S}^{\text{loc}}(\rho^{p}),\mathcal{S}^{\text{loc}}(\rho^{p})) = -2 I_{\rho^p}(\mathcal{S}^{\text{loc}}(\rho^{p})). \end{equation} The last expression in conjunction with~\eqref{eq:loc_dir_pple} allows us to state the compliance minimization problem as a saddle point problem, which will be used later on. \subsection{Non-local state equation} We will now introduce a non-local analogue of the local governing equations~\eqref{eq:w_loc}, \eqref{eq:al_loc}. We will use \(\delta>0\) to denote the non-local interaction horizon, and \(B(x,\delta) = \{\, y \in \R^n : |x-y| < \delta \,\}\) to denote an open ball of radius \(\delta\) centered at \(x \in \R^n\). Let \(\Omega_{\delta} = \cup_{x \in \Omega} B(x,\delta)\) be the set of points located within distance \(\delta\) from points in \(\Omega\), and \(\Gamma^{\text{nl}} = \Omega_{\delta}\setminus\Omega\) be the ``non-local boundary'' of \(\Omega\). We consider nonlocal linear diffusion equations, which in this scalar framework can also be seen as peridynamics equations. Similarly to~\eqref{eq:w_loc}, they will be formulated with the help of the following parametric bilinear form \(a_{\delta,\kappa}(\cdot,\cdot)\): \begin{equation}\label{eq:al} \begin{aligned} a_{\delta,\kappa}(u,v) &= \int_{\Omega_{\delta}}\int_{\Omega_{\delta}} \kappa(x,x') A_{\delta}(|x-x'|)\frac{u(x)-u(x')}{|x-x'|}\frac{v(x)-v(x')}{|x-x'|}\,\mathrm{d}x\,\mathrm{d}x', \end{aligned} \end{equation} while we use the same linear functional \(\ell(\cdot)\) defined in~\eqref{eq:al_loc}. In the equations above, \(\kappa \in L^\infty(\Omega_{\delta}\times\Omega_{\delta})\) is a ``nonlocal conductivity'', and \(A_{\delta}:\R_+\to \R_+\) is a radial kernel satisfying certain conditions.
More precisely, we assume that \begin{equation}\label{eq:adm_kappa} \kappa \in \mathcal{K}^\delta = \{\, \tilde{\kappa} \in L^\infty(\Omega_{\delta}\times\Omega_{\delta}) \mid \underline{\kappa}\le \tilde{\kappa}(x,x')=\tilde{\kappa}(x',x) \le \overline{\kappa}, \forall x,x'\in \Omega_{\delta} \,\}, \end{equation} for some \(0< \underline{\kappa} < \overline{\kappa} < +\infty\)\footnote{ Dependence on \(\delta\) in the definition of the set \(\mathcal{K}^\delta\) could be omitted by considering nonlocal conductivities defined on the whole space \(\R^n\times\R^n\). }, and that \(A_\delta\) satisfies the following conditions: \begin{equation}\label{eq:kernelnrm} \frac{1}{n}\int_{\R^n} A_\delta(|x|)\,\mathrm{d}x = 1, \end{equation} \begin{equation}\label{eq:smallsupport} \supp A_\delta(|\cdot|) \subset B(0,\delta), \quad \forall \; \delta>0, \end{equation} and there exist \(s \in (0,1)\) and \(c_{\delta}>0\) such that \begin{equation}\label{fractionallb} A_\delta(|x|) \ge \frac{c_{\delta}}{|x|^{n+2s-2}}, \quad\forall \; x \in B(0,\delta/2) \setminus\{0\}, \delta>0. \end{equation} Let \(\mathcal{U}^\delta = \{\, u \in L^2(\Omega_{\delta}) \mid a_{\delta,1}(u,u) < +\infty\,\}\), and \(\mathcal{U}_0^\delta\) be the topological closure of \(C^\infty_c(\Omega)\) (where we extend the functions in \(C^\infty_c(\Omega)\) by \(0\) outside of \(\Omega\)) in \(\mathcal{U}^\delta\) with respect to the semi-inner product \(a_{\delta,1}(\cdot,\cdot)\). Since the nonlocal conductivities are uniformly bounded from above and away from zero, for any \(\kappa \in \mathcal{K}^\delta\) and \(u \in \mathcal{U}^\delta\) we have the estimates \begin{equation*} \underline{\kappa}a_{\delta,1}(u,u) \le a_{\delta,\kappa}(u,u) \le \overline{\kappa}a_{\delta,1}(u,u). \end{equation*} Consequently, \(u\in {\mathcal{U}}^\delta\) if and only if \(\|u\|_{L^2(\Omega_{\delta})} < +\infty\) and \(a_{\delta,\kappa}(u,u)<+\infty\).
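As a sanity check of the hypotheses~\eqref{eq:kernelnrm} and~\eqref{eq:smallsupport}, the following sketch (our own illustration) constructs, for \(n=1\), a fractional-type kernel \(A_\delta(r) = c\,r^{-(n+2s-2)}\) supported in \(B(0,\delta)\), with the constant \(c = (1-s)\delta^{2s-2}\) obtained by direct integration of the normalization condition, and then verifies that normalization numerically.

```python
import numpy as np

def fractional_kernel(delta, s):
    """For n = 1: A_delta(r) = c * r^{-(n + 2s - 2)} = c * r^{1 - 2s} on
    (0, delta) and zero outside.  The constant c = (1 - s) * delta^{2s - 2}
    is our own computation, chosen so that int_R A_delta(|x|) dx = 1."""
    c = (1.0 - s) * delta ** (2.0 * s - 2.0)
    return lambda r: np.where(np.abs(r) < delta,
                              c * np.abs(r) ** (1.0 - 2.0 * s), 0.0)

# midpoint-rule check of the normalization over (-delta, delta)
delta, s = 0.1, 0.4
A = fractional_kernel(delta, s)
m = 200_000
dr = delta / m
r = dr * (np.arange(m) + 0.5)           # midpoints of (0, delta)
integral = 2.0 * np.sum(A(r)) * dr      # factor 2: the kernel is even
```

The same construction also satisfies the lower bound~\eqref{fractionallb} with equality, since it is the truncated fractional kernel itself.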
In this notation, the non-local governing equations we will study can be stated as follows: find \(u\in \mathcal{U}_0^\delta\) such that \begin{equation}\label{eq:w} a_{\delta,\kappa}(u,v) = \ell(v), \quad \forall v \in \mathcal{U}_0^\delta. \end{equation} \subsection{Non-local control in the conduction coefficients} We will now introduce a non-local analogue of~\eqref{eq:min_loc}, where the local governing equations~\eqref{eq:w_loc}, \eqref{eq:al_loc} are replaced with their non-local analogue introduced in the previous section. Similarly to~\eqref{eq:adm_loc}, we define the following convex set of admissible material distributions: \begin{equation}\label{eq:adm} \mathcal{A}^\delta = \bigg\{\, \rho \in L^\infty(\Omega_{\delta}) \mid \underline{\rho}\le \rho(\cdot)\le \overline{\rho}, \text{a.e.\ in \(\Omega_{\delta}\)}, \int_{\Omega} \rho(x)\,\mathrm{d}x \le \gamma|\Omega| \,\bigg\}, \end{equation} where the parameters are exactly as in~\eqref{eq:adm_loc}. We still assume that the local material conductivity \(\kappa^{\text{loc}}\) satisfies the SIMP model \(\kappa^{\text{loc}}(x) = \rho^p(x)\), \(x\in\Omega_{\delta}\). In addition, we assume that the non-local conductivity \(\kappa(x,x')\) entering~\eqref{eq:w} is simply a geometric mean of \(\kappa^{\text{loc}}(x)\) and \(\kappa^{\text{loc}}(x')\), that is, \(\kappa(x,x') = \sqrt{\kappa^{\text{loc}}(x)\kappa^{\text{loc}}(x')} = \rho^{p/2}(x)\rho^{p/2}(x')\). For a given performance functional \(J: \mathcal{A}^\delta \times \mathcal{U}^\delta \to \R\) we consider the following optimization problem: \begin{equation}\label{eq:min} \begin{aligned} &\minz_{(\rho,u) \in \mathcal{A}^\delta\times\mathcal{U}_0^\delta} J(\rho,u),\\ &\text{subject to \(u\) solving~\eqref{eq:w} with \(\kappa(x,x') = \rho^{p/2}(x)\rho^{p/2}(x')\)}. \end{aligned} \end{equation} We will still refer to the case \(J(\rho,u) = \ell(u)\) as the \emph{compliance minimization} problem. 
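The structure of the bilinear form~\eqref{eq:al} with the geometric-mean conductivity \(\kappa(x,x') = \rho^{p/2}(x)\rho^{p/2}(x')\) can be illustrated by a direct midpoint-quadrature assembly on a uniform one-dimensional grid; the constant truncated kernel and all discretization choices below are our own simplifying assumptions, not a scheme prescribed by this paper.

```python
import numpy as np

def assemble_nonlocal_form(rho, delta, p=3.0):
    """Matrix K of the quadratic form u -> a_{delta,kappa}(u, u) on a uniform
    grid over (0, 1), with kappa(x, x') = rho(x)^{p/2} * rho(x')^{p/2} and an
    illustrative constant kernel A_delta = 1/(2*delta) on (0, delta)."""
    m = rho.size
    h = 1.0 / m
    x = h * (np.arange(m) + 0.5)                 # cell midpoints
    khalf = rho ** (p / 2.0)                     # rho^{p/2} factors
    K = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            r = abs(x[i] - x[j])
            if i == j or r >= delta:             # kernel support: r < delta
                continue
            # quadrature weight of the ordered pair (i, j) in the double integral
            w = khalf[i] * khalf[j] / (2.0 * delta) * h * h / r**2
            K[i, i] += w
            K[j, j] += w
            K[i, j] -= w
            K[j, i] -= w
    return K
```

By construction \(u^\top K u\) is a nonnegative weighted sum of squared differences \((u_i-u_j)^2\), so \(K\) is symmetric positive semidefinite and annihilates constants, mirroring the fact that the form only sees differences \(u(x)-u(x')\).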
\begin{remark} Essentially, the only difference between the non-local optimal design problem considered here and the one in~\cite{andres2015nonlocal} is the way the control variables enter the bilinear form. In order to preserve the symmetry of the form with respect to \((x,x')\) one takes an average of the local conductivities at \(x\) and \(x'\) to be the non-local conductivity in the bilinear form defining the non-local governing equations. The authors of~\cite{andres2015nonlocal} take an arithmetic mean, whereas we use a geometric mean (for \(p=1\)) for the same purpose. Advantages of this choice will become clear when we discuss the relationship between the nonlocal problem described in this section and the widely used and successful SIMP method for (local) optimal design outlined in Section~\ref{subsec:locprob}. \end{remark} \subsection{Well-posedness and continuity of the conductivity-to-state operator} In this subsection we recall some known results about the non-local equation~\eqref{eq:w}. Under the assumptions~\eqref{eq:kernelnrm}--\eqref{fractionallb}, \(a_{\delta,1}(\cdot,\cdot)\) defines an inner product on \(\mathcal{U}_0^\delta\), thereby making it a Hilbert space~\cite{andres2015type}. We denote by \(\|\cdot \|_{\mathcal{U}_0^\delta}\) the norm induced by this inner product.
Furthermore, owing to~\eqref{fractionallb}, there are constants \(\tilde{c}_{\delta}>0\), \(\hat{c}_{\delta}>0\) independent of \(u\) such that for any \(\delta>0\), \(\kappa \in \mathcal{K}^\delta\) and \(u\in \mathcal{U}_0^\delta\) we have the inequalities \begin{equation}\label{eq:L2coerc} \begin{aligned} a_{\delta,\kappa}(u,u) &\ge c_{\delta} \underline{\kappa}\int_{\Omega_{\delta}} \int_{\Omega_{\delta}\cap B(x,\delta/2)} \frac{(u(x)-u(x'))^2}{|x-x'|^{n+2s}} \,\mathrm{d}x'\,\mathrm{d}x \ge \tilde{c}_{\delta} \underline{\kappa}\int_{\Omega_{\delta}} \int_{\Omega_{\delta}} \frac{(u(x)-u(x'))^2}{|x-x'|^{n+2s}}\,\mathrm{d}x'\,\mathrm{d}x \\&\ge \hat{c}_{\delta} \underline{\kappa}\|u\|^2_{L^2(\Omega)}, \end{aligned} \end{equation} where the second inequality is a consequence of \cite[Proposition~6.1]{bellido2014} and the third inequality is established in~\cite[Lemma~6.2]{bellido2014}. When combined with the Cauchy--Bunyakovsky--Schwarz inequality, \eqref{eq:L2coerc} implies that \(\ell: \mathcal{U}_0^\delta \to \R\) is a continuous linear functional with respect to $\| \cdot\|_{{\mathcal{U}}_0^\delta}$. Consequently, we have verified the necessary assumptions for the application of the Lax--Milgram lemma (see, for example, \cite{brezis}) allowing us to conclude the following. \begin{theorem}\label{thm:existence} For each \(\delta \in ]0,\delta_0[\), \(\kappa \in \mathcal{K}^\delta\) and \(f \in L^2(\Omega)\) there is a unique solution \(u \in \mathcal{U}_0^\delta\) of~\eqref{eq:w}. Furthermore, this solution minimizes the associated quadratic energy functional: \begin{equation}\label{eq:nl_dir_pple} u = \argmin_{v \in \mathcal{U}_0^\delta} I_{\delta,\kappa}(v):= \bigg[ \frac{1}{2}a_{\delta,\kappa}(v,v) - \ell(v) \bigg]. \end{equation} \end{theorem} This well-posedness result allows us to unambiguously define the coefficient-to-state operator \(\mathcal{S}_\delta(\kappa):=\argmin_{v \in \mathcal{U}_0^\delta} I_{\delta,\kappa}(v)\), exactly as in the local case.
Furthermore, we define the \emph{reduced} non-local compliance as \begin{equation} \begin{aligned} \mathfrak{c}_\delta(\rho) &:= \ell(\mathcal{S}_\delta(\kappa)) = a_{\delta,\kappa}(\mathcal{S}_\delta(\kappa),\mathcal{S}_\delta(\kappa)) = -2 I_{\delta,\kappa}(\mathcal{S}_\delta(\kappa)), \end{aligned} \end{equation} where \(\kappa(x,x') = \rho^{p/2}(x)\rho^{p/2}(x')\). Nonlocal diffusion and peridynamics equations have been studied extensively. Existence results for a much more general class of nonlinear and nonlocal variational principles have been obtained in~\cite{bellido2014}. In particular, well-posedness of equation~\eqref{eq:w} has been shown in~\cite[Theorem~1.2]{andres2015type} for slightly more general conditions on the kernel \(A_\delta\). More specifically, hypothesis~\eqref{fractionallb} could be further relaxed without sacrificing the conclusions of Theorem~\ref{thm:existence}. Even though our assumption~\eqref{fractionallb} is not optimal if one is only concerned with solvability of the non-local governing equations, it is convenient for us since it implies the compact embedding of \(\mathcal{U}_0^\delta\) into \(L^2(\Omega_{\delta})\), and ultimately a certain continuity of the coefficient-to-state operator for the non-local equation~\eqref{eq:w}. Indeed, let us revisit the string of inequalities~\eqref{eq:L2coerc}. Note that the third term in~\eqref{eq:L2coerc} is nothing but the squared Gagliardo seminorm \(|u|_{W^{s,2}(\Omega_{\delta})}^2\) (up to the constant factor \(\tilde{c}_{\delta}\underline{\kappa}\)). Therefore, in addition to the continuous embedding of \(\mathcal{U}_0^\delta\) into \(L^2(\Omega_{\delta})\), we also have a continuous embedding of \(\mathcal{U}_0^\delta\) into the fractional Sobolev space \(W^{s,2}(\Omega_{\delta})\). Since the latter space is compactly embedded into \(L^2(\Omega_{\delta})\) (see~\cite[Theorem~7.1]{di2012hitchhikers}), we also have a compact embedding of \(\mathcal{U}_0^\delta\) into \(L^2(\Omega_{\delta})\).
Let us now discuss the conductivity-to-state operator for the non-local equation~\eqref{eq:w}. In the case of the classical, local diffusion (generalized Laplace) equation, the conductivity-to-state operator \(\mathcal{S}^{\text{loc}}\) is famously not continuous with respect to weak\(^*\) convergence of conductivities. In fact, a stronger \(H\)-convergence, or \(G\)-convergence in our self-adjoint case, has been specifically defined to obtain such a result~\cite{Murat1,Murat2, spagnolo1967sul,Tartar1,Tartar2}. The notion of \(H\)-convergence has been recently extended to the nonlocal \(p\)-Laplacian in~\cite{bonder_Hconv}, proving its sequential compactness for bounded coefficients. In stark contrast, the conductivity-to-state operator \(\mathcal{S}_\delta\) for the non-local equation~\eqref{eq:w} is continuous with respect to weak\(^*\) convergence of conductivities in \(\mathcal{K}^\delta\). This fact has been established in~\cite[Theorem~6]{andres2015nonlocal}. For the sake of completeness and clarity we include a simpler proof of this result, which is in the same spirit as the simple characterization of $H$-convergence for fractional $p$-Laplacian equations given by the authors in~\cite{H-convergence}. \begin{theorem}\label{thm:dsgn2state} Let us fix \(\delta \in ]0,\delta_0[\) and \(f \in L^2(\Omega)\). Consider a sequence of conductivities \(\kappa_j \in \mathcal{K}^\delta\), together with the corresponding sequence \(u_j = \mathcal{S}_\delta(\kappa_j) \in \mathcal{U}_0^\delta\) of solutions to~\eqref{eq:w}. Assume that \(\kappa_j \rightharpoonup \hat{\kappa}\), weak\(^*\) in \(L^\infty(\Omega_{\delta}\times\Omega_{\delta})\), and let \(\hat{u} = \mathcal{S}_\delta(\hat{\kappa}) \in \mathcal{U}_0^\delta\) be the corresponding solution to~\eqref{eq:w}. Then, we have \(\lim_{j\to\infty} \|u_{j}-\hat{u}\|_{L^2(\Omega_{\delta})} =\lim_{j\to\infty} \|u_{j}-\hat{u}\|_{\mathcal{U}_0^\delta} = 0\).
\end{theorem} \begin{proof} Recall that \(\hat{u}\) is the unique minimizer of \(I_{\delta,\hat{\kappa}}\), and \(u_j\) is that of \(I_{\delta,\kappa_j}\). Keeping in mind that \(\kappa_j\rightharpoonup \hat{\kappa}\), weak\(^*\) in \(L^\infty(\Omega_{\delta}\times\Omega_{\delta})\), and the inclusion \begin{equation*} A_\delta(|x-x'|)\frac{(\hat{u}(x)-\hat{u}(x'))^2}{|x-x'|^2} \in L^1(\Omega_{\delta}\times \Omega_{\delta})\end{equation*} we have the inequality \begin{equation}\label{eq:conv1} \limsup_{{j}\to\infty} I_{\delta,\kappa_{j}}( u_{j}) \le \lim_{{j}\to\infty} I_{\delta,\kappa_{j}}(\hat{u})= I_{\delta,\hat{\kappa}}(\hat{u}). \end{equation} Let us now extract a subsequence \(\{(\kappa_{j'},u_{j'})\}\), \({j'}=1,2,\dots\) from the original sequence such that \(\liminf_{j\to\infty} I_{\delta,\kappa_{j}}( u_{j}) =\lim_{j'\to\infty} I_{\delta,\kappa_{j'}}( u_{j'})\). Note that as a direct consequence of~\eqref{eq:L2coerc} we get a uniform estimate \(\underline{\kappa}\|u_{j}\|_{\mathcal{U}_0^\delta}^2 \le a_{\delta,\kappa_j}(u_j,u_j) = \ell(u_j) \le \hat{c}_{\delta}^{-1/2}(\overline{\kappa}/\underline{\kappa})^{1/2}\|f\|_{L^2(\Omega)} \|u_{j}\|_{\mathcal{U}_0^\delta}\), \(j=1,2,\dots\) Therefore, there exist \(u\in \mathcal{U}_0^\delta\) and a further subsequence, labelled by \(\{u_{j''}\}\), \(j''=1,2,\dots\), such that \begin{equation*} u_{j''}\rightharpoonup u, \text{weakly in\ } \mathcal{U}_0^\delta, \quad u_{j''}\rightarrow u, \text{ strongly in\ } L^2(\Omega_{\delta}), \text{\ and\ } u_{j''}(x)\rightarrow u(x), \text{\ a.e.\ in \(\Omega_{\delta}\)}.
\end{equation*} Let us define the finite measures \begin{equation*} \begin{aligned} \mu_j(E)&=\int_E\kappa_j(x,x')\,\mathrm{d}x\,\mathrm{d}x'=\int_{\Omega_{\delta}\times\Omega_{\delta}}\chi_{E}(x,x')\kappa_j(x,x')\,\mathrm{d}x\,\mathrm{d}x', \quad j=1,2,\dots \quad\text{and}\\ \hat{\mu}(E)&=\int_E\hat{\kappa}(x,x')\,\mathrm{d}x\,\mathrm{d}x'=\int_{\Omega_{\delta}\times\Omega_{\delta}}\chi_{E}(x,x')\hat{\kappa}(x,x')\,\mathrm{d}x\,\mathrm{d}x', \end{aligned} \end{equation*} where \(E\subset \Omega_{\delta}\times\Omega_{\delta}\) is an arbitrary Lebesgue measurable set and \(\chi_E \in L^1(\Omega_{\delta}\times\Omega_{\delta})\) is its characteristic function. Weak\(^*\) convergence of \(\kappa_j\) to \(\hat{\kappa}\) implies the strong convergence of measures \(\lim_{j\to\infty} \mu_j = \hat{\mu}\) (i.e. \(\lim_{j\rightarrow \infty} \mu_j(E)=\hat{\mu}(E)\) for any measurable set \(E\subset (\Omega_{\delta}\times\Omega_{\delta})\)). Furthermore, in view of~\eqref{eq:conv1} and the continuity of \(\ell\) the non-negative sequence \(a_{\delta,\kappa_{j}}(u_{j},u_{j}) = 2 [I_{\delta,\kappa_{j}}(u_{j}) + \ell(u_{j})]\), \({j}=1,2,\dots\) is bounded from above. Therefore we can apply the generalized Fatou's lemma \cite[Prop. 17, pg. 269]{royden}: \begin{equation*} \begin{aligned} \liminf_{{j}\to\infty} I_{\delta,\kappa_{j}}(u_{j}) &= \lim_{{j''}\to\infty}\bigg[ \frac{1}{2}\int_{\Omega_{\delta}\times\Omega_{\delta}} A_\delta(|x-x'|)\frac{(u_{j''}(x)-u_{j''}(x'))^2}{|x-x'|^2} \,\mathrm{d}\mu_{j''}\bigg] - \lim_{{j''}\to\infty}\ell(u_{j''}) \\& \ge \frac{1}{2}\int_{\Omega_{\delta}\times\Omega_{\delta}} A_\delta(|x-x'|)\frac{(u(x)-u(x'))^2}{|x-x'|^2} \,\mathrm{d}\hat{\mu} - \ell(u) =I_{\delta,\hat{\kappa}}(u) \ge I_{\delta,\hat{\kappa}}(\hat{u}). \end{aligned} \end{equation*} In particular, \(\lim_{j\to\infty} I_{\delta,\kappa_j}(u_j)= I_{\delta,\hat{\kappa}}(\hat{u})\). 
The strong convergence of \(u_j\) towards \(\hat{u}\) in \(\mathcal{U}_0^\delta\), and, owing to the continuous embedding, also in \(L^2(\Omega_{\delta})\), follows from the already established facts as follows: \begin{equation*} \begin{aligned} 0\le \limsup_{{j}\to\infty} \underline{\kappa} \|u_{j} - \hat{u}\|^2_{\mathcal{U}_0^\delta} &\le \lim_{{j}\to\infty} [ \underbrace{a_{\delta,\kappa_{j}}(u_{j},u_{j})}_{=-2I_{\delta,\kappa_j}(u_j)} \underbrace{-2 a_{\delta,\kappa_{j}}(u_{j},\hat{u}) + a_{\delta,\kappa_{j}}(\hat{u},\hat{u})}_{=2I_{\delta,\kappa_{j}}(\hat{u})}] = -2I_{\delta,\hat{\kappa}}(\hat{u}) + 2I_{\delta,\hat{\kappa}}(\hat{u}) = 0. \end{aligned} \end{equation*} \end{proof} \section{Existence of optimal conductivity distributions} \label{sec:exist} Existence of solutions for the optimization problem~\eqref{eq:min} in the special case of compliance minimization, which corresponds to \(J(\rho,u) = \ell(u)\), has been briefly outlined in~\cite{eb2019sensitivity}. For the sake of keeping this manuscript self-contained, we include the short proofs of these results here. We also establish existence of solutions to~\eqref{eq:min} for more general objective functions, but for a specific penalization value \(p=2\). \subsection{Compliance minimization, convex case: \(p=1\)} \label{subsec:convex} Let us first consider the convex case of compliance minimization, which is obtained by setting \(p=1\), \(J(\rho,u) = \ell(u)\) in~\eqref{eq:min}. The argument appeals to the convexity of the problem\footnote{ Note that the same arguments apply to the practically uninteresting case \(0<p<1\) as well.}, see~\cite{cea1970example}. \begin{proposition}\label{prop:compl_p1} The compliance minimization problem~\eqref{eq:min} (that is, \(J(\rho,u) = \ell(u)\)) admits an optimal solution \((\rho^*,u^*) \in \mathcal{A}^\delta\times \mathcal{U}_0^\delta\) for \(p=1\).
\end{proposition} \begin{proof} In view of the existence of states for every conductivity distribution \(\rho \in \mathcal{A}^\delta\), and also since these states satisfy the energy minimization principle~\eqref{eq:nl_dir_pple}, our optimal design problem can be equivalently stated as the following saddle point problem: \begin{equation*} \max_{\rho \in \mathcal{A}^\delta} \min_{u \in \mathcal{U}_0^\delta} I_{\delta,\kappa}(u), \end{equation*} where \(\kappa(x,x')= \rho^{1/2}(x)\rho^{1/2}(x')\). Note that the map \(\R^2_{+}\ni (\xi,\eta) \mapsto \xi^{1/2}\eta^{1/2} \in \R\) is concave. Consequently, the map \(\mathcal{A}^\delta \ni \rho \mapsto I_{\delta,\kappa}(u) \in \R\) is concave and continuous (with respect to the \(L^\infty(\Omega_{\delta})\)-norm) for each \(u\in \mathcal{U}_0^\delta\), where continuity is owing to Lebesgue's dominated convergence theorem. Therefore, the map \(\mathcal{A}^\delta \ni \rho \mapsto I_{\delta,\kappa}(u) \in \R\) is weak\(^*\) sequentially upper semicontinuous in \(L^\infty(\Omega_{\delta})\). This property is preserved under taking the infimum, therefore \(\mathcal{A}^\delta \ni \rho \mapsto \min_{u\in\mathcal{U}_0^\delta} I_{\delta,\kappa}(u) \in \R\) is weak\(^*\) sequentially upper semicontinuous in \(L^\infty(\Omega_{\delta})\). Finally, the set \(\mathcal{A}^\delta\) is non-empty, closed and convex in \(L^\infty(\Omega_{\delta})\), thereby also weak\(^*\) sequentially compact. It only remains to apply Weierstrass' existence theorem to conclude the proof. \end{proof} \subsection{Compliance minimization, nonconvex case: \(1<p\le 2\)} \label{subsec:nonconvex} Contrary to the case \(p=1\), where lower semicontinuity in the appropriate weak\(^*\) topology holds in the local case, thereby ensuring the existence of optimal solutions~\cite{cea1970example}, for \(p>1\) the nonlinear dependence on the control variable in the state equation destroys these properties.
Consequently, additional regularization is required to guarantee the existence of optimal solutions for the local compliance minimization problem with SIMP, a fact which is also confirmed and reflected in numerous numerical algorithms based on the SIMP method~\cite{allaire2012shape,bendose2003topology}. Surprisingly, the non-local compliance minimization problem still attains its infimum even in the non-convex case \(1<p\le 2\). We begin with the following simple statement. \begin{lemma}\label{prop:wstar_cont} The map \(\mathcal{A}^\delta \ni \rho \mapsto \rho(x)\rho(x') \in L^\infty(\Omega_{\delta}\times\Omega_{\delta})\) is sequentially continuous in the weak\(^*\) topology of \(L^\infty\). \end{lemma} \begin{proof} Consider a sequence \(\rho_k \in \mathcal{A}^\delta\), \(k=1,2,\dots\) with \(\rho_k \rightharpoonup \hat{\rho}\), weak\(^*\) in \(L^\infty(\Omega_{\delta})\). Let us take an arbitrary \(\psi \in L^1(\Omega_{\delta}\times\Omega_{\delta})\). Owing to Fubini's theorem, the sequence \(\phi_k(x) = \int_{\Omega_{\delta}} \psi(x,x')\rho_k(x')\,\mathrm{d}x'\), \(k=1,2,\dots\), converges towards \(\hat{\phi}(x) = \int_{\Omega_{\delta}} \psi(x,x')\hat{\rho}(x')\,\mathrm{d}x'\), for almost all \(x \in \Omega_{\delta}\). As the elements of this sequence are dominated by an \(L^1(\Omega_{\delta})\) function \(\overline{\rho}\int_{\Omega_{\delta}} |\psi(x,x')|\,\mathrm{d}x'\), Lebesgue's dominated convergence theorem applies, implying that \(\lim_{k\to\infty} \|\phi_k-\hat{\phi}\|_{L^1(\Omega_{\delta})} = 0\).
Finally, \begin{equation*} \begin{aligned} 0 &\le \lim_{k\to\infty} \bigg| \int_{\Omega_{\delta}}\int_{\Omega_{\delta}} \psi(x,x')[\rho_k(x')\rho_k(x)-\hat{\rho}(x')\hat{\rho}(x)]\,\mathrm{d}x'\,\mathrm{d}x \bigg| = \lim_{k\to\infty} \bigg|\int_{\Omega_{\delta}} [\phi_k(x)\rho_k(x) - \hat{\phi}(x)\hat{\rho}(x)]\,\mathrm{d}x \bigg| \\ & \le \lim_{k\to\infty} \bigg|\int_{\Omega_{\delta}} \hat{\phi}(x)[\rho_k(x) - \hat{\rho}(x)]\,\mathrm{d}x \bigg| + \lim_{k\to\infty} \overline{\rho} \|\phi_k-\hat{\phi}\|_{L^1(\Omega_{\delta})} = 0. \end{aligned} \end{equation*} \end{proof} With this in mind, we can extend Proposition~\ref{prop:compl_p1} to the non-convex case. \begin{proposition}\label{prop:compl_p12} The compliance minimization problem~\eqref{eq:min} (that is, \(J(\rho,u) = \ell(u)\)), admits an optimal solution \((\rho^*,u^*) \in \mathcal{A}^\delta\times \mathcal{U}_0^\delta\) for \(1<p\le 2\). \end{proposition} \begin{proof} As in the proof of Proposition~\ref{prop:compl_p1} it is sufficient to establish the weak\(^*\) sequential upper semicontinuity of the map \(\mathcal{A}^\delta \ni \rho \mapsto I_{\delta,\kappa}(u) \in \R\) with \(\kappa(x,x')= \rho^{p/2}(x)\rho^{p/2}(x')\), since upper semicontinuity is preserved under taking minimum over \(u \in \mathcal{U}_0^\delta\). The required property follows easily from the norm-continuity and concavity of the map \(\mathcal{K}^\delta \ni \tilde{\kappa} \mapsto I_{\delta,\tilde{\kappa}^{p/2}}(u) \in \R\) for \(0<p\le 2\) and Lemma~\ref{prop:wstar_cont}. \end{proof} \subsection{More general objective functions: \(p=2\)} \label{subsec:nonconvex_gen} SIMP has been successfully utilized within other contexts than compliance minimization, see~\cite{bendose2003topology}. The non-local optimal design problem we consider admits optimal solutions without the need for further regularization in the special case \(p=2\) for a wide class of objective functions. 
\begin{proposition} Let \(p=2\), and assume that the objective function \((\rho,u)\mapsto J(\rho,u)\) is sequentially lower semicontinuous with respect to the weak\(^*\) topology of \(L^\infty(\Omega_{\delta})\) \(\times\) the norm topology of \(\mathcal{U}_0^\delta\). Then the optimal design problem~\eqref{eq:min} admits an optimal solution \((\rho^*,u^*) \in \mathcal{A}^\delta\times \mathcal{U}_0^\delta\). \end{proposition} \begin{proof} In view of the weak\(^*\) compactness of \(\mathcal{A}^\delta\) in \(L^\infty(\Omega_{\delta})\), in order to apply the direct method of the calculus of variations and conclude the existence of optimal solutions it is sufficient to establish that for an arbitrary minimizing sequence \((\rho_k,u_k) \in \mathcal{A}^\delta\times\mathcal{U}_0^\delta\) with \(\rho_k \rightharpoonup \hat{\rho}\), weak\(^*\) in \(L^\infty(\Omega_{\delta})\), we have the corresponding convergence \(u_k \to \hat{u}\) in \(\mathcal{U}_0^\delta\), where \(\hat{u} = \mathcal{S}_{\delta}(\hat{\kappa})\) with \(\hat{\kappa}(x,x') = \hat{\rho}(x)\hat{\rho}(x')\). However, this follows immediately from Theorem~\ref{thm:dsgn2state} in view of Lemma~\ref{prop:wstar_cont}. \end{proof} \section{Convergence to the local problem as \(\delta\to 0\): connection to SIMP} \label{sec:gamma} We will now turn our attention to the relationship between the non-local compliance minimization problem and the local one, which arises as a natural candidate for the limiting object for vanishing non-local interaction horizons \(\delta \to 0\). More specifically, we would like to understand whether the infimum values of the nonlocal problems converge to the infimum of the local problem, and/or whether sequences of minimizers of the nonlocal problems converge towards minimizers of the local problem.
The standard framework for studying variational convergence of functionals, which is equipped with the precise vocabulary for formulating and answering such questions, is that of \(\Gamma\)-convergence~\cite{braides_beginners}. Unfortunately, in our situation it is impossible to expect the local compliance minimization problem to be the \(\Gamma\)-limit of the nonlocal compliance minimization problems for any \(p\in[1,2]\). Indeed, the \(\Gamma\)-limit is always a lower semicontinuous functional in the topology in which the \(\Gamma\)-convergence is set~\cite{braides_beginners}. However, the local compliance functional in the presence of SIMP penalization with \(p>1\) is \emph{not} lower semicontinuous in a relevant topology. This is precisely the fundamental reason for the lack of optimal solutions to the local compliance minimization problem in this situation, a fact which is well documented and understood in the literature~\cite{bendose2003topology,allaire2012shape}. In spite of this unfortunate insurmountable obstacle, in this section we would like to investigate what kind of relationship between the two problems can be salvaged for any \(p\in[1,2]\). In order to succinctly discuss convergence of minimizers it would be convenient to put them into the same function space, which is not a priori the case given the fact that \(\Omega_{\delta}\) decreases to \(\Omega\) as \(\delta\to 0\). Since we are only concerned with small \(\delta >0\), we fix an arbitrary \(\delta_0 > 0\) and will only consider \(\delta \in ]0,\delta_0[\). This allows us to consider material distributions to be elements of the ``largest'' space \(L^\infty(\Omega_{\delta_0})\), extending them by \(\underline{\rho}\) outside of their domain of definition \(\Omega_\delta\), \(0<\delta<\delta_0\). (The same applies to the limiting local model, if we ``by continuity'' put \(\Omega_0 = \Omega\).)
In a similar fashion we will extend the state functions by \(0\) outside of their domain of definition \(\Omega_\delta\), \(0\le \delta<\delta_0\). With such an extension we have that both \(\mathfrak{c}_\delta\) and \(\mathfrak{c}\) are defined on a subset of the same function space, \(L^\infty(\Omega_{\delta_0})\), which we equip with the weak\(^*\) topology. Convergence \(\delta\to 0\) will be understood as convergence for any sequence \(\delta_j\to 0\) as \(j\to \infty\). \subsection{``\(\Gamma\)-lower semi-continuity''} The first result is in the spirit of the \(\liminf\)-inequality of \(\Gamma\)-convergence, but with the unfortunate exception that the functional arguments do not converge in the natural topology of the function space we work with, except for \(p=1\). \begin{proposition}\label{prop:LI} Let \(\rho_\delta \in L^\infty(\Omega_{\delta_0})\) be such that \(\underline{\rho}\le \rho_\delta(x),\,\rho(x) \le \overline{\rho}\) for almost all \(x\in \Omega_{\delta_0}\). Assume that \(\rho^p_\delta\rightharpoonup \rho^p\), weak\(^*\) in \(L^\infty(\Omega_{\delta_0})\) as \(\delta\to 0\). Let \(u_{\delta} = \mathcal{S}_{\delta}(\kappa_\delta)\) be the solution of~\eqref{eq:w}, where \(\kappa_\delta(x,x')=\rho_\delta^{p/2}(x) \rho_\delta^{p/2}(x')\), and let \(u=\mathcal{S}(\rho^p)\) be the solution to~\eqref{eq:w_loc}. Then \begin{equation*} \liminf_{\delta\to 0} \mathfrak{c}_\delta(\rho_\delta) \ge \mathfrak{c}(\rho). \end{equation*} \end{proposition} \begin{proof} For convenience we put \(\kappa(x,x') = \rho^{p/2}(x)\rho^{p/2}(x')\); then \(\lim_{\delta\to 0}\kappa_\delta(x,x') =\kappa(x,x')\), for almost all \((x,x')\in\Omega_{\delta_0}^2\). Note that owing to~\cite[Theorem~1]{BBM} we have the inclusion \(u\in \mathcal{U}_0^\delta\), for all \(\delta \in ]0,\delta_0[\). Therefore, we can test~\eqref{eq:w} with \(v=u\) to get the equality \begin{equation*} a_{\delta,\kappa_\delta}(u_\delta,u) = \ell(u) = a_{\rho^p}(u,u).
\end{equation*} Consequently, the difference \(L_\delta := \mathfrak{c}_\delta(\rho_\delta)-\mathfrak{c}(\rho)= \ell(u_\delta-u)\) can be written as \begin{equation*} \begin{aligned} L_\delta &= a_{\delta,\kappa_\delta}(u_\delta,u_\delta) - 2a_{\delta,\kappa_\delta}(u_\delta,u) + a_{\rho^p}(u,u) \\&= \underbrace{[a_{\delta,\kappa_\delta}(u_\delta,u_\delta) - 2a_{\delta,\kappa_\delta}(u_\delta,u) + a_{\delta,\kappa_\delta}(u,u)]}_{=:L^{(1)}_\delta} + \underbrace{[a_{\rho^p}(u,u)-a_{\delta,\kappa_\delta}(u,u)]}_{=:L^{(2)}_\delta}. \end{aligned} \end{equation*} Now we have \(L^{(1)}_\delta = a_{\delta,\kappa_\delta}(u_\delta-u,u_\delta-u)\ge 0\) because \(a_{\delta,\kappa_\delta}(\cdot,\cdot)\) is an inner product on \(\mathcal{U}^\delta_0\), and therefore \[L_\delta\ge L_\delta^{(2)},\] and hence in order to prove the result it is enough to show that \[ \liminf_{\delta\to 0} L_\delta^{(2)}\ge 0.\] But, applying Young's inequality to the nonlocal conductivity \[ \rho_\delta^{p/2}(x) \rho_\delta^{p/2}(x') \le \frac{1}{2}\left(\rho_\delta^{p}(x)+\rho_\delta^{p}(x') \right)\] we have that \[ \begin{aligned} -a_{\delta,\kappa_\delta}(u,u) &\ge -\int_{\Omega_{\delta_0}}\int_{\Omega_{\delta_0}} \frac{\rho^{p}_\delta(x)+\rho^p_\delta(x')}{2} A_{\delta}(|x-x'|)\frac{(u(x)-u(x'))^2}{|x-x'|^2}\,\mathrm{d}x\,\mathrm{d}x' \\&= -\int_{\Omega_{\delta_0}}\rho^{p}_\delta(x')\int_{\Omega_{\delta_0}} A_{\delta}(|x-x'|)\frac{(u(x)-u(x'))^2}{|x-x'|^2}\,\mathrm{d}x\,\mathrm{d}x', \end{aligned} \] and the last term converges to \(-a_{\rho^p}(u,u)\) owing to~\cite[Corollary~1]{BBM} and the weak convergence of \(\rho^{p}_\delta\). Consequently \[\liminf_{\delta\to 0} [a_{\rho^p}(u,u)-a_{\delta,\kappa_\delta}(u,u)] \ge 0\] and the proof is finished. 
\end{proof} \subsection{Pointwise convergence: \(\lim_{\delta\to 0} \mathfrak{c}_\delta(\rho) = \mathfrak{c}(\rho)\)} The following result establishes the pointwise convergence of \(\mathfrak{c}_\delta\) to \(\mathfrak{c}\) in \(L^\infty(\Omega_{\delta_0})\). It should be understood as a \(\limsup\)-inequality in \(\Gamma\)-convergence, where the recovery sequence is the constant sequence. In order to prove it we need the following lemma, which establishes that the estimate~\eqref{eq:L2coerc} can be made uniform with respect to small \(\delta > 0\). \begin{lemma}\label{poincare} There exist \(\hat{\delta} \in ]0,\delta_0[\) and a constant \(C_{\hat{\delta}}>0\), independent of \(\delta\) and \(u\), such that the inequality \(C_{\hat{\delta}}\|u\|_{L^2(\Omega)}\le \|u\|_{\mathcal{U}^\delta_0}\) holds for any \(\delta\in ]0,\hat{\delta}[\) and \(u\in \mathcal{U}_0^\delta\). \end{lemma} \begin{proof} For the sake of contradiction, we assume that for each \(j=1,2,\dots\) there is \(\delta_j\in ]0,\delta_0[\), and \(u_j\in \mathcal{U}_0^{\delta_j}\) such that \begin{equation*} \lim_{j\to\infty} \delta_j = 0, \qquad \lim_{j\to\infty} \|u_j\|_{\mathcal{U}^{\delta_j}_0} = 0, \quad\text{but}\quad \|u_j\|_{L^2(\Omega)}=1. \end{equation*} Note that since \(u_j\) is extended by zero outside of \(\Omega_{\delta_j}\), \(u_j \equiv 0\) in \(\Omega_{\delta_0}\setminus \Omega\). When combined with the smallness of the support of \(A_{\delta_j}(|\cdot|)\), see~\eqref{eq:smallsupport}, this implies the equality \begin{equation*} \int_{\Omega_{\delta_0}}\int_{\Omega_{\delta_0}} A_{\delta_j}(|x-x'|) \frac{(u_j(x)-u_j(x'))^2}{|x-x'|^2}\,\mathrm{d}x'\,\mathrm{d}x =a_{\delta_j,1}(u_j,u_j)=\|u_j\|^2_{\mathcal{U}^{\delta_j}_0}. \end{equation*} We can therefore apply~\cite[Theorem~1.2]{ponce2004estimate}, which asserts that the sequence \(u_j\) is relatively compact in \(L^2(\Omega_{\delta_0})\) with all its accumulation points being in \(W^{1,2}(\Omega_{\delta_0})\).
Let \(u_0 \in W^{1,2}(\Omega_{\delta_0})\) be such an \(L^2(\Omega_{\delta_0})\)-accumulation point of \(u_j\); in particular \(\|u_0\|_{L^2(\Omega)}=1\). Since the accumulation point does not depend on any finite number of terms in the sequence \(u_j\), we can utilize the estimate in~\cite[Theorem~1.2]{ponce2004estimate} as follows: \begin{equation*} \int_{\Omega_{\delta_0}}|\nabla u_0(x)|^2\,\mathrm{d}x\le \limsup_{j \to \infty} \{a_{\delta_j,1}(u_j,u_j)\}=0. \end{equation*} Therefore, \(u_0\) must be a constant on \(\Omega_{\delta_0}\). On the other hand we have the pointwise (in fact, finite) convergence \(\lim_{j\to\infty} u_j(x) = 0\), \(x \in \Omega_{\delta_0} \setminus \cl\Omega\), and consequently \(u_0\equiv 0\) on \(\Omega_{\delta_0} \setminus \cl\Omega\). Therefore \(u_0\equiv 0\) on \(\Omega_{\delta_0}\), which contradicts the previously established fact \(\|u_0\|_{L^2(\Omega)} = 1\). \end{proof} \begin{remark} Note that Lemma~\ref{poincare} implies the coercivity of the nonlocal equation~\eqref{eq:w} even in the absence of assumption~\eqref{fractionallb}, as~\cite[Theorem~1.2]{ponce2004estimate} does not require such a condition. However, as we mentioned previously, for our purposes the assumption~\eqref{fractionallb} is a natural hypothesis, as it implies the continuous embedding of \(\mathcal{U}_0^\delta\) into \(W^{s,2}(\Omega_{\delta})\) and therefore also the compact embedding into \(L^2(\Omega_{\delta})\). \end{remark} Before stating the main result of this section we need to define the following class of designs: \begin{equation*} \tilde{\mathcal{A}}^\delta=\bigg\{\, \rho \in \mathcal{A}^\delta\;:\; \rho(x)=\sum_{k=1}^K \alpha_k\chi_{B_k}(x),\, K\in\mathbb{N},\, \alpha_k>0,\, B_k \text{\ open, pairwise disjoint, and such that\ } \overline{\cup_{k=1}^K B_k}\supset \Omega_{\delta}\,\bigg\}.
\end{equation*} Note that \(\tilde{\mathcal{A}}^\delta\) is not just the class of simple functions in \(\mathcal{A}^\delta\), but the class of simple functions \emph{supported} on open sets. This subtle but important restriction is going to be needed to apply the \(\Gamma\)-convergence results of~\cite{ponce2004gamma,bellido2015} in Step~2 of the proof below. \begin{proposition}\label{prop:limit} Consider an arbitrary \(\rho\in \tilde{\mathcal{A}}^{\delta_0}\). Let \(u_\delta = \mathcal{S}_\delta(\kappa) \in \mathcal{U}_0^\delta\) be the sequence of solutions to~\eqref{eq:w} corresponding to a fixed \(f \in L^2(\Omega)\) and \(\kappa(x,x')= \rho^{p/2}(x)\rho^{p/2}(x')\), but varying \(\delta \to 0\). Let further \(u = \mathcal{S}^{\text{loc}}(\rho^{p}) \in W^{1,2}_0(\Omega)\) be the weak solution to the local generalized Laplace problem~\eqref{eq:w_loc}. Then \begin{equation*} \lim_{\delta\to 0} \|u_\delta - u\|_{L^2(\Omega)} = 0, \text{\ and\ } \lim_{\delta\to 0} a_{\delta,\kappa}(u_\delta,u_\delta) = a_{\rho^p}(u,u). \end{equation*} Consequently, \begin{equation*} \lim_{\delta\to 0} \mathfrak{c}_\delta(\rho)= \mathfrak{c}(\rho), \end{equation*} for any \(\rho \in \tilde{\mathcal{A}}^{\delta_0}\). \end{proposition} \begin{proof} Let \(\hat{\delta} \in ]0,\delta_0[\) and \(C_{\hat{\delta}}>0\) be those established in Lemma~\ref{poincare}.
Since \(u_\delta = \mathcal{S}_\delta(\kappa)\) solves~\eqref{eq:w}, for all \(\delta \in ]0,\hat{\delta}[\) we get the estimate \begin{equation*} C_{\hat{\delta}}\underline{\kappa} \| u_\delta\|^2_{L^2(\Omega)} \le \underline{\kappa} a_{\delta,1}(u_\delta,u_\delta) \le a_{\delta,\kappa}(u_\delta,u_\delta) \le \|f\|_{L^2(\Omega)}\|u_\delta\|_{L^2(\Omega)}, \end{equation*} and consequently the uniform stability estimates \begin{equation*} \begin{aligned} \| u_\delta\|_{L^2(\Omega_{\hat{\delta}})} = \| u_\delta\|_{L^2(\Omega)}\le\frac{1}{C_{\hat{\delta}}\underline{\kappa}} \|f\|_{L^2(\Omega)},\quad\text{and}\quad \| u_\delta\|_{\mathcal{U}^\delta_0} \le \frac{1}{C_{\hat{\delta}}^{1/2}\underline{\kappa}}\|f\|_{L^2(\Omega)}. \end{aligned} \end{equation*} Utilizing~\cite[Theorem~1.2]{ponce2004estimate} as in Lemma~\ref{poincare}, we establish the existence of an \(L^2(\Omega_{\hat{\delta}})\)-accumulation point \(u_0 \in W^{1,2}_0(\Omega)\) of \(u_\delta\). Let \(\delta_j\), \(j=1,2,\dots\) be a sequence realizing convergence towards this accumulation point, that is \(\lim_{j\to\infty} \delta_j = 0\) and \(\lim_{j\to\infty} \|u_{\delta_j}-u_0\|_{L^2(\Omega)}=0\). As mentioned previously, in the following discussion both \(u_0\) and \(u\), the solution of the limiting local problem, are extended by \(0\) outside of \(\Omega\). \noindent\textbf{Step 1.} We claim that \begin{equation}\label{step1} \limsup_{j\to \infty}I_{\delta_j,\kappa}(u_{\delta_j})\le I_{\rho^p}(u). \end{equation} Indeed, owing to~\eqref{eq:kernelnrm} and~\cite[Theorem~1]{BBM} we have the bound \(\|u\|_{\mathcal{U}^{\delta_j}_0} \le C \|u\|_{W^{1,2}(\Omega)}\) and therefore also the inclusion \(u\in \mathcal{U}^{\delta_j}_0\). Consequently, owing to the variational characterization~\eqref{eq:nl_dir_pple} we have the inequalities \(I_{\delta_j,\kappa}(u_{\delta_j})\le I_{\delta_j,\kappa}(u)\), for each \(\delta_j\), \(j=1,2,\dots\). 
Therefore, in order to establish~\eqref{step1} it is sufficient to show the inequality \(\limsup_{j\to \infty} a_{\delta_j,\kappa}(u,u) \le a_{\rho^p}(u,u)\). In fact, a direct application of Proposition~\ref{prop:LI} to the constant sequence \(\rho\) already gives \[\liminf_{j\to \infty} \left(-2I_{\delta_j,\kappa}(u_{\delta_j})\right)\ge \left(-2 I_{\rho^p}(u)\right)\] and therefore \eqref{step1} holds. \noindent\textbf{Step 2.} We claim that \begin{equation*} I_{\rho^p}(u_0) \le \liminf_{j\to\infty} I_{\delta_j, \kappa}(u_{\delta_j}). \end{equation*} Note that owing to the strong \(L^2\) convergence \(\lim_{j\to\infty} \|u_{\delta_j}-u_0\|_{L^2(\Omega)}=0\) it is sufficient to prove the inequality \begin{equation*} a_{\rho^p}(u_0,u_0) \le \liminf_{j\to\infty} a_{\delta_j, \kappa}(u_{\delta_j},u_{\delta_j}). \end{equation*} Let us recall that \(\rho\in\tilde{\mathcal{A}}^{\delta_0}\) is a simple function supported on open sets, that is \[\rho(x)=\sum_{i=1}^I \alpha_i \chi_{B_i}(x),\] where \(I\in\mathbb{N}\), \(\alpha_i>0\), and the sets \(B_i\) are open and pairwise disjoint with \(\cl(\cup_{i=1}^I B_i) \supset \Omega_{\delta_0}\). Then \begin{equation*} \begin{aligned} a_{\delta_j,\kappa}(u_{\delta_j},u_{\delta_j}) &= \int_{\Omega_{\delta_0}} \int_{\Omega_{\delta_0}} \rho^{p/2}(x) \rho^{p/2}(x') A_{\delta_j}(|x-x'|)\frac{(u_{\delta_j}(x)-u_{\delta_j}(x'))^2}{|x-x'|^{2}}\,\mathrm{d}x'\,\mathrm{d}x \\ &= \sum_{i,l=1}^I \alpha_i^{p/2}\alpha_l^{p/2} \int_{B_i}\int_{B_l} A_{\delta_j}(|x-x'|)\frac{(u_{\delta_j}(x)-u_{\delta_j}(x'))^2}{|x-x'|^{2}}\,\mathrm{d}x'\,\mathrm{d}x \\ &\ge \sum_{i=1}^I \alpha_i^p \int_{B_i}\int_{B_i} A_{\delta_j}(|x-x'|)\frac{(u_{\delta_j}(x)-u_{\delta_j}(x'))^2}{|x-x'|^{2}}\,\mathrm{d}x'\,\mathrm{d}x.
\end{aligned} \end{equation*} Applying the \(\Gamma\)-convergence results in \cite{ponce2004gamma,bellido2015} we conclude that \begin{equation}\label{eq:lbu0} \liminf_{j\to\infty} \int_{B_i}\int_{B_i} A_{\delta_j}(|x-x'|)\frac{(u_{\delta_j}(x)-u_{\delta_j}(x'))^2}{|x-x'|^{2}}\,\mathrm{d}x'\,\mathrm{d}x \ge \int_{B_i}|\nabla u_0(x)|^2\,\mathrm{d}x. \end{equation} Summing up these inequalities and recalling that \(u_0\) vanishes outside \(\Omega\) we can conclude that \begin{equation*} \liminf_{j\to\infty} a_{\delta_j,\kappa}(u_{\delta_j},u_{\delta_j})\ge a_{\rho^p}(u_0,u_0). \end{equation*} Since \(\lim_{j\to\infty} \|u_{\delta_j}-u_0\|_{L^2(\Omega)}=0\) this is sufficient to conclude the proof of Step~2 for simple functions \(\rho\). Notice that the requirement that the sets \(B_i\) be open is necessary in order to apply the results in \cite{ponce2004gamma,bellido2015}. \noindent\textbf{Step 3: Conclusion.} Combining the inequalities obtained in Steps~1 and~2 we obtain the following string of inequalities: \begin{equation*} \limsup_{j\to\infty} \underbrace{I_{\delta_j,\kappa}(u_{\delta_j})}_{=-0.5\mathfrak{c}_{\delta_j}(\rho)} \le I_{\rho^p}(u) \le \underbrace{I_{\rho^p}(u_0)}_{=-0.5\mathfrak{c}(\rho)} \le \liminf_{j\to\infty} \underbrace{I_{\delta_j,\kappa}(u_{\delta_j})}_{=-0.5\mathfrak{c}_{\delta_j}(\rho)}. \end{equation*} The variational characterization of the local problem~\eqref{eq:loc_dir_pple} and the uniqueness of solutions to~\eqref{eq:w_loc} imply that \(u=u_0\). Therefore the family of solutions \(\{u_{\delta}\}_{0<\delta<\hat{\delta}}\), which is relatively compact in \(L^2(\Omega_{\hat{\delta}})\), has only one accumulation point, and the sequence \(\delta_j\), \(j=1,2,\dots\) selected in the beginning of the proof is in fact arbitrary. This finishes the proof of the proposition. \end{proof} The conclusions of Proposition~\ref{prop:limit} hold in fact for an even larger class of material distributions than \(\tilde{\mathcal{A}}^{\delta_0}\).
\begin{corollary}\label{cor:limit} Suppose that \(\rho\in \mathcal{A}^{\delta_0}\) is such that there exists a sequence \(\{\rho_i\}_i\subset \tilde{\mathcal{A}}^{\delta_0}\) with \(\rho_i(x) \leq \rho(x)\) and \(\lim_{i\to \infty }\rho_i(x)=\rho(x)\) for almost all \(x\in\Omega_{\delta_0}\). Then the conclusion of Proposition~\ref{prop:limit} holds for this \(\rho\). \end{corollary} \begin{proof} Note that Steps~1 and~3 of the proof above do not utilize the simple structure of \(\rho\). Thus, it is only necessary to amend Step~2 of the proof, which we do here. Owing to our assumptions, there exists a sequence of non-negative simple functions \(\rho_{i}\), which approximates \(\rho\) almost everywhere in \(\Omega_{\delta_0}\) from below. Let \(\kappa_{i}(x,x')=\rho_{i}^{p/2}(x)\rho_{i}^{p/2}(x')\). Then \(\kappa_i(x,x') \leq \kappa(x,x')\) for almost all \((x,x')\in \Omega_{\delta_0}^2\). Therefore, in view of Proposition~\ref{prop:limit}, for each \(i=1,2,\dots\) we can write: \begin{equation*} \liminf_{j\to\infty} a_{\delta_j,\kappa}(u_{\delta_j},u_{\delta_j}) \ge \liminf_{j\to\infty} a_{\delta_j,\kappa_i}(u_{\delta_j},u_{\delta_j}) \ge a_{\rho_i^p}(u_0,u_0). \end{equation*} It remains to take the limit with respect to \(i\to\infty\) and utilize Lebesgue's dominated convergence theorem to reach the inequality claimed in Step~2 of the proof of Proposition~\ref{prop:limit}. \end{proof} \begin{remark} Note that one can approximate an arbitrary bounded and measurable \(\rho\) pointwise with a non-decreasing sequence of \emph{simple} functions, see~\cite[Theorem~1.17]{rudin}. Unfortunately, in Corollary~\ref{cor:limit} we need these simple functions to additionally be \emph{supported} on open, and not just measurable, sets \(B_i\). Therefore we have not succeeded in showing the result for a general \(\rho\in \mathcal{A}^{\delta_0}\). We actually believe, and conjecture, that Corollary~\ref{cor:limit} is true without this additional restriction.
\end{remark} \subsection{Discussion of \(\Gamma\)-convergence} General \(\Gamma\)-convergence theory establishes that \(\Gamma\)-convergence together with equi-coercivity implies both convergence of infima to the minimum of the limit problem, and that any cluster point of any sequence of minimizers is a minimizer of the limit problem \cite{braides_beginners}. As we have pointed out above, the unfortunate fact that the convergence of the functional arguments in the \(\liminf\) result, Proposition~\ref{prop:LI}, is not weak\(^*\) convergence in \(L^\infty(\Omega_{\delta_0})\) spoils the possibility of a general \(\Gamma\)-convergence result for any \(p\in]1,2]\). Furthermore, the local compliance for \(p\in ]1,2]\) is not lower semicontinuous and consequently cannot be a \(\Gamma\)-limit. In addition, the restrictive hypothesis on the admissible material distributions in the statement of Corollary~\ref{cor:limit} rules out the possibility of obtaining a general \(\Gamma\)-convergence result even for \(p=1\). Still, we can point out a few inequalities characterizing the relationship between the non-local and the local optimal control problems in the convex case \(p=1\). On the one hand, for each \(\delta>0\) we can take a minimizer \(\rho_\delta\) for the non-local compliance minimization. Clearly the sequence \(\rho_\delta\) is bounded in \(L^\infty(\Omega_{\delta_0})\). Therefore, there exists a sequence \(\delta_j\), with \(\delta_j\to 0\), and \(\rho\in \mathcal{A}\), such that \(\rho_{\delta_j}\rightharpoonup \rho\) weak\(^*\) in \(L^\infty(\Omega_{\delta_0})\).
Taking into account that \(p=1\), Proposition~\ref{prop:LI} implies \[m\le \mathfrak{c}(\rho)\le \liminf_{j\to \infty} \mathfrak{c}_{\delta_j}(\rho_{\delta_j}) = \liminf_{j\to \infty} m_{\delta_j},\] where \[m=\inf_{\rho\in \mathcal{A}} \mathfrak{c}(\rho), \quad m_\delta =\min_{\rho \in \mathcal{A}^\delta} \mathfrak{c}_{\delta}(\rho).\] As this argument can be made for any sequence $\delta_j$ converging to zero, we have that \begin{equation}\label{eq:infimaineq} m\le \liminf_{\delta\to 0} m_\delta.\end{equation} Furthermore, if \(\inf_{\rho\in \mathcal{A}} \mathfrak{c}(\rho)\) is attained at \(\hat{\rho}\in \mathcal{A}\) satisfying the assumptions of Corollary~\ref{cor:limit}, then the inequality~\eqref{eq:infimaineq} becomes equality: \begin{equation*} m= \mathfrak{c}(\hat{\rho}) = \lim_{j\to \infty} \mathfrak{c}_{\delta_j}(\hat{\rho}) \geq \limsup_{j\to \infty} m_{\delta_j}, \end{equation*} for any \(\delta_j\to 0\). Finally, if Corollary~\ref{cor:limit} were true for any \(\rho\in \mathcal{A}^{\delta_0}\), then we would have that \(\mathfrak{c}_\delta\) \(\Gamma\)-converges to \(\mathfrak{c}\) as \(\delta\) goes to 0. Indeed, weak\(^*\) topology is metrizable on bounded sets of \(L^\infty(\Omega_{\delta_0})\) (cf.~\cite[Theorem~3.28]{brezis}). Therefore \(\mathcal{A}^{\delta_0}\) equipped with the weak\(^*\) topology is a metric space, and therefore \(\Gamma\)-convergence requires only two facts to hold, namely, limsup and liminf inequalities. These would then be direct consequences of Proposition~\ref{prop:LI} and Corollary~\ref{cor:limit}, respectively. \section{Numerical experiments} \label{sec:num} The objective of this section is to numerically illustrate the behaviour of the proposed optimization model, with emphasis on the results established in the previous sections. 
All our numerical experiments are performed with \(n=2\), \(\Omega=]0,1[^n\), \(A_\delta(|x|) = c_{\text{nrm}}|x|^{-(n+2s-2)}\max\{0,\delta^2-|x|^2\}^\beta\), with \(\beta = 3.0\) and \(c_{\text{nrm}}\) determined from~\eqref{eq:kernelnrm}. Additionally, we use \(\underline{\rho} = 10^{-3}\), \(\overline{\rho} = 1.0\), \(s \in \{1/3,2/3\}\), \(\delta \in \{0.05,0.1,0.2\}\), and \(p \in \{1,2\}\). \subsection{Galerkin FEM discretization of the state equations} The variational formulation~\eqref{eq:w} with a symmetric and coercive bilinear form \(a_{\delta,\kappa}(\cdot,\cdot)\) naturally lends itself to an application of the Galerkin method. In our numerical experiments we only consider polyhedral sets \(\Omega\), and therefore we proceed in the standard fashion by decomposing \(\Omega_{\delta}\) into a union of shape-regular simplices \(\Omega_{\delta}^h\), where \(h>0\) will denote a characteristic size (diameter) of the elements in our mesh. We make sure that \(\Omega_{\delta}^h\) conforms with the subdivision of \(\Omega_{\delta}\) into \(\Omega\) and \(\Gamma^{\text{nl}}\), see Fig.~\ref{fig:mesh}. \begin{figure}\label{fig:mesh} \end{figure} \begin{figure} \caption{ Left: reference elements \(T_1\) (red), and elements \(T_2\) (orange), participating in the pre-computation of the integrals. Note that only half of the elements in the proximity of the reference cell need to be considered owing to the symmetry of the bilinear form with respect to the change of the integration variables \(x \leftrightarrow x'\). Right: contour plot of \(A_{\delta}(|\cdot-x|)\) for \(x\) located at the barycenter of one of the reference elements. \(\delta = 0.2\) and \(s=1/3\) are used. } \label{fig:mesh2} \end{figure} Functions in \(\mathcal{U}\) will be approximated with continuous piecewise-linear polynomials \(u^h \in \mathcal{U}^h = \{\, v^h \in \mathcal{U} \cap C^0(\bar{\Omega}_\delta) : v^h|_T \text{\ is a linear polynomial}, \forall T \in \Omega_{\delta}^h\,\}\).
Naturally we put \(\mathcal{U}^h_0 = \mathcal{U}^h \cap \mathcal{U}_0\), which leads us to the following discrete variational principle (system of linear algebraic equations): find \(u_h \in \mathcal{U}^h_0\), such that \begin{equation}\label{eq:wh} a_{\delta,\kappa}(u_h,v_h) = \ell(v_h), \quad \forall v_h \in \mathcal{U}_0^h. \end{equation} The assembly process for the right hand side of this system is completely standard, whereas in order to assemble the left hand side we need to loop over all \emph{pairs} \((T_1,T_2)\in \Omega_{\delta}^h\times \Omega_{\delta}^h\) of elements in the mesh that are no further than \(2\delta\) from each other, and compute the local integral contribution to \( a_{\delta,\kappa}(u_h,v_h)\), that is, the integral over \(T_1 \times T_2\). Note that when \(\bar{T}_1 \cap \bar{T}_2\neq\emptyset\), the integrand is unbounded; even when \(\bar{T}_1 \cap \bar{T}_2=\emptyset\) the integrand is not a polynomial function. In our implementation we utilize the quadratures described in~\cite{chernov2015quadrature}, which are tailored for a nearly identical situation.\footnote{ It should be noted that our integrands do not always satisfy the assumptions imposed in~\cite{chernov2015quadrature} as the terms \((u(x)-u(x'))/|x-x'|\) and \((v(x)-v(x'))/|x-x'|\) are only bounded and not continuous across the boundaries \(\bar{T}_1 \cap \bar{T}_2\).} In order to avoid committing a variational crime by not integrating the bilinear form precisely, we first estimate how many quadrature points we need for accurate integration; the results are reported in Fig.~\ref{fig:integration}. Despite the fact that the assumptions imposed in~\cite{chernov2015quadrature} are not always satisfied, we observe exponential convergence of the quadratures.
However, note the unusual scaling of the \(x\)-axis; in the most singular case \(k=2\), corresponding to \(T_1=T_2\), we need approximately \(15^5 = 759375\) quadrature points (when using Gauss--Jacobi quadrature in the singular direction, see~\cite{chernov2015quadrature} for details) to achieve nearly full IEEE double precision accuracy before the round-off errors start to play a role. \begin{figure} \caption{ Verification of the exponential convergence rate of the quadratures~\cite{chernov2015quadrature}, which we use for computing elemental stiffness matrices; \(s=1/3\) in this simulation. \(k\) is the dimension of the intersection \(\bar{T}_1 \cap \bar{T}_2\); \(k=-1\) means \(\bar{T}_1 \cap \bar{T}_2=\emptyset\) and therefore the integrand is regular; see \cite{chernov2015quadrature} for details. (a): \(k=-1\) and \(\diam(\bar{T}_1 \cup \bar{T}_2)<\delta\); (b): \(k=-1\) and \(\diam(\bar{T}_1 \cup \bar{T}_2)>\delta\); (c): \(k=0\); (d): \(k=1\); (e): \(k=2\). } \label{fig:integration} \end{figure} Because of the high cost of elemental integration, and because the number of integrals in a quasi-uniform grid grows as \(O(\delta^n h^{-2n})\), we focus on regular grids (see Fig.~\ref{fig:mesh}). In this setting we only need to evaluate integrals for a fixed ``reference'' \(T_1\) and varying \(T_2\), thereby bringing the number of integrals down to \(O(\delta^n h^{-n})\), as shown in Fig.~\ref{fig:mesh2}. Even with this preprocessing, both the work and memory requirements for the global matrix assembly scale as \(O(\delta^n h^{-2n})\). Putting this into perspective, for Grid2 with \(h=2^{1/2}/320\) (i.e., each side of the unit square \(\Omega\) is discretized with \(320\) elements) we need approximately \(4.7\)~MB to store the precomputed integrals and approximately \(3.87\cdot 10^3\)~MB to store the assembled matrix.
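The translation-invariance argument behind this reduction can be illustrated with a rough count (a sketch with illustrative values of \(h\) and \(\delta\), not the exact bookkeeping of our implementation): on a regular grid the elemental integral over \(T_1\times T_2\) depends only on the integer offset between the two cells, so only the offsets reaching up to \(2\delta\) need to be precomputed.

```python
import numpy as np

h, delta = 1 / 64, 0.2
cells_per_side = round(1 / h)

# Offsets within reach of the interaction radius 2*delta: O(delta^n h^-n) entries.
max_off = int(np.ceil(2 * delta / h))
offsets = [(i, j)
           for i in range(-max_off, max_off + 1)
           for j in range(-max_off, max_off + 1)
           if np.hypot(i, j) * h <= 2 * delta]

# The assembly loop still visits roughly one pair per (cell, offset) combination,
# i.e. O(delta^n h^-2n) pairs in total.
n_pairs_upper = cells_per_side ** 2 * len(offsets)

# Sanity check: the offset count matches the area pi*(2*delta/h)^2 of the
# interaction disc up to the usual lattice boundary error.
disc_area = np.pi * (2 * delta / h) ** 2
assert abs(len(offsets) - disc_area) < 0.15 * disc_area
assert len(offsets) < n_pairs_upper
```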
Direct solvers such as UMFPACK~\cite{davis2004algorithm} quickly run out of memory for problems with \(\delta=0.1\), and we switch to the CG-accelerated Ruge--Stuben AMG solver PyAMG~\cite{OlSc2018} (even smoothed aggregation demands too much memory and computation). \subsubsection{\(h\)-convergence test} In order to test the code, we use the method of manufactured solutions, see e.g.~\cite{roache2002code}. We put \(\delta=0.1\), \(s\in\{1/3,2/3\}\), \(\kappa\equiv 1\), and let the analytical solution be \(u_{\text{ana}}(x,y) = [x(1-x)y(1-y)]^2 \sin(2\pi(x+y^2))\), when \(x,y \in \Omega\), and zero otherwise. The corresponding right-hand side can be (numerically) computed as \begin{equation}\label{eq:anarhs} f(x,y) = -2\lim_{\epsilon\to 0}\int_{\epsilon}^\delta \int_{0}^{2\pi} A_{\delta}(r)\frac{u_{\text{ana}}(x+r\cos(\theta),y+r\sin(\theta))-u_{\text{ana}}(x,y)}{r} \,\mathrm{d}\theta\,\mathrm{d}r, \end{equation} which is evaluated using the standard adaptive quadrature package in SciPy. The results of this test are shown in Fig.~\ref{fig:hconv}. In both cases we do observe convergence, although it is difficult to say whether we reach the asymptotic convergence rate (we run the simulations until we run out of memory), or indeed whether the adaptive quadratures evaluate~\eqref{eq:anarhs} accurately enough. \begin{figure} \caption{ Verification of convergence of the discretization (and the numerical implementation) with respect to \(h\)-refinement. Left: \(s=1/3\), right: \(s=2/3\). The relative error is measured with respect to the \(L^2(\Omega_{\delta})\) norm. } \label{fig:hconv} \end{figure} \subsubsection{``\(\Gamma\)-convergence'' test} Another test we perform is that of ``numerical \(\Gamma\)-convergence'', that is, we try to illustrate Proposition~\ref{prop:limit}.
Namely, we put \(s=1/3\), \(\rho(x,y)= \underline{\rho} +(\overline{\rho}-\underline{\rho}) \exp\{-[(x-m_x)^2+(y-m_y)^2]/\sigma\}\) with \((m_x,m_y) = (1/2,2/3)\) and \(\sigma = 0.1\), \(\kappa\) computed from \(\rho\) using the SIMP model with \(p=2\), and \(u_{\text{ana}}(x,y) = \sin(2\pi x) \sin(\pi y)\), for \(x,y \in \Omega\), and zero otherwise. We compute \(f = -\diver[\rho^p \nabla u_{\text{ana}}]\), and solve a variety of non-local problems with varying \(\delta\) and \(h\) on Grid2. The results are summarized in Fig.~\ref{fig:deltaconv}. The main observation is that the quantity \(e(\delta)= \|u_{\text{ana}}-\lim_{h\to 0} u_{\delta,h}\|_{L^2(\Omega)}\), where \(u_{\delta,h}\) is the numerical solution to the discretized non-local problem~\eqref{eq:wh}, appears to decrease with \(\delta\). Unsurprisingly, one needs finer meshes to resolve non-local problems with smaller values of \(\delta\). \begin{figure} \caption{ Numerical verification of \(\Gamma\)-convergence. The error is measured as the \(L^2(\Omega)\)-norm of the difference between the numerical solution of the non-local problem and the analytical solution of the local problem. } \label{fig:deltaconv} \end{figure} \subsection{Solving the optimization problem} In our implementation we solve a discretized version of the optimization problem~\eqref{eq:min} with one minor discrepancy: instead of the constraint \(\int_{\Omega} \rho(x)\,\mathrm{d}x \le \gamma|\Omega|\) in~\eqref{eq:adm} we have implemented the constraint \(\int_{\Omega_{\delta}} \rho(x)\,\mathrm{d}x \le \gamma|\Omega|\). We focus exclusively on compliance minimization, that is, \(J(\rho,u)=\ell(u)\), and we employ the so-called optimality criterion (OC) scheme for solving these problems. This choice is primarily owing to the popularity of OC in the topology optimization community, and any other gradient-based non-linear constrained optimization algorithm could be utilized in its place.
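The right-hand side used in this test is straightforward to generate symbolically; the SymPy sketch below reproduces the construction of \(f=-\diver[\rho^p\nabla u_{\text{ana}}]\) with the parameters just listed, and checks the constant-coefficient reduction \(-\Delta u_{\text{ana}} = 5\pi^2 u_{\text{ana}}\) as a sanity test.

```python
import sympy as sp

x, y = sp.symbols('x y')
rho_lo, rho_hi = sp.Rational(1, 1000), 1            # underline/overline rho
sigma, p = sp.Rational(1, 10), 2
mx, my = sp.Rational(1, 2), sp.Rational(2, 3)

# Material distribution, SIMP conductivity, and manufactured solution:
rho = rho_lo + (rho_hi - rho_lo) * sp.exp(-((x - mx)**2 + (y - my)**2) / sigma)
kappa = rho**p
u = sp.sin(2 * sp.pi * x) * sp.sin(sp.pi * y)

# f = -div(kappa * grad u), expanded componentwise:
f = -(sp.diff(kappa * sp.diff(u, x), x) + sp.diff(kappa * sp.diff(u, y), y))

# Sanity check: with kappa == 1 this reduces to -Laplace(u) = 5*pi^2*u.
f_const = -(sp.diff(u, x, 2) + sp.diff(u, y, 2))
assert sp.simplify(f_const - 5 * sp.pi**2 * u) == 0
```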
Within the OC scheme, given a current material distribution \(\rho_k\) we first compute the corresponding state \(u_{\rho_k}\) by solving the discretized system~\eqref{eq:wh}, and then the derivatives of the non-local compliance \(\mathfrak{c}_\delta\) as \begin{equation*} \begin{aligned} \mathfrak{c}_\delta'(\rho_k;\xi) = -\frac{p}{2}\int_{\Omega_{\delta}}\int_{\Omega_{\delta}} [\rho_k^{p/2-1}(x)\rho_k^{p/2}(x')\xi(x) +\rho_k^{p/2}(x)\rho_k^{p/2-1}(x')\xi(x')] A_{\delta}(|x-x'|)\frac{(u_{\rho_k}(x)-u_{\rho_k}(x'))^2}{|x-x'|^2}\,\mathrm{d}x'\,\mathrm{d}x. \end{aligned} \end{equation*} The new material distribution is defined by a simple pointwise update scheme \begin{equation*} \rho_{k+1} = \pi_{B_k}[\rho_k (-\nabla \mathfrak{c}_\delta(\rho_k)/\lambda_{k+1})^\xi], \end{equation*} where \(\pi_{B_k}[\cdot]\) is a projection operator onto the closed, convex, and non-empty set \(B_k = \{\, \rho \in L^2(\Omega_{\delta}) \mid \max[\underline{\rho},(1-\eta)\rho_k] \leq \rho \leq \min[\overline{\rho},(1+\eta)\rho_k]\,\}\), \(\eta>0\) and \(\xi \in (0,1)\) are trust-region-like and damping parameters, respectively, and \(\nabla \mathfrak{c}_\delta(\rho_k)\) is the \(L^2(\Omega_\delta)\) representation of the directional derivatives \(\mathfrak{c}_\delta'(\rho_k;\cdot)\). Finally, \(\lambda_{k+1}\) is computed by finding the root of the equation \(\int_{\Omega_{\delta}} \rho_{k+1}(x) \,\mathrm{d}x = \gamma|\Omega|\) using, for example, the bisection algorithm. We put \(\eta = 0.2\), \(\xi = 0.5\), and stop the algorithm when \(\|\rho_{k+1}-\rho_k\|_{L^2(\Omega_{\delta})} < 10^{-4}\). For more details see~\cite{bendose2003topology}. \subsubsection{Convex case: \(p=1\)} In the ``easy'' convex case corresponding to \(p=1\) we can expect that the optimization algorithm computes approximations to (discretized approximations of) globally optimal solutions.
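A minimal self-contained sketch of one OC step is given below (Python/NumPy, with synthetic sensitivities standing in for the assembled derivative of the non-local compliance; the helper `oc_update` is ours, not part of any library). It combines the pointwise update, the projection onto \(B_k\), and the bisection for \(\lambda_{k+1}\).

```python
import numpy as np

def oc_update(rho, dc, cell_vol, vol_target, rho_min=1e-3, rho_max=1.0,
              eta=0.2, xi=0.5, bisect_tol=1e-12):
    """One OC step: rho_new = proj_{B_k}[rho * (-dc/lam)**xi], where lam is
    found by bisection so the volume constraint holds with equality.
    dc is the (non-positive) L2 gradient of compliance on each cell."""
    lower = np.maximum(rho_min, (1 - eta) * rho)  # move limits / box bounds
    upper = np.minimum(rho_max, (1 + eta) * rho)
    lo, hi = 1e-9, 1e9
    while hi - lo > bisect_tol * (hi + lo):
        lam = 0.5 * (lo + hi)
        rho_new = np.clip(rho * (-dc / lam) ** xi, lower, upper)
        if np.sum(rho_new * cell_vol) > vol_target:
            lo = lam   # too much material: increase the multiplier
        else:
            hi = lam
    return rho_new

# Toy data: 100 cells of equal volume, synthetic negative sensitivities.
rng = np.random.default_rng(0)
rho0 = np.full(100, 0.5)
dc = -rng.uniform(0.5, 2.0, 100)
vol = np.full(100, 0.01)           # total volume 1

rho1 = oc_update(rho0, dc, vol, vol_target=0.5)
assert abs(np.sum(rho1 * vol) - 0.5) < 1e-6        # volume constraint active
assert np.all(rho1 >= 1e-3 - 1e-12) and np.all(rho1 <= 1.0 + 1e-12)
```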
The computed local conductivities \(\kappa^{\text{loc}}=\rho\) and the corresponding states for several values of \(\delta\) are shown in Figure~\ref{fig:p1opt}. Note that in this case the local compliance minimization problem~\eqref{eq:min_loc} admits globally optimal solutions, which are also shown in Figure~\ref{fig:p1opt} with the corresponding state. The qualitative resemblance between the solutions to the non-local and local problems is clear from these pictures. Additionally, there appears to be a quantitative connection between the two problems, as illustrated in Table~\ref{tbl:p1}. Note that the number of optimization iterations needed to solve the problem is virtually independent of the value of the non-local horizon \(\delta\) or the size of the mesh \(h\). Still, each iteration of the optimization algorithm, which requires solving the discretized state equations, becomes significantly more costly for larger \(\delta>0\). \begin{table} \centering \begin{tabular}{rrrr} \(\delta\) & \(h\) & \(J^*\) & \(N\)\\ \hline \(0.2\) & \(3.54\cdot 10^{-2}\) & \(9.0727 \cdot 10^{-2}\) & \(26\)\\ \(0.2\) & \(1.77\cdot 10^{-2}\) & \(9.1778 \cdot 10^{-2}\) & \(27\)\\ \(0.2\) & \(8.84\cdot 10^{-3}\) & \(9.2408 \cdot 10^{-2}\) & \(27\)\\ \(0.2\) & \(4.42\cdot 10^{-3}\) & \(9.2798 \cdot 10^{-2}\) & \(27\)\\ \hline \(0.1\) & \(3.54\cdot 10^{-2}\) & \(7.8627 \cdot 10^{-2}\) & \(28\)\\ \(0.1\) & \(1.77\cdot 10^{-2}\) & \(7.9479 \cdot 10^{-2}\) & \(32\)\\ \(0.1\) & \(8.84\cdot 10^{-3}\) & \(7.9930 \cdot 10^{-2}\) & \(32\)\\ \(0.1\) & \(4.42\cdot 10^{-3}\) & \(8.0218 \cdot 10^{-2}\) & \(32\)\\ \hline \(0.05\) & \(3.54\cdot 10^{-2}\) & \(7.3251 \cdot 10^{-2}\) & \(29\)\\ \(0.05\) & \(1.77\cdot 10^{-2}\) & \(7.3896 \cdot 10^{-2}\) & \(34\)\\ \(0.05\) & \(8.84\cdot 10^{-3}\) & \(7.4276 \cdot 10^{-2}\) & \(36\)\\ \(0.05\) & \(4.42\cdot 10^{-3}\) & \(7.4484 \cdot 10^{-2}\) & \(35\)\\ \hline \(0\) & \(3.13\cdot 10^{-3}\) & \(6.9460 \cdot 10^{-2}\) & \(38\)\\ \hline \end{tabular}
\caption{Summary of the results for the convex case \(p=1\). \(J^*\): optimal value; \(N\): number of OC iterations. \(\delta=0\) corresponds to the limiting local model (solved using a separate code), which admits optimal solutions in the convex case \(p=1\), see~\cite{cea1970example,bendose2003topology,allaire2012shape}.} \label{tbl:p1} \end{table} \begin{figure}\label{fig:p1opt} \end{figure} \subsubsection{Nonconvex case: \(p=2\)} A significantly more interesting case, in view of the very different behavior of the non-local and local problems, corresponds to the non-convex optimization problem~\eqref{eq:min} with \(p=2\). In this case, for the local problem the intermediate values of the conductivity are effectively penalized by the underlying physics, and one can expect the computed optimized conductivity distributions to be of ``bang-bang'' structure, assuming either the lowest or the highest possible value of conductivity everywhere. Note also that in this case the local optimization problem~\eqref{eq:min_loc} does not admit optimal solutions, and therefore we have no local solution to compare with. The computed local conductivities \(\kappa^{\text{loc}}=\rho^p\) and the corresponding states for several values of \(\delta\) are shown in Figure~\ref{fig:p2opt}. They agree with our expectations. One can also note that for smaller \(\delta\) the computed conductivity distributions display smaller features and a progressively more oscillatory character. Indeed, in the case of the local problem, minimizing sequences consist of conductivities which are locally highly oscillatory (periodic) and can be mathematically understood as converging to composite materials, see~\cite{bendose2003topology,allaire2012shape}. Quantitative information about these solutions is shown in Table~\ref{tbl:p2}.
Note that for smaller \(\delta\) the optimization algorithm requires significantly more iterations to converge (which does not mean that the problem is more difficult to solve, as each iteration is less computationally expensive in this case). The non-existence of optimal solutions for the local compliance minimization problem manifests itself numerically as ``mesh-dependence'' of optimal designs, where progressively more oscillatory conductivity distributions are encountered as one refines the computational mesh. This is often used as an ``engineering'' test of the existence of optimal solutions for a given problem. The optimized conductivity distributions corresponding to \(\delta=0.2\) and a range of discretization levels are shown in Figure~\ref{fig:p2meshindep}. One can recognize a convergent sequence of shapes as the discretization is refined. Admittedly, this behavior is somewhat fortuitous: as the problem is non-convex, the optimization algorithm may end up in different local minima at different discretization levels, and we have indeed observed such behavior for other values of \(\delta\). Finally, we perform a ``cross-check'' of the computed non-local designs by evaluating all computed designs for all values of \(\delta\). We expect (but of course cannot guarantee, as we can only hope to find locally optimal solutions and not global ones) that the distribution optimized for a specific value of \(\delta\) outperforms the designs computed for the other values of \(\delta\). The results of this test are shown in Table~\ref{tbl:p2_crosscheck}. Indeed, our expectations are confirmed: within each row the diagonal element is the smallest one.
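This claim can be checked mechanically against the values reported in Table~\ref{tbl:p2_crosscheck}:

```python
# Rows: the delta a design was optimized for (0.05, 0.1, 0.2); columns: the
# delta it is evaluated at. Values copied from Table tbl:p2_crosscheck.
J = [[1.1185e-1, 1.1724e-1, 1.3225e-1],   # design for delta = 0.05
     [1.3954e-1, 1.3560e-1, 1.4172e-1],   # design for delta = 0.1
     [1.9678e-1, 1.8213e-1, 1.7613e-1]]   # design for delta = 0.2

for i, row in enumerate(J):
    # within each row the diagonal (matched-delta) entry is the smallest
    assert min(range(len(row)), key=row.__getitem__) == i
```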
\begin{figure}\label{fig:p2opt} \end{figure} \begin{figure}\label{fig:p2meshindep} \end{figure} \begin{table} \centering \begin{tabular}{rrrr} \(\delta\) & \(h\) & \(J^*\) & \(N\)\\ \hline \(0.2\) & \(3.54\cdot 10^{-2}\) & \(1.6902 \cdot 10^{-1}\) & \(1080\)\\ \(0.2\) & \(1.77\cdot 10^{-2}\) & \(1.7327 \cdot 10^{-1}\) & \(1062\)\\ \(0.2\) & \(8.84\cdot 10^{-3}\) & \(1.7516 \cdot 10^{-1}\) & \(1300\)\\ \(0.2\) & \(4.42\cdot 10^{-3}\) & \(1.7613 \cdot 10^{-1}\) & \(1139\) \\ \hline \(0.1\) & \(3.54\cdot 10^{-2}\) & \(1.2857 \cdot 10^{-1}\) & \(514\)\\ \(0.1\) & \(1.77\cdot 10^{-2}\) & \(1.3242 \cdot 10^{-1}\) & \(1116\)\\ \(0.1\) & \(8.84\cdot 10^{-3}\) & \(1.3480 \cdot 10^{-1}\) & \(1395\)\\ \(0.1\) & \(4.42\cdot 10^{-3}\) & \(1.3560 \cdot 10^{-1}\) & \(1521\)\\ \hline \(0.05\) & \(3.54\cdot 10^{-2}\) & \(1.0721 \cdot 10^{-1}\) & \(261\)\\ \(0.05\) & \(1.77\cdot 10^{-2}\) & \(1.1066 \cdot 10^{-1}\) & \(790\)\\ \(0.05\) & \(8.84\cdot 10^{-3}\) & \(1.1139 \cdot 10^{-1}\) & \(1358\)\\ \(0.05\) & \(4.42\cdot 10^{-3}\) & \(1.1185 \cdot 10^{-1}\) & \(2394\) \\ \hline \end{tabular} \caption{Summary of the results for the non-convex case \(p=2\). \(J^*\): computed objective value; \(N\): number of OC iterations.} \label{tbl:p2} \end{table} \begin{table} \centering \begin{tabular}{rrrr} Original \(\delta\) & \(J\) for \(\delta=0.05\) & \(J\) for \(\delta=0.1\) & \(J\) for \(\delta=0.2\)\\ \hline \(0.05\) & \(\mathbf{1.1185\cdot 10^{-1}}\) & \(1.1724\cdot 10^{-1}\) & \(1.3225\cdot 10^{-1}\)\\ \(0.1\) & \(1.3954\cdot 10^{-1}\) & \(\mathbf{1.3560\cdot 10^{-1}}\) & \(1.4172\cdot 10^{-1}\)\\ \(0.2\) & \(1.9678\cdot 10^{-1}\) & \(1.8213\cdot 10^{-1}\) & \(\mathbf{1.7613\cdot 10^{-1}}\)\\ \hline \end{tabular} \caption{``Cross-checking'' of the computed designs, \(h=4.42\cdot 10^{-3}\) in all cases.} \label{tbl:p2_crosscheck} \end{table} \end{document}
Pullback attractors for three dimensional non-autonomous planetary geostrophic viscous equations of large-scale ocean circulation
Bo You (School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China), Chengkui Zhong and Fang Li (Department of Mathematics, Nanjing University, Nanjing 210093, China)
Discrete & Continuous Dynamical Systems - B, June 2014, 19(4): 1213-1226. doi: 10.3934/dcdsb.2014.19.1213. Received June 2013; revised January 2014; published April 2014.
This paper is devoted to the existence of pullback attractors for the process $\{U(t,\tau)\}_{t\geq \tau}$ associated with the three-dimensional non-autonomous planetary geostrophic viscous equations of large-scale ocean circulation. We first prove the existence of pullback absorbing sets in $H$ and $V$ for the process associated with (1)-(8), and then prove the existence of a pullback attractor in $H$ by the Sobolev compact embedding theorem. Finally, we obtain the existence of a pullback attractor in $V$ for the process associated with (1)-(8) by verifying the pullback $\mathcal{D}$ condition (PDC).
Keywords: planetary geostrophic viscous equations, Sobolev compact embedding theory, pullback $\mathcal{D}$ condition (PDC), pullback attractor.
Mathematics Subject Classification: Primary: 37B55; Secondary: 35Q8.
Advances in BeiDou Navigation Satellite System (BDS) and satellite navigation augmentation technologies
Rui Li, Shuaiyong Zheng, Ershen Wang, Jinping Chen, Shaojun Feng, Dun Wang & Liwen Dai
Satellite Navigation, volume 1, Article number: 12 (2020). A Correction to this article was published on 31 March 2020.
Abstract: Several noteworthy breakthroughs have been made with the BeiDou Navigation Satellite System (BDS), other global navigation satellite systems, and the associated augmentation systems, such as the commissioning of the BDS-3 preliminary system and the successful launch of the first BDS-3 GEO satellite, which carries the satellite-based augmentation payload. Presently, BDS can provide basic services globally, and its augmentation system is being tested. This paper gives an overview of BDS and satellite navigation augmentation technologies. The overview is divided into four parts: system segment technologies, satellite segment technologies, propagation segment technologies, and user segment technologies. In each part, these technologies are described from the perspectives of preliminary information, research progress, and summary. Moreover, the significance and progress of the BeiDou Satellite-Based Augmentation System (BDSBAS), low earth orbit augmentation, and the National BeiDou Ground-based Augmentation System are presented, along with the airborne-based augmentation system. Furthermore, conclusions and discussions covering popular research topics, frontiers in research and development, achievements, and suggestions are given for future research.
GPS came into full operation in 1995, after which Russia, Europe, and China also put their satellite navigation systems into use (Elliott and Christopher 2006).
In recent years, China has been actively promoting the construction and development of the BeiDou Navigation Satellite System (BDS). By the end of 2000 the construction of BDS-1 was complete, and BDS-1 began to provide positioning services for China. The construction of BDS-2 was completed at the end of 2012, after which BDS-2 provided services across the Asia–Pacific region. China has also been working on extending BDS to serve users all over the world. With the rapid development of BDS in all aspects, the chief designer of BDS, Yang Changfeng, announced the initial operation of BDS-3 at the State Council Information Office of China in early 2019, and since this time BDS-3 has been officially providing global positioning, navigation, and timing services. The construction of BDS-3 is expected to be completed around 2020, after which BDS-3 will be able to provide services to the world on a wider basis. It is well known that the navigation service provided by GNSS covers use with bicycles, cars, and trains, which require adequate position accuracy. As GNSS is gradually applied to civil aviation, stricter requirements need to be satisfied for safe flight. In addition to accuracy, integrity is also emphasized for the safe navigation of aircraft. The civil aviation authorities have produced requirements for satellite navigation systems in terms of required navigation performance (RNP), including accuracy, integrity, continuity, and availability. The requirements in terms of accuracy and integrity (Guo et al. 2019) are illustrated in Fig. 1. The requirements for accuracy in civil aviation vary from the kilometer level to the meter level, from en-route flight down to CAT-III approaches, and the requirements for integrity risk probability lie between 10⁻⁴ per hour and 10⁻⁹ per approach.
Fig. 1 Performance requirements in terms of accuracy and integrity
The International Civil Aviation Organization (ICAO) has recommended specific GNSS requirements for different phases of flight (Anonymous 2006), which are listed in Table 1. The ICAO divides satellite navigation augmentation systems into three categories: the Satellite-Based Augmentation System (SBAS), the Ground-Based Augmentation System (GBAS), and the Airborne-Based Augmentation System (ABAS). Generally speaking, as GBAS and SBAS are built with corresponding auxiliary facilities, they are also known as external augmentation systems. ABAS is built around an internal receiver or other airborne navigation sources and is hence sometimes called an internal augmentation system.
Table 1 ICAO GNSS requirements for different phases of flight
Satellite navigation augmentation technology is technology that can further improve the accuracy, integrity, continuity (Li et al. 2018), and availability of GNSS by generating augmentation information or adding signal sources. More specifically, satellite navigation augmentation technologies mainly include: satellite integrity monitoring and alarms; GNSS signal propagation technologies such as ionospheric propagation modeling, monitoring, and alarms; wide-area integrity monitoring; local-area integrity monitoring; GNSS differential correction; pseudolite augmentation; the design and implementation of GNSS augmentation systems; the analysis and verification of accuracy, integrity, continuity, and availability; the accurate detection of faults and alarms; integrity risk modeling; anti-spoofing; and other augmentation technologies. According to their operating principle, satellite navigation augmentation technologies are divided into two categories: information augmentation technologies and signal augmentation technologies.
With information augmentation technologies, a ground tracking network calculates GNSS signal error corrections and integrity information. The network then broadcasts these data to users through the internet or satellite communication channels, and the user utilizes the received corrections and integrity information to process the received GNSS signals. For instance, the augmentation technology of GBAS is an information augmentation technology. Signal augmentation technologies refer to technologies which provide additional ranging signals to complement those already provided by GNSS. Through the use of signal augmentation technologies, users can obtain accurate and reliable position information in situations where GNSS alone cannot be used or does not work sufficiently well (Ge et al. 2018; Wang et al. 2018a, b). For example, Low Earth Orbit (LEO) augmentation technology is a type of signal augmentation technology (Li et al. 2019a; Ma 2018; Reid et al. 2016). SBAS is a combination of information augmentation and signal augmentation technologies, providing multiple augmentations including ranging signals, corrections, and integrity information. By architecture, satellite navigation augmentation technologies comprise four segments: system segment technologies, satellite segment technologies, propagation segment technologies, and user segment technologies. In this study, the theory and application of the augmentation technologies used in BDS/GNSS augmentation systems are introduced, with an emphasis on the theory and application of the BeiDou Satellite-Based Augmentation System (BDSBAS) and the National BeiDou Ground-based Augmentation System (NBGAS). Popular topics in satellite navigation augmentation technologies are summarized from these four perspectives. Through the introduction of the relevant technologies, the development status and trends in satellite navigation augmentation technologies are presented, providing a reference for future research.
System segment technologies The system segment is the core of the GNSS augmentation system with the mission of monitoring GNSS satellites and generating differential corrections and integrity information over a wide area. System segment technologies are the key to the construction of GNSS augmentation systems and determine the service capability of GNSS augmentation systems. GNSS augmentation technologies used in system segments can be divided into two types: technology used in the SBAS system segment and technology used in the GBAS system segment. Technologies of SBAS system segment SBAS system segments are mainly composed of monitor stations and master stations. Monitor stations receive and process signals from the GNSS and SBAS satellites (Parkinson et al. 1996). More specifically, monitor stations collect dual frequency code and carrier measurements of the pseudorange. Cycle slip detection and repair are implemented with the dual frequency code and carrier measurements, and the dual frequency carrier is used to smooth the dual frequency pseudorange, subsequently removing the ionospheric delay. Meteorological parameters are utilized to compute the tropospheric delay (SC-159 2013). The geometric range can be computed using the navigation message and the position of the monitor station. Common view time transfer is used to estimate the clock offset of the monitor station and the synchronized pseudorange (Chen et al. 2017; Tsai 1999). Finally, the monitor stations forward the resultant data to the master stations. Master stations generate differential corrections and integrity information with regard to each monitored satellite and each monitored ionosphere grid point (IGP). Differential corrections include long-term corrections, fast corrections, and ionospheric corrections. Long-term corrections and fast corrections are used to mitigate the slowly changing errors of the satellite clock-ephemeris and the rapidly changing errors of the satellite clock, respectively.
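The carrier smoothing of the pseudorange mentioned above is commonly done with a Hatch filter: the carrier phase, which is precise but ambiguous, propagates the previous smoothed estimate, and the noisy code measurement is blended in with a decreasing weight. The following is a minimal single-frequency sketch (function name and window length are illustrative, not taken from the SBAS specification):

```python
def hatch_filter(pseudoranges, carrier_phases, window=100):
    """Carrier-smoothed pseudorange (Hatch filter sketch).

    pseudoranges and carrier_phases are lists in meters sampled at the
    same epochs; `window` is the smoothing window length in epochs.
    """
    smoothed = [pseudoranges[0]]
    for k in range(1, len(pseudoranges)):
        n = min(k + 1, window)
        # Propagate the previous smoothed value with the carrier-phase
        # delta, then blend in the new code measurement with weight 1/n.
        predicted = smoothed[-1] + (carrier_phases[k] - carrier_phases[k - 1])
        smoothed.append(pseudoranges[k] / n + predicted * (n - 1) / n)
    return smoothed
```

In a dual frequency monitor station the same recursion is applied to the ionosphere-free combinations, so that the smoothing is not biased by ionospheric divergence.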
Ionospheric corrections are used to correct the ionospheric delay, and the IGP values are interpolated to estimate the pseudorange error at the user's ionospheric pierce point. The accuracy of the position provided by GNSS can be improved with differential corrections. Integrity information includes the user differential range error (UDRE), the clock-ephemeris covariance matrix, and the grid ionospheric vertical error (GIVE). UDRE is used to compute the integrity error bound for satellite clock-ephemeris errors and GIVE is used to compute the integrity error bound for ionospheric errors. Using GIVE and UDRE, the integrity protection levels of SBAS can be computed to bound the positioning error with the required probability. After generation by the master stations, the differential corrections and integrity information are quantified. All these data are packaged into SBAS messages and sent to navigation earth stations, as illustrated in Fig. 2. These stations then upload this augmentation information to SBAS satellites in space, and it is broadcast to users. The user receives the basic navigation signal from the GNSS satellites and the augmentation signal from the SBAS satellites, and thus determines location and safety information. Workflow of SBAS Research progress Satellite navigation augmentation systems, especially SBAS, have been a popular topic for many years. Existing SBAS systems include the Wide Area Augmentation System (WAAS) in the United States, the European Geostationary Navigation Overlay Service (EGNOS) in Europe, the System of Differential Correction and Monitoring (SDCM) in Russia, BDSBAS in China, the MTSAT Satellite-based Augmentation System (MSAS) in Japan, and the GPS-Aided GEO-Augmented Navigation (GAGAN) in India. WAAS has been in full use for 16 years, providing the LPV200 service. China successfully launched the GEO-1 satellite (SBAS PRN 130) on November 1, 2018. BDSBAS is still under construction, with two more GEOs yet to be launched (Liu 2019).
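The protection level computation from the UDRE- and GIVE-derived variances can be sketched as follows, in the style of the RTCA DO-229 formulation: each satellite's error variance is the sum of the fast/long-term correction term, the ionospheric term, and the airborne and tropospheric terms, projected through the position-solution coefficients. The function name and the argument layout are illustrative; k = 5.33 is the vertical multiplier for the SBAS integrity risk allocation.

```python
import math

def protection_level(s_coeffs, sigma_flt, sigma_uire, sigma_air, sigma_tropo, k=5.33):
    """Sketch of an SBAS protection-level computation (DO-229 style).

    s_coeffs: per-satellite projection coefficients from the weighted
    least-squares position solution; the sigma_* lists hold the
    per-satellite 1-sigma error terms (clock-ephemeris from UDRE,
    ionosphere from GIVE, airborne receiver, troposphere).
    """
    variance = sum(
        s ** 2 * (sf ** 2 + su ** 2 + sa ** 2 + st ** 2)
        for s, sf, su, sa, st in zip(s_coeffs, sigma_flt, sigma_uire,
                                     sigma_air, sigma_tropo)
    )
    return k * math.sqrt(variance)
```

Flight operations compare the resulting protection level against the alert limit of the intended phase of flight (Table 1) to decide availability.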
The BDSBAS signal has been broadcast since November 9, 2018 (Liu 2019). BDSBAS is now broadcasting augmentation information to test the performance of SBAS. The augmentation information broadcast by BDSBAS does not yet conform to the service requirements of the Minimum Operational Performance Standards 229 of the Radio Technical Commission for Aeronautics (RTCA MOPS 229); hence BDSBAS cannot yet provide services for civil aviation. The goals of the construction of BDSBAS are to perform wide area differential processing and integrity monitoring for BDS/GNSS satellites in China and its surrounding areas, to broadcast augmentation messages via the B1C and B2a signals to users through the GEO satellites, and to improve the accuracy, integrity, continuity, and availability of the service. BDSBAS will initially meet the requirements of civil aviation users for APV-I precision approach, ultimately reaching the level required for CAT-I precision approach. BDSBAS interoperates with WAAS, EGNOS, and other SBAS to provide information for the RTCA L1CA and RTCA DFMC interfaces. The BDSBAS satellites are located at 80°E, 110°E, and 140°E (Chen 2019; Guo et al. 2019; Liu 2019). Few papers have been produced concerning BDSBAS. Chen Jinping, chief engineer of the Beijing Satellite Navigation Center, designed and carried out preliminary testing on BDSBAS, producing a general design including the system work mode, the information processing mode, and a performance analysis (Chen 2019), as shown in Fig. 3. The pseudorange and code phase of the monitor stations are returned to the information processing center at the master station in order to calculate the corrections for the satellites, the ionospheric grid delay, and the corresponding integrity information for multi-GNSS. Each monitor station is equipped with three independent receivers which are used for the calculation of differential corrections, integrity checks, and the backup of data.
The master stations simultaneously process augmentation information for the RTCA L1CA interface with the B1C signal and the DFMC interface with the B2a signal. This information is then broadcast by the GEO satellites. BDSBAS operating process BDSBAS uses the kinematics model and the dynamics model for orbit and clock correction, respectively. According to the analysis of the regional monitoring network data, the accuracy of the corrections for the orbit offset is similar to that of the corrections for the clock offset. Results show that the UDRE of BDS-3 is near 0.4 m, which is conservative compared with that of WAAS and EGNOS. The parameters UDRE and GIVE provide the integrity confidence bounds for the corresponding corrections. Degradation parameters are useful for guaranteeing service integrity on the rare occasions when SBAS users fail to receive differential corrections. BDSBAS monitor stations across China have been selected as users to analyze the performance of BDSBAS in the position domain; the single point positioning accuracy of BDSBAS is similar to that of WAAS and EGNOS. However, the probability of integrity is lower for BDSBAS than it is for WAAS and EGNOS. BDSBAS will soon be able to provide RTCA L1CA, RTCA DFMC, and other protocol augmentation information (Chen 2019). The dual frequency range error (DFRE) is a critical integrity parameter in DFMC SBAS, and the method used to calculate DFRE has not yet been introduced in the relevant literature abroad. Shao Bo, an engineer at the 20th Research Institute of China Electronics Technology Group Corporation, developed an integrity method for DFRE using a projection method. The satellite clock-ephemeris covariance matrix is used to find the maximal projection direction, and the projection of the covariance matrix in this direction is defined as the DFRE, which bounds the satellite correction error (Shao 2019; Shao et al. 2011).
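The core of the projection method can be illustrated with a simplified sketch. For a symmetric clock-ephemeris covariance matrix, the maximal projection over all unit directions equals its largest eigenvalue, so a conservative DFRE can be scaled from that eigenvalue. This is a hypothetical simplification of Shao's method (the published method projects onto user line-of-sight directions; the scale factor k below is an assumed parameter, not a value from the cited work):

```python
import numpy as np

def dfre_from_covariance(cov, k=3.29):
    """Hypothetical sketch of a projection-style DFRE computation.

    cov is the 4x4 satellite clock-ephemeris covariance matrix
    (position components and clock, all in meters). The maximal
    projection of cov onto any unit direction is its largest
    eigenvalue; k scales the 1-sigma value up to an integrity bound.
    """
    # eigvalsh handles symmetric matrices; the largest eigenvalue
    # corresponds to the maximal projection direction.
    lam_max = max(np.linalg.eigvalsh(np.asarray(cov, dtype=float)))
    return k * float(np.sqrt(lam_max))
```

Because the eigenvalue bound dominates every line-of-sight projection, a DFRE built this way envelops the corrected error for any user geometry, which matches the enveloping behavior reported in the cited tests.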
The DFRE of BDS and GPS were solved and compared with the maximal corrected error using the observations made by the 24 monitor stations in China shown in Fig. 4. The DFRE, calculated using the projection method, can form an envelope for the maximal corrected error, which is suitable for monitoring the integrity of different constellations or types of satellites (Shao 2019; Shao et al. 2012). Results show that the DFRE solved with the projection method can bound the satellite correction error with a probability of 99.9%, and is suitable for use with different constellations or different kinds of satellites. After further validation, the method can be applied to the DFRE calculation of BDSBAS (Shao 2019; Shao et al. 2011). Distribution of the selected monitor stations in China In 2013, the China Aerospace Science and Technology Corporation Ltd signed a contract with the Algerian space agency regarding the Algerian communication satellite (Alcomsat-1). Alcomsat-1 utilizes a DFH-4 satellite and is equipped with 33 transponders, including the L1 and L5 navigation augmentation transponder payloads. The China Aerospace Science and Technology Corporation Ltd constructed a satellite augmentation system based on the Alcomsat-1, named Al-SBAS, in order to provide services for Algeria and the surrounding area. The Alcomsat-1 communications satellite was launched on December 11, 2017, and is located at 24.8° W in a geostationary orbit (Li 2019). Compatible with ICAO standards and based on ALCOMSAT-1, the SBAS aims to improve the positioning accuracy and integrity in Algeria and the surrounding area, providing services for users in many fields such as surveying, transportation, aviation, railways, and the ocean (Li 2019), as seen in Fig. 5. 
The system collects GPS observations and solves GPS satellite ephemeris errors, clock errors, and ionospheric errors together with the corresponding integrity parameters in real-time, and broadcasts differential corrections and integrity information through GEO satellites with a high accuracy and a significant capability for integrity augmentation (Li 2019). Al-SBAS operating process based on ALCOMSAT-1 The ground segment of Al-SBAS is composed of 18 monitor stations in Algeria, a data processing center in Algiers, and an uplink station in Algiers. Signals that are supported include GPS, GLONASS, BDS, and GALILEO. The data processing center is used to perform local system redundancy, automatic switch-over, autonomous state monitoring, and fast seamless recovery. The system collects data from GPS monitor stations to generate satellite corrections, ionospheric corrections, and integrity parameters, and then broadcasts these augmentation data through the GEO satellite Alcomsat-1 which is equipped with two way L1 and L5 navigation augmentation payloads (Li 2019). The positioning accuracy of the single frequency and dual frequency services is at a sub-meter level. Follow-up plans include continuous operation, monitoring and evaluation of the L1 augmentation service, updating the L5 augmentation signal, BDS augmentation services, the issue of a signal-in-space ICD, and public services. AL-SBAS improves the positioning accuracy and integrity of GPS in the Algerian region. The construction and final testing of the ground monitor stations, data processing center, and the uplink station for AL-SBAS have been completed (Li 2019). The future prospects of Al-SBAS in Algeria and its surroundings are therefore promising. The Wide Area Precise Positioning System (WAPPS) is a GNSS differential system for real-time high accuracy positioning and navigation as shown in Fig. 6. 
By referring to the work mode of SBAS, WAPPS broadcasts corrections via satellites and the internet to enable users to perform precise point positioning (PPP) (Shen et al. 2019). With the advantages of a low station density, a wide service range, diverse services, and a simple user terminal, WAPPS is applied to marine transportation, surveying, and precision agriculture, among others. WAPPS provides safety-related services, for which real-time, high-accuracy integrity information can be obtained from the WAPPS signal broadcast by satellites and the WAPPS service on the internet (Wang 2019e). Demonstration of the WAPPS operating process Similar to other differential systems such as SBAS and GBAS, integrity monitoring also needs to be performed for WAPPS. The performance of both the pseudorange and carrier corrections is monitored to guarantee the service performance of PPP. The probability of a missed alert is 10−3, the probability of a false alert is 10−5, and the time to alert is less than 10 s. WAPPS provides services using the dual frequency ionosphere-free combination, which removes the ionospheric delay sufficiently, even when abnormal phenomena occur in the ionosphere. Carrier cycle slips and other abnormalities in the user segment must be handled by the receiver and are not considered in the integrity monitoring of the system. WAPPS integrity fault modes can be divided into step faults and slow drift faults. All faults will affect the pseudorange, the carrier observations, and the positioning performance. Results reveal that in normal status, integrity monitoring can meet the requirements for missed alerts and false alerts. When step faults or slow drift faults occur in the corrections, an alert has to be sent to the user within a short period of time.
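The dual frequency ionosphere-free combination works because the first-order ionospheric delay scales with the inverse square of the carrier frequency, so a fixed linear combination of the two pseudoranges cancels it. A minimal sketch, with the L1/L5 (B1C/B2a-class) frequencies as assumed defaults:

```python
def ionosphere_free(p1, p2, f1=1575.42e6, f2=1176.45e6):
    """Dual-frequency ionosphere-free pseudorange combination (meters).

    The first-order ionospheric delay scales as 1/f**2, so with
    g = (f1/f2)**2 the delay on f2 equals g times the delay on f1,
    and the combination below cancels it exactly to first order.
    """
    g = (f1 / f2) ** 2
    return (g * p1 - p2) / (g - 1)
```

The price of the combination is amplified code noise (the coefficients exceed 1 in magnitude), which is one reason the carrier observations are monitored alongside the pseudorange.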
A slow drift rate of 0.1 m/s can be detected within 5 s with carrier phase monitoring (Wang 2019e). An engineer at Space Star Technology Co., Ltd recently predicted the service performance of single frequency BDSBAS and dual-frequency multi-constellation (DFMC) SBAS in China without considering the broadcast of the clock-ephemeris covariance matrix and the quantification of the augmentation information (Chen et al. 2019). The results of this research are better than the BDSBAS performance expected by the general engineering community, which has raised many questions. Considerable work is needed to forecast the service performance of BDSBAS. BDSBAS is still under construction. China successfully launched the first SBAS satellite and began broadcasting its augmentation signal last year. Two SBAS satellites still remain to be launched. Although BDSBAS has been broadcasting an augmentation signal for approximately 1 year, BDSBAS is still being tested and therefore cannot yet provide services for civil aviation. Several studies have been carried out concerning the system segment of BDSBAS. Methods for generating the augmentation information are being developed at both institutes and universities. Some scholars have presented the design and preliminary analysis of BDSBAS, the principles of the differential corrections and integrity parameters, and the service performance of BDSBAS. The algorithms used in the system segment of SBAS are significant for the construction of BDSBAS and are hence popular targets for research. Technologies of GBAS system segment GBAS is a kind of differential GNSS (DGNSS) that is applied to aircraft for precision approach. GBAS is able to provide CAT-I and higher level precision approach and landing guidance services for aircraft equipped with the corresponding airborne equipment within the airspace of the terminal area of an airport (Geng 2019). Aviation GBAS consists of a ground segment and an airborne segment.
The ground segment consists of a reference receiving subsystem, a ground processing subsystem, a maintenance management subsystem, and a VDB (Very high frequency Data Broadcasting) subsystem. The airborne segment mainly refers to the multi-mode receiver (MMR). The reference receiving subsystem receives the ranging signals of the GNSS satellites and forwards them to the ground processing subsystem. The ground processing subsystem then generates the augmentation information for GBAS users. The maintenance management subsystem ensures the normal operation of the ground processing subsystem: it can detect any faults in the ground processing subsystem and send control commands to it. The VDB subsystem receives the augmentation information and transmits it to GBAS users; the MMR receives the augmentation information and calculates the location and the integrity information for the pilot. In detail, the ground processing subsystem generates the differential corrections for the visible satellites by combining the observations from each reference receiver. The integrity information of the visible satellites or the navigation system is formed at the same time through real-time monitoring of the navigation signal or of abnormalities at the ground stations. The final approach segment (FAS) data, corrections, and integrity information are then transmitted through VDB to the airborne users, as illustrated in Fig. 7. The airborne user receives the augmentation information from the VDB subsystem and generates flight information which is displayed in the flight control instruments and display.
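The combination of observations from the reference receivers can be sketched as follows. This is a simplified illustrative form, not the algorithm from the GBAS standard: for each receiver, the raw correction for a satellite is the surveyed geometric range minus the measured pseudorange, adjusted for that receiver's clock, and the broadcast correction averages the per-receiver values.

```python
def gbas_corrections(measured, geometric, clock_offsets):
    """Simplified sketch of GBAS pseudorange-correction generation.

    measured[i][j]:  pseudorange from reference receiver i to satellite j (m)
    geometric[i][j]: computed range from the surveyed antenna position
                     and the broadcast ephemeris (m)
    clock_offsets[i]: estimated receiver clock offset, in meters
    Returns one averaged correction per satellite.
    """
    n_rx = len(measured)
    n_sat = len(measured[0])
    corrections = []
    for j in range(n_sat):
        # Per-receiver raw correction, then average across receivers.
        per_rx = [geometric[i][j] - measured[i][j] + clock_offsets[i]
                  for i in range(n_rx)]
        corrections.append(sum(per_rx) / n_rx)
    return corrections
```

Averaging across closely spaced receivers suppresses receiver-specific noise and multipath while preserving the spatially correlated errors that the airborne user shares, which is exactly what makes the short-baseline geometry described below effective.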
As GBAS normally uses specialized reference stations that are located close to each other and distributed around an airport, and the distance between the airborne users and the GBAS stations is short (less than 50 km), the errors between them are strongly correlated; GBAS can thus improve the positioning accuracy and integrity of the airborne users. Demonstration of aviation GBAS GBAS is currently under development worldwide. The Honeywell SLS-3000 GBAS ground stations were installed at Malaga airport in Spain early in 2007. France conducted a signal-in-space verification of the CAT-I ground station for GBAS that was installed at Toulouse in 2006, and continuously monitors GBAS performance. As of 2009, the FAA has placed multiple CAT-I GBAS installations into service using the Honeywell SLS-4000 ground station. In 2017, the FAA began to carry out system design approval for the GAST-D system. However, the FAA had to suspend this project due to a lack of funding. In 2018, Japan finished the development and deployment of ground and airborne subsystem software for GBAS, and began to perform data collection and analysis. Japan plans to conduct air-ground experiments in 2019. In China, the China Electronics Technology Group Corporation is performing research and development on the GBAS Approach Service Type D (GAST-D), using its CAT-I GBAS products as a base. China is also conducting research on NBGAS, using the BDS. NBGAS is a pivotal part of BDS. NBGAS uses an advanced system architecture, data processing system, and software to realize multiple-mode positioning accuracy augmentation, utilizes various broadcasting means to broadcast augmented data products, and provides positioning with accuracy at the meter, decimeter, and centimeter level, or at the millimeter level after post-processing.
Geng Yongchao, a senior engineer at the CETC Northwest Group Co., Ltd, developed GBAS for civil aviation, including the concept of using GBAS for civil aviation, the civil aviation GBAS architecture, and progress in GBAS. The CETC Northwest Group Co., Ltd has performed flight testing with inspection aircraft. After performing a static check, a taxi check, GBAS reception, and a flight test, the group concluded that the LGF-1A GBAS equipment can provide aircraft precision approach, automatic landing, and taxi guidance services under the current regulatory specifications (Geng 2019). Cai Yi, a chief designer at the China Research & Development Academy of Machinery Equipment, developed NBGAS, with test equipment and a user terminal as shown in Fig. 8. NBGAS refers to the national BeiDou ground-based augmentation system, which can provide services for ships, trains, cars, etc. Unlike GBAS, which is designed to serve civil aviation, NBGAS is designed to provide a high precision positioning service for many areas. NBGAS focuses on the services of satellite broadcasting, digital radio, and mobile communication. NBGAS uses more widely distributed reference stations (RSs) than GBAS, and pays significant attention to positioning accuracy. Obviously, NBGAS cannot support the current ICAO GBAS standards or provide service for civil aircraft as GBAS does. NBGAS is composed of BeiDou augmentation RSs which are distributed all over China, data processing systems, data broadcasting systems, and user terminals. BeiDou wide area RSs and regional RSs collect the GNSS signals and forward these data to the corresponding data processing systems. NBGAS data processing systems include industrial data processing systems and national data processing systems, both of which output high accuracy real-time (RT) products from the meter level to the millimeter level, which are sent to users through the data broadcasting systems.
Finally, users such as car drivers and pilots can obtain NBGAS services. Workflow of NBGAS The real-time positioning accuracy of NBGAS was tested at the meter level, decimeter level, and centimeter level, and at the post-processing millimeter level, by Cai Yi. Results showed that the positioning accuracy of NBGAS meets or exceeds the design performance. According to the service capability tests for real-time positioning accuracy at the meter level to the post-processing millimeter level, the positioning accuracy of NBGAS meets or exceeds that indicated by the system design (Cai 2019). With the rapid development and improvement of BDS, the system is being widely applied in many areas. As an important navigation system, the ground-based augmentation system has begun to provide services for all kinds of users in combination with BDS. Three typical ground-based augmentation systems are under discussion: the civil aviation ground-based augmentation system, the Chinese ground-based augmentation system, and the national BeiDou ground-based augmentation system. The civil aviation ground-based augmentation system represents the traditional type of GBAS that is focused on civil aviation. The Chinese ground-based augmentation system represents the GBAS focused on aviation that was developed in China. The national BeiDou ground-based augmentation system refers to NBGAS, which provides services for trains, cars, etc. The Chinese ground-based augmentation system has passed technical reviews and system tests so far, and has reached the stage of verification flight. To implement the CAT II/III research, the first verification flight of the Chinese GBAS was performed in April 2019, and further GAST-D technical tests are being carried out during the second half of 2019 and into 2020. NBGAS officially started work in 2014 and provided real-time accuracy at the centimeter level and post-processing accuracy at the millimeter level in 2016.
In 2017, NBGAS service performance specification version 1.0 was released, and NBGAS began to undergo thorough testing in 2018. The augmentation of the real-time positioning accuracy at the meter level, decimeter level, and centimeter level, and of the post-processing accuracy at the millimeter level, was tested, with results indicating that the positioning accuracy of NBGAS either meets or exceeds the design performance. NBGAS can provide real-time accuracy at the meter and decimeter level as of 2019. In summary, the development of GBAS in China has reached an early stage. Research on GBAS is considered significant and has practical value for future aerospace missions. Research into NBGAS and civil aviation GBAS has caught the attention of many researchers and is increasingly popular. Satellite segment technologies The satellite segment is the space part of the satellite navigation system and the key to constructing BDS/GNSS augmentation systems; it is composed of GNSS satellites and SBAS satellites. GNSS satellites mainly include GPS, GLONASS, Galileo, and BDS satellites, which are used to broadcast navigation messages and ranging signals, provide daily services for users all over the world, and play a pivotal part in daily life. SBAS satellites include the GEO and IGSO satellites which are used to transmit differential corrections and integrity information to civil aviation users and therefore have a significant effect on flight safety. Satellite segment technologies are therefore investigated and summarized in this paper. Technologies of LEO satellite segment Signal augmentation technologies refer to the technology of signal extension which provides a signal source capable of transmitting ranging signals to expand and extend GNSS. Through signal augmentation technologies, users can obtain accurate and reliable locations in scenarios where GNSS cannot be used or does not work well (Ma 2018; Reid et al. 2016; Wang et al. 2018a).
To overcome the vulnerability of the GNSS system from the perspective of availability, reliability, and anti-interference, some researchers both in China and abroad have proposed that LEO satellites broadcast a ranging signal to augment the performance of the satellite navigation system that is composed of Medium Earth Orbit (MEO) and High Earth Orbit (HEO) satellites. Therefore, LEO augmentation technologies are under development, attracting the attention of many researchers. There are many advantages of using LEO satellites for satellite navigation signal augmentation in addition to the increase in visibility and the improvement to accuracy (Wang et al. 2018b), including the following: (1) LEO satellites move fast, which helps to speed up the convergence of the precise positioning filter. Results show that the convergence time of precise point positioning is shortened from 30 to 1–2 min when using LEO; (2) As LEO satellites are at low altitude, the signal-in-space is stronger than that of the MEO or HEO satellites. (3) The range covered is larger than that of GBAS. (4) LEO satellites can provide global signal augmentation by constructing a global LEO satellite platform (Ma 2018; Reid et al. 2016). Several companies have proposed plans to construct LEO constellations, such as SpaceX, the China Aerospace Science and Technology Corporation, and the China Aerospace Science and Industry Corporation (2018). On January 8, 2017, SpaceX adopted a ten-satellite solution to send the first 10 LEO satellites of the second generation of the Iridium plan into space. The Hongyan and Hongyun LEO satellites have entered the stage of research and development, and the whole constellations will be completed in 2023. Some academic research has also been conducted concerning LEO satellites. The Luojia-1A satellite that was developed by Wuhan University was successfully launched on June 2, 2018.
The Luojia-1A satellite has the main function of providing precise measurements of orbit and timing, storing GNSS onboard observations and downloading to the ground, and LEO augmentation experiments. The Luojia-1A satellite was used to carry out experiments investigating the navigation signal augmentation based on the LEO satellite platform in China for the first time (Wang et al. 2018a). The LEO augmentation system onboard the Luojia-1A satellite receives the dual frequency signal from BDS and GPS for real-time autonomous precise orbit determination and timing, and generates dual frequency ranging augmentation signals that are sent to the user on the ground as shown in Fig. 9. Users can simultaneously receive a dual frequency signal from BDS, GPS, and Luojia-1A in order to achieve the hybrid satellite constellation positioning of the HEO, MEO, and LEO satellites, improving the performance in navigation and positioning (Wang et al. 2018b). As the frequency of the augmentation signal from Luojia-1A is different from that of the existing GNSS signal, a modified receiver with specific receiving antenna is installed onto the equipment of a user to collect the signals from both GNSS and Luojia-1A. The signal augmentation experiments carried out with Luojia-1A reveal that the accuracy of the zero-baseline single difference signal reaches approximately 1 m and 4 mm for the code and carrier phase, respectively. The attenuation of the space signal from LEO satellites changes significantly during a single transit and the large variation in space signal loss needs to be compensated for by specially designed signal transmitting antennas such as the ISO-flux antennas. Conclusions can be drawn that the signal attenuation can be resolved by the antenna array through beam synthesis and that the ISO-flux radiation figure is then formed by the antenna array. 
The significant Doppler and acceleration variation in the LEO signal provides both challenges and opportunities for receiver design. Balancing dynamics with code accuracy is still difficult (Wang et al. 2018a, 2019). Principle of the Luojia-1A augmentation system The LEO satellite moves faster than the MEO or HEO satellites. The phase center variation (PCV) of the satellite antenna observed by users changes rapidly, along with the azimuth and elevation angle. The PCV therefore needs to be measured at a higher sampling rate than for GNSS. PCV calibration also has to be conducted in order to improve the accuracy of the satellite antenna. Lei Wenying, an engineer at the China Academy of Space Technology (Xi'an), conducted research into LEO navigation augmentation to address this problem. For LEO navigation augmentation, the PCV specifications that the LEO satellite antenna must meet for decimeter-level precise point positioning without PCV correction are studied. This analysis can provide support and reference for the design of satellite antennas for LEO systems (Lei et al. 2019). Considering the influence of broadcast ephemeris errors on the orbit determination of LEO satellite-borne GPS, the theory of satellite-based augmentation is applied to the real-time orbit determination of LEO satellite-borne GPS. At first, GPS real-time precise orbit determination is resolved with IGS, MGEX, and Unistrong data, and then high accuracy orbit and clock corrections are calculated using the real-time precise orbit and clock offset. Finally, the LEO satellite is used to verify the accuracy of the correction parameters. The onboard GPS observations of the LEO satellites Swarm A, B, and C are combined with the broadcast ephemeris and high-precision ephemeris corrections to determine the orbit.
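The scale of the Doppler challenge can be illustrated with a back-of-the-envelope bound: the worst-case Doppler shift scales with the orbital velocity over the speed of light, and a circular-orbit velocity follows directly from the orbit radius. This rough sketch (function name and the circular-orbit assumption are ours, not from the cited work) shows why a LEO signal at roughly 500 km altitude exhibits about twice the Doppler of an MEO GNSS signal:

```python
import math

def max_doppler(altitude_m, freq_hz=1575.42e6):
    """Rough upper bound on the Doppler shift seen by a ground user (Hz).

    Assumes a circular orbit and takes the full orbital velocity as the
    worst-case line-of-sight rate, so this is a loose upper bound.
    """
    GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
    RE = 6_378_137.0         # Earth equatorial radius, m
    c = 299_792_458.0        # speed of light, m/s
    v = math.sqrt(GM / (RE + altitude_m))  # circular orbital speed
    return v / c * freq_hz
```

With these assumptions a 500 km LEO satellite yields a bound of roughly 40 kHz at L1, versus about 20 kHz for a 20200 km MEO orbit, which is why LEO receivers need wider acquisition search spaces and faster tracking loops.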
The accuracy of the real-time orbit determination of satellite-borne GPS assisted by state space representation can reach 10–15 cm. The real-time correction information for LEO satellites is quite useful for determining the orbit of satellite-borne GPS (Wang 2019c). To further improve the performance of GNSS positioning, LEO augmentation technologies are proposed to provide more signal sources for users. With the advantages of fast movement and strong signal strength, LEO augmentation technology has attracted wide attention from researchers. Wuhan University both launched the Luojia-1A satellite and performed the navigation signal augmentation experiment based on the LEO satellite platform for the first time in China. The phase center variation (PCV) of the satellite antenna is calibrated to improve the antenna accuracy of LEO satellites. Satellite corrections are used to improve the real-time orbit determination accuracy of LEO satellites for satellite-borne GPS. Research on LEO augmentation technologies is growing rapidly. Technologies of GEO satellite segment GEO satellites tend to work as the information channel for the differential corrections and integrity information of SBAS, which are uploaded by navigation earth stations. GEO satellites receive SBAS augmentation data and transfer it to the ground user through a downlink. The GEO satellites of WAAS have been broadcasting WAAS messages for more than 15 years. However, details of the implementation of SBAS functions on the existing commercial GEO satellites are not public because they are proprietary to the satellite manufacturers and operators. It is therefore necessary to introduce more details of the GEO satellite segment. In China, some scholars began to report on the BDSBAS GEO satellites in 2019. Liu Tianxiong, a research fellow at the China Academy of Space Technology, described the BDSBAS space segment.
The BDSBAS GEO-1 satellite (PRN 130) was successfully launched on November 1, 2018, and the other two GEO satellites will be launched soon. The BDSBAS payload of the GEO satellite utilizes a processing transponder, as shown in Fig. 10. The augmentation signals of BDSBAS are generated by the mission navigation unit (MDU), which is composed of a navigational data unit and a navigational baseband. The process by which signals are broadcast by BDSBAS is the same as that of the BDS RNSS. Navigation data is uploaded from the control segment. The receiver recovers the uplink navigation data, which is then transferred to the MDU on the BDSBAS payload via the 1553B bus. The MDU generates the navigation message and the augmentation signal, which is broadcast by the B1, B2, and B3 array antennas. The augmentation signal and the RNSS signal are broadcast through the same RF link (Liu 2019).

Fig. 10 Design of the BDSBAS GEO payload

Tests show that the radio frequency characteristics of both the single frequency service and the dual frequency multi-constellation service meet the requirements of RTCA MOPS 229 and the ICAO SBAS ICD. The GEO-2 and GEO-3 satellites of BDS-3 will be launched in the near future. BDSBAS will initially provide the APV-I service, and finally the CAT-I service, for civil aviation in China. Wang Binghao, a Ph.D. candidate at the Information Engineering University, introduced the influence of GEO orbit error fluctuation on the BDS Wide Area Differential Service (WADS). Fluctuation is found frequently in GEO orbit errors; the fluctuation in the cross component of the GEO orbit errors is similar to a sinusoidal waveform. The fluctuation in the BDS GEO orbit error of the broadcast ephemeris degrades BDS WADS PPP performance. With accurate estimation and correction, the fluctuation can be effectively smoothed and the performance of WADS PPP improved. After correction, the effective range of the zone correction can be further expanded (Wang 2019a).
GEO satellites are usually used as an information channel for the differential corrections and integrity information of SBAS. In China, a few scholars began to introduce the BDSBAS GEO satellites in 2019. The BDSBAS GEO-1 satellite (SBAS PRN 130) was successfully launched on November 1, 2018, and the other two GEO satellites will be launched in the near future. The BDSBAS signal has been broadcast since November 9, 2018. Research on the SBAS satellite segment remains popular.

Propagation segment technologies

The propagation segment is the path of the information interaction between the satellite and the ground. Propagation segment technology mainly includes the tropospheric delay estimation technologies, ionospheric delay estimation technologies, and multipath mitigation technologies that are introduced below.

Technologies of tropospheric delay estimation

The troposphere is located at the bottom of the atmosphere; the top of the troposphere lies approximately 40 km above the ground. When the GNSS signal passes through the troposphere, the signal propagation time is prolonged, leading to tropospheric delay. Tropospheric delay is an error source that affects the performance of GNSS. In order to serve users with high accuracy requirements, it is necessary to estimate the tropospheric delay accurately. Unlike the ionosphere, the troposphere is non-dispersive at GNSS frequencies, so the delay does not depend on the signal frequency; the propagation velocity of a GNSS signal in the troposphere is related only to the meteorological conditions. Tropospheric delay is therefore usually determined using an empirical weather model (Elliott and Christopher 2006).
Tropospheric delay of SBAS is split into two parts and takes the form (SC-159 2013): $${\text{TC}}_{i} = - \left( {d_{hyd} + d_{wet} } \right) \cdot m\left( \theta \right)$$ where the variables \(d_{hyd} ,d_{wet}\) denote the estimated zenith range delays for a satellite at an elevation angle of 90°, caused by atmospheric gases in hydrostatic equilibrium and by water vapor, respectively. The variable \(\theta\) denotes the satellite elevation angle, and \(m\left( \theta \right)\) is the mapping function used to scale the zenith delays to the actual elevation angle. The two variables \(d_{hyd} ,d_{wet}\) are functions of meteorological parameters and can be calculated via linear interpolation (SC-159 2013). The tropospheric delay for GBAS is estimated by (SC-159 2004): $${\text{TC}}\left( \theta \right) = {\text{N}}_{R} h_{0} \frac{{10^{ - 6} }}{{\sqrt {0.002 + \sin^{2} \theta } }}\left( {1 - e^{{ - \frac{{\Delta h}}{{h_{0} }}}} } \right)$$ where the variables \({\text{N}}_{R} ,\Delta h,h_{0}\) denote the refractivity index transmitted by the ground subsystem, the difference in altitude between the airborne and ground subsystems, and the troposphere scale height transmitted by the ground subsystem, respectively (SC-159 2013). The meteorological phenomena in the troposphere are complex, and the value of the tropospheric delay changes with season and latitude. The tropospheric delay can be estimated via an empirical weather model using the corresponding meteorological parameters.

Technologies of ionospheric delay estimation

The ionosphere is an important atmospheric layer in the space environment of the Earth. Atmospheric molecules in the ionosphere are decomposed into ions and electrons under the influence of light from the Sun (Elliott and Christopher 2006). When the GNSS signal passes through the ionosphere, which is filled with electrons, its propagation speed and direction change, and the signal propagation time is prolonged.
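As a minimal illustration of the two tropospheric corrections above, the following sketch evaluates both formulas in Python. The SBAS zenith delays are summed linearly (following RTCA DO-229's linear-sum form, an assumption about the intended formula), the mapping function \(m(\theta)\) uses the common \(1.001/\sqrt{0.002001 + \sin^{2}\theta}\) shape, and all numerical inputs are hypothetical:

```python
import math

# Sketch of the SBAS and GBAS tropospheric corrections (illustrative only).
# The mapping function below is a common choice, assumed here, not quoted
# from the standards.
def sbas_mapping(elev_rad):
    return 1.001 / math.sqrt(0.002001 + math.sin(elev_rad) ** 2)

def sbas_tropo_correction(d_hyd, d_wet, elev_rad):
    """TC = -(d_hyd + d_wet) * m(theta): zenith delays scaled to elevation."""
    return -(d_hyd + d_wet) * sbas_mapping(elev_rad)

def gbas_tropo_correction(n_r, h0, delta_h, elev_rad):
    """TC = N_R * h0 * 1e-6 / sqrt(0.002 + sin^2(theta)) * (1 - exp(-dh/h0))."""
    obliquity = 1e-6 / math.sqrt(0.002 + math.sin(elev_rad) ** 2)
    return n_r * h0 * obliquity * (1.0 - math.exp(-delta_h / h0))

# Hypothetical inputs: 2.3 m hydrostatic and 0.1 m wet zenith delay at 30 deg
# elevation; refractivity 300, 12 km scale height, 100 m altitude difference.
print(sbas_tropo_correction(2.3, 0.1, math.radians(30.0)))
print(gbas_tropo_correction(300.0, 12000.0, 100.0, math.radians(30.0)))
```

At the zenith the mapping function is close to 1, so the SBAS correction reduces to the negated sum of the two zenith delays, as expected.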
This phenomenon is known as ionospheric delay, an error source that affects the performance of GNSS, especially single frequency GNSS. Ionospheric delay is the largest error source in signal propagation, while the multipath effect is the most complex propagation phenomenon. To serve users with high accuracy requirements, it is necessary to estimate ionospheric delay accurately. The estimation of ionospheric delay is not straightforward, especially when ionospheric storms and scintillation occur. Some research has attempted to determine ionospheric delay for satellite navigation users. For single frequency GNSS users, the Klobuchar model is commonly adopted to estimate ionospheric delay using eight parameters; ionospheric delay can be reduced by approximately 60% using this method (Elliott and Christopher 2006). In the Klobuchar model, the ionospheric delay between midnight and early morning is described by a constant, while the delay during the day is described by the sum of a half-cosine function and that constant. With the increasing accuracy requirements of users, the Klobuchar model is no longer considered adequate for high accuracy applications. Yaqi Peng from the Beijing Institute of Technology established an error prediction model for the Klobuchar ionospheric delay based on the Takagi–Sugeno fuzzy neural network (TS-FNN). The TS-FNN combines neural networks with TS fuzzy theory, which provides self-learning, parallel processing, and the ability to handle uncertain information. The TS-FNN-based Klobuchar model shows good fitting ability and prediction performance for the Klobuchar ionospheric delay error. Using this model to compensate for ionospheric delay, the error can be reduced by approximately 20% relative to the error remaining after application of the Klobuchar method, which is of great significance for improving navigation positioning accuracy (Peng et al. 2019).
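The day/night structure of the Klobuchar zenith delay described above can be sketched as follows. In the real model the amplitude and period are derived from the eight broadcast alpha/beta coefficients; the fixed constants below are illustrative placeholders only:

```python
import math

# Simplified shape of the Klobuchar zenith ionospheric delay: a constant
# night-time term plus a half-cosine bulge centred on local afternoon.
# All constants are illustrative, not broadcast values.
DC = 5.0e-9          # night-time delay, seconds (~1.5 m of range)
AMPLITUDE = 2.5e-8   # peak additional daytime delay, seconds
PERIOD = 72000.0     # cosine period, seconds
PEAK_TIME = 50400.0  # 14:00 local time, seconds of day

def klobuchar_zenith_delay(local_time_s):
    x = 2.0 * math.pi * (local_time_s - PEAK_TIME) / PERIOD
    if abs(x) < math.pi / 2:           # daytime half-cosine window
        return DC + AMPLITUDE * math.cos(x)
    return DC                           # night: constant term only

print(klobuchar_zenith_delay(50400.0))  # afternoon peak
print(klobuchar_zenith_delay(0.0))      # midnight: constant term
```

The half-cosine window spans roughly 09:00 to 19:00 local time here; outside it the delay collapses to the constant night-time term, matching the model structure described in the text.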
The SBAS master stations collect the information concerning the ionospheric pierce points (IPPs) and adopt either the Kriging method (Blanch 2002, 2003) or the inverse distance weighted (IDW) method (Prasad and Sarma 2004) to determine the grid ionospheric vertical delay (GIVD) and GIVE at each IGP. The augmentation information of GIVD and GIVE is broadcast to users through the SBAS GEO satellites. For SF SBAS users, the grid ionospheric correction method (GIM) is used to provide augmentation information; the ionospheric error can be mitigated by approximately 75–80% using this method (Elliott and Christopher 2006). GIM is used by WAAS users, for whom the ionospheric augmentation information is provided at the IGPs and broadcast in messages of types 18 and 26. The IPP is determined as the intersection between the ionospheric thin shell and the imaginary line between user and satellite. The IGPs nearest the IPP are used to obtain the ionospheric augmentation information by triangular or rectangular interpolation, and the corresponding ionospheric pseudorange error is corrected using the augmentation information at the IPP (SC-159 2013). The process of GIM is shown in Fig. 11.

Fig. 11 Process of the grid ionospheric correction method for single frequency SBAS users

The correction of ionospheric delay and the integrity monitoring of the ionosphere are the main problems in the construction of SBAS. Compared with America and Europe, China spans a wide range of latitudes with complex ionospheric characteristics, which have yet to be fully analyzed. From the point of view of space weather, in addition to ionospheric scintillation (Crane 1977) and ionospheric storms (Buonsanto 1999), solar radio burst (SRB) events are also important phenomena that impact GNSS applications. SRB events tend to generate instantaneous wideband radio noise, causing degradation of GPS signals and increasing the ionospheric delay, as shown in Fig.
12, where SID denotes a sudden ionospheric disturbance. As shown in Fig. 12, large amounts of radio irradiation, high energy particles, and plasma clouds are ejected during a solar burst event, posing a potential threat to GNSS applications. SRBs radiate strong electromagnetic noise; if this radiation covers the working band of GNSS and is strong enough, it can inflict radio frequency (RF) interference-like effects on GNSS receivers (Liu et al. 2019). The effect of solar radio bursts on GNSS receivers is essentially electromagnetic interference. The solar radio burst interference is directional to the GNSS receiver, injecting interfering energy at the solar zenith angle of the receiver position (Liu et al. 2019). Possible SRB events therefore need to be predicted. Liu Dun, a research fellow from the 22nd Research Institute, CETC, developed a method that can be used to find possible SRB events, as shown in Fig. 13. Like the ionospheric amplitude scintillation index S4 that is used to measure the degree of variation in amplitude, an SRB event is characterized by an SRB Index (SRBI), the period for which the event is expected to last, and the zenith angle. The SRBI is defined as (Liu et al. 2019) $${\text{SRBI}} = \sqrt {\frac{{\frac{1}{N}\sum\nolimits_{i = 1}^{N} {\left( {\frac{C}{{N_{0} }}} \right)_{i}^{2} } - \left( {\frac{1}{N}\sum\nolimits_{i = 1}^{N} {\left( {\frac{C}{{N_{0} }}} \right)_{i} } } \right)^{2} }}{{\left( {\frac{1}{N}\sum\nolimits_{i = 1}^{N} {\left( {\frac{C}{{N_{0} }}} \right)_{i} } } \right)^{2} }}}$$ where \(\frac{C}{{N_{0} }}\) represents the de-trended carrier-to-noise ratio of the GNSS signal and N is the number of samples in the averaging period. After SRB events are discovered, analysis of the GNSS signal fading model for the SRB event is carried out. Results show that only strong SRB events that cover the L band with high intensity (above 40,000 solar flux units (SFU), where \(1\,{\text{SFU}} = 10^{ - 22} \,{\text{W}}\,{\text{m}}^{ - 2} \,{\text{Hz}}^{ - 1}\)) will seriously interfere with GNSS.
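A minimal computation of the SRBI from a window of de-trended C/N0 samples might look like the sketch below. The index is written as the normalized standard deviation using sample means, in line with the S4-style definition the text compares it to; the sample values are invented for illustration:

```python
import math

def srbi(cn0_samples):
    """SRB index: normalized standard deviation of the de-trended C/N0,
    analogous to the S4 amplitude scintillation index."""
    n = len(cn0_samples)
    mean = sum(cn0_samples) / n
    mean_sq = sum(x * x for x in cn0_samples) / n
    return math.sqrt(max(mean_sq - mean * mean, 0.0)) / mean

quiet = [45.0, 45.2, 44.9, 45.1]   # steady C/N0 (dB-Hz): index near zero
burst = [45.0, 38.0, 30.0, 41.0]   # deep, fast fading during a burst
print(srbi(quiet), srbi(burst))
```

A quiet interval yields an index close to zero, while deep, rapid C/N0 fading drives the index up, which is what makes it usable as a burst detector alongside the expected duration and zenith angle.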
A solar burst will affect GNSS receivers located in daytime areas instantaneously. There are some factors that could degrade the model performance: saturation in the fading depth, the quality of the GNSS measurements, SRB events of lower intensity, and sites at low incidence angles (Liu et al. 2019).

Fig. 12 Impacts of a solar burst (Liu et al. 2019)

Fig. 13 Method of discovering a possible SRB event

For dual frequency users, ionospheric delay can be calculated directly from the dual frequency signals, and the ionospheric error can be removed by up to 95%. Because ionospheric delay is approximately inversely proportional to the square of the carrier frequency, the pseudorange corrected for ionospheric delay can be expressed as a linear combination of the dual frequency pseudoranges (Elliott and Christopher 2006). Ionospheric delay is the focus of research in many countries at present; each country faces such problems when constructing SBAS and GBAS, or when applying GNSS to other areas. Two phenomena, namely ionospheric storms and ionospheric scintillation, affect the performance of GNSS and its augmentation systems: ionospheric scintillation can cause signal loss at the receiver, and ionospheric storms can badly affect the performance of SBAS. Recently, solar radio burst events, which can lead to ionospheric scintillation, have also been recognized as factors that can impact GNSS applications. China is still constructing its BDSBAS and civil aviation GBAS. As the ionosphere in southern China is relatively active, the ionospheric characteristics of southern China need to be studied further.

Technologies of multipath mitigation

In the field of wireless communications, the term multipath refers to the propagation of radio signals from a transmitter to a receiver through multiple paths.
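Returning to the dual frequency case above: since the first-order ionospheric delay scales as \(1/f^{2}\), the standard ionosphere-free combination of the two pseudoranges cancels it. A sketch with the GPS L1/L2 frequencies, where the range and delay values are illustrative:

```python
# Dual-frequency ionosphere-free pseudorange combination:
# P_IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2)
# cancels the first-order ionospheric delay, which scales as 1/f^2.
F1 = 1.57542e9  # Hz, GPS L1
F2 = 1.22760e9  # Hz, GPS L2

def iono_free(p1, p2, f1=F1, f2=F2):
    g1, g2 = f1 * f1, f2 * f2
    return (g1 * p1 - g2 * p2) / (g1 - g2)

rho = 22_000_000.0                     # true range, m (illustrative)
iono_l1 = 5.0                          # first-order iono delay on L1, m
iono_l2 = iono_l1 * (F1 / F2) ** 2     # the same TEC seen on L2
residual = iono_free(rho + iono_l1, rho + iono_l2) - rho
print(residual)                        # delay cancelled to float precision
```

The combination removes the first-order term exactly; the few percent of residual ionospheric error quoted for dual frequency users comes from higher-order terms and measurement noise, not from this combination itself.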
The scattering of electric waves by the atmosphere, the reflection and refraction of radio waves by the ionosphere, and the reflection of electric waves by surface objects such as mountains and buildings all cause multipath effects. GNSS provides a good outdoor positioning service; however, its service performance degrades noticeably indoors, where the multipath effect is a particular problem. To provide indoor positioning, the pseudolite is introduced. Pseudolite systems such as Locata are developed by installing transmitters above the ground which transmit signals similar to those of GPS. The receiver of the pseudolite system receives these signals, determines the distances between the transmitters and the receiver, and thereby computes the user's position; pseudolite systems work in the same manner as a local GPS. Pseudolite systems face many difficulties because of the complicated structures of many buildings and the changeable environments encountered during indoor positioning. The multipath effect is a complex signal propagation phenomenon and is always one of the main factors affecting the accuracy of pseudolite positioning (Wang 2019d). Using antenna arrays to generate strongly directional signals is an advanced multipath suppression method that can be applied to indoor positioning: the array generates high gain in the direction of arrival of the direct signal and attenuates interfering signals from other directions, thereby suppressing the multipath signal (Wang 2019d). Wang Xinyi from Southeast University conducted research on the indoor pseudolite signal propagation effect with multipath statistical modeling. A pseudolite array antenna structure and a pseudolite array indoor positioning algorithm were developed for a pseudolite signal system.
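As a generic illustration of how an array concentrates gain toward the direct signal while attenuating other directions, the array factor of a uniform linear array can be computed as below. The element count and half-wavelength spacing are arbitrary assumptions, not the antenna design of Wang (2019d):

```python
import math

# Array factor (dB, normalized to the steered peak) of an N-element uniform
# linear array with spacing d/lambda, steered to direction theta0.
def array_factor_db(theta_deg, theta0_deg, n=8, d_over_lambda=0.5):
    psi = 2.0 * math.pi * d_over_lambda * (
        math.sin(math.radians(theta_deg)) - math.sin(math.radians(theta0_deg)))
    if abs(psi) < 1e-12:
        return 0.0                     # steered direction: full gain
    # Coherent sum of n phased elements, normalized by n.
    af = abs(math.sin(n * psi / 2.0) / (n * math.sin(psi / 2.0)))
    return 20.0 * math.log10(max(af, 1e-9))

print(array_factor_db(0.0, 0.0))   # toward the steered direction: 0 dB
print(array_factor_db(40.0, 0.0))  # well off-axis: strongly attenuated
```

Signals arriving from the steered direction pass at full gain, while off-axis (multipath) arrivals fall into the side-lobe region and are suppressed, which is the mechanism the text describes.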
Results reveal that the lobes of the two array-type pseudolite antennas are denser than those of an isotropic antenna, more energy is gathered in the main lobe relative to the side lobes, and the radiation range is relatively larger. The two array antennas provide gain in the direction of arrival of a signal if the power carried by the signal is high enough, while interfering signals are attenuated by up to −40 dB, providing a solution for the remote pseudolite transmission link (Wang 2019d). The multipath effect still needs to be considered and mitigated for receiver positioning, especially in indoor applications. The multipath effect is quite complex, and it is difficult to find an appropriate model to describe multipath characteristics; the multipath effect therefore holds great potential for further research.

User segment technologies

Satellite navigation systems are established to provide positioning and navigation services for users, so user segment technologies are key to the application of satellite navigation systems. The general user can determine their location and positioning domain parameters by receiving a GNSS signal. Civil aviation users require more information or signals to improve the accuracy, integrity, continuity, and availability of GNSS in order to reach aviation standards; satellite navigation augmentation systems are constructed to provide this additional information and these signals. With the development and improvement of the various satellite navigation systems, the number of satellites in the view of the user has increased significantly, and the geometric structure of the satellites in view has improved. If all visible satellites are used for positioning, the redundant signal sources increase the signal processing burden of the receiver and make real-time positioning difficult to implement. It is therefore necessary to develop user segment technology to find the optimal performance for users.
In this section, user segment technologies are divided into two parts: ABAS user technologies and other user technologies.

ABAS user technologies

ABAS mainly includes Aircraft Autonomous Integrity Monitoring (AAIM), Receiver Autonomous Integrity Monitoring (RAIM), and Advanced Receiver Autonomous Integrity Monitoring (ARAIM). AAIM usually combines an inertial navigation unit with a barometric altimeter or radio altimeter for integrity monitoring and performance improvement. RAIM utilizes the redundant observations of the receiver for integrity monitoring. RAIM is one of the important means by which integrity can be ensured, and its fault detection is performed autonomously by the airborne receiver (Zhang et al. 2019). RAIM compares the observation of each GNSS satellite with the other available satellite observations so that faulty satellites are detected using the currently available satellites. RAIM is commonly used for auxiliary navigation in the en route and terminal areas and supports horizontal navigation in the approach phase (Wang and Yang 2019; Wang et al. 2017a). Based on multi-frequency multi-constellation navigation sources, ARAIM is an integrity monitoring method combined with an integrity support message provided by the ground monitor stations to achieve the vertical navigation performance of LPV-200; ARAIM is an upgraded version of RAIM. The initial operation of BDS-3 was announced at the State Council Information Office of China in early 2019, and BDS can now provide basic navigation services globally. As the applications of BDS increase, research into RAIM has become more popular. Fault detection and exclusion is a pivotal part of RAIM: a consistency check on the satellite observations is performed and the test statistic is compared with a detection threshold to identify specific faults.
The existing integrity monitoring algorithms do not take into account the impact of the navigation signal itself on integrity monitoring. As the baseband signal can be used to capture information during the acquisition and tracking process that is not otherwise used for integrity monitoring, the signal-to-noise ratio and the correlation peaks from acquisition and tracking are used to adjust the asymptotic covariance matrix and then construct the test statistic. The baseband signal-based algorithm for autonomous integrity monitoring shows a high fault detection rate and high sensitivity; nevertheless, its stability is lower than that of the weighted least-squares methods (Wang and Yang 2019). Most of the BDS RAIM algorithms were designed for BDS-2. These algorithms are based on false alarm and missed detection probabilities rather than integrity and continuity requirements, and hence the detection threshold needs to be updated according to the features of BDS-3 and applied to BDS-3 receivers. Usually, the performance of RAIM algorithms is closely related to the constellation geometry, the observation quality, and the number of visible satellites. Compared with BDS-2, BDS-3 shows better geometry, better code rates, and more abundant frequencies. A RAIM method in which the missed detection probability is allocated equally to each visible satellite is proposed as a solution for BDS-3 receivers. Compared with traditional methods, this RAIM method based on the equal allocation of missed detection probability carries less integrity risk and provides better protection levels and higher availability (Zhang et al. 2019). Since the GNSS signal is susceptible to high-rise building obstruction, electromagnetic interference, and geomagnetic interference, the performance of GNSS is affected by these factors. The single-epoch parity vector is adopted to construct a test statistic with which large pseudorange deviation faults can be detected.
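A toy version of such a single-epoch consistency check is sketched below: the linearized position/clock problem is solved by least squares and the sum of squared pseudorange residuals is tested against a chi-square threshold. The geometry, noise level, and threshold are synthetic, for illustration only:

```python
import numpy as np

# Toy single-epoch residual-based RAIM check. Rows of G are line-of-sight
# components plus a clock column; the residual sum of squares (SSE) grows
# sharply when one pseudorange carries a bias.
rng = np.random.default_rng(1)
n_sat = 8
G = np.hstack([rng.normal(size=(n_sat, 3)), np.ones((n_sat, 1))])
x_true = np.array([10.0, -5.0, 3.0, 1.0])
y = G @ x_true + rng.normal(scale=0.5, size=n_sat)   # noisy measurements

def raim_sse(G, y):
    x_hat, *_ = np.linalg.lstsq(G, y, rcond=None)
    r = y - G @ x_hat                 # pseudorange residual vector
    return float(r @ r)

sigma2 = 0.25
threshold = sigma2 * 18.47            # chi-square 99.9% quantile, 4 dof
y_fault = y.copy()
y_fault[0] += 30.0                    # inject a 30 m bias on one satellite
print("fault-free SSE:", raim_sse(G, y))
print("faulted SSE:", raim_sse(G, y_fault), "threshold:", threshold)
```

With eight satellites and four estimated states the residuals have four degrees of freedom, so the fault-free SSE stays near \(4\sigma^{2}\), while the injected 30 m bias pushes the statistic far past the threshold and is flagged.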
The normalized and post-normalized constructed test statistics are used to detect the fault of a slowly varying pseudorange deviation, and loop detection from the single-epoch to the multi-epoch parity vector using a sliding window can detect small stepped faults and small gradual faults. In detail, the observations from the satellites are obtained and the parity vector of each epoch is determined; the current test statistic and detection threshold are then constructed; finally, the test statistics and detection thresholds are compared to detect faults. The cumulative improvement algorithm of the parity vector sliding window detects the faults of isolated satellites in a timely manner and guarantees the accuracy and reliability of satellite navigation systems (Li et al. 2019b). As BDS provides services for global users, some research has been carried out on RAIM to improve the performance of user domain positioning. RAIM can be applied to fault detection during different phases of flight and provides real-time integrity monitoring of GNSS. On this basis, some work has been conducted to apply RAIM to BDS-3 receivers; BDS-3 receiver autonomous integrity monitoring is a popular topic at present. More research on AAIM and ARAIM with the BDS-3 navigation system is expected to be carried out in the near future.

Other user technologies

With the globalization of the BDS-3 satellite system and the construction of BDSBAS, research has also been carried out investigating other user technologies. The processing center of GBAS has adopted the technology of Precise Point Positioning (PPP) ambiguity resolution (AR) to deduce the carrier phase integer ambiguities of the virtual reference station (VRS) (Xu et al. 2019). Subsequently, the carrier phase without ambiguities in the VRS is broadcast to users. Based on this, a VRS-based rapid PPP integer ambiguity resolution is assisted by RTK technology to determine the integer ambiguity.
By performing single difference RTK between the rover station and the VRS, the single difference ambiguities between the two can be solved. Considering that the VRS carrier phase observations do not include phase ambiguities, the single difference ambiguities can be taken as the non-difference phase ambiguities of the rover station. The single difference ambiguities can then be fixed as in RTK, and PPP AR can be performed at the rover station. The performance of PPP is improved, with better convergence and accuracy (Xu et al. 2019). As a promising field, unmanned aerial vehicle (UAV) delivery presents the advantages of low cost and high flexibility and has thus been adopted by logistics companies. The key issues for UAV delivery include accuracy, safety, and efficiency. However, the restrictions and regulations of low-altitude navigation for small aircraft such as drones have yet to be determined or released. A UAV navigation monitoring requirements (UNMR) framework based on required navigation performance (RNP) has been designed for accuracy and integrity monitoring in UAV delivery (Cheng 2019). Taking into account navigation system errors (NSEs), flight technical errors (FTEs), and path definition errors (PDEs), the total system errors (TSEs) of UNMR have been defined (Cheng 2019). To optimize the UNMR framework, it is essential to model all the aforementioned errors and associate them with their contributing factors in order to account for accuracy and reliability. A reliable en-route width and vertical separation design can be determined based on the refined framework. A TSE model has been designed for the UAV en-route phase, and the impact of the wind model and the operating performance on the UAV FTE has been analyzed. A sensitivity analysis of the horizontal and vertical TSE has been carried out with various wind models and different flight scenarios.
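Under the common RNP assumption that NSE, FTE, and PDE are independent and zero-mean, their 95% values combine by root-sum-square. A one-line sketch of that combination (the error magnitudes are illustrative, not values from the UNMR framework):

```python
import math

# Total system error (TSE) as the root-sum-square of navigation system error
# (NSE), flight technical error (FTE), and path definition error (PDE),
# assuming the three are independent and zero-mean.
def total_system_error(nse, fte, pde):
    return math.sqrt(nse ** 2 + fte ** 2 + pde ** 2)

# Hypothetical 95% values in metres:
print(total_system_error(1.5, 2.0, 0.5))
```

The root-sum-square form is what makes the sensitivity analysis meaningful: a wind model that inflates the FTE term dominates the TSE once it exceeds the other two components.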
The TSE model will benefit the design of UAV flight routes and the formulation of the operating manual (Cheng 2019). With GNSS widely applied in many fields, it is necessary to analyze the instantaneous availability of GNSS in a local area. Existing availability research is mainly focused on long-term fault monitoring and the availability of the GNSS constellation, paying little attention to availability in local instantaneous situations. Gao Xi from Beihang University extended availability analysis from the global navigation signal to the local area navigation signal, and the regional instantaneous availability of satellite navigation systems was analyzed and evaluated (Gao et al. 2019; Wang et al. 2017b). The signal availability under the impact of power anomalies and correlation peak distortions is presented. On this basis, the number of visible satellites and the position dilution of precision (PDOP) are analyzed for a service area. The theory of local instantaneous availability evaluation that was developed is of great significance to the availability evaluation of GNSS (Gao et al. 2019; Wang et al. 2017b). As GNSS is used in all kinds of applications, the navigation metrics of accuracy, availability, continuity, and integrity are closely scrutinized. When GNSS is used for autonomous applications in planes, trains, and automobiles, some problems need to be considered. Automation means that a high level of trust in GPS/GNSS systems is critical for automobiles and railways; with such trust, GNSS can also aid navigation at the lane level. However, multipath and radio frequency interference (RFI) pose major challenges, and the major challenge in using GNSS safely is integrity. A 10 m integrity bound is achievable with a system similar to that used in aviation, and PPP may be good enough for the 1 m level.
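The PDOP used in the availability analysis above can be computed directly from the user–satellite geometry matrix; a sketch with invented line-of-sight directions, contrasting a well-spread constellation with a clustered one:

```python
import numpy as np

# PDOP from unit line-of-sight vectors: augment with a clock column, invert
# the normal matrix, and take the root-sum of the position variances.
def pdop(unit_los):
    G = np.hstack([unit_los, np.ones((unit_los.shape[0], 1))])
    Q = np.linalg.inv(G.T @ G)
    return float(np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2]))

# Four well-spread directions (one zenith, three low and evenly spaced):
spread = np.array([[0.0, 0.0, 1.0],
                   [0.9, 0.0, 0.44],
                   [-0.45, 0.78, 0.44],
                   [-0.45, -0.78, 0.44]])
# Four directions crowded into one region of the sky:
clustered = np.array([[0.5, 0.5, 0.707],
                      [0.6, 0.4, 0.693],
                      [0.4, 0.6, 0.693],
                      [0.5, 0.6, 0.624]])
print(pdop(spread), pdop(clustered))
```

The spread geometry yields a small PDOP, while the clustered one inflates it sharply, which is why a regional instantaneous availability evaluation tracks both the number of visible satellites and their geometry.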
High integrity GNSS for use with trains and automobiles is an upcoming requirement, and the major challenges to satisfying this goal are radio frequency interference and multipath effects (Sher 2019). The flow of air traffic has increased dramatically with the rapid development of civil aviation, and flight safety has become increasingly important. As a solution, Automatic Dependent Surveillance-Broadcast (ADS-B) is proposed to guarantee flight safety. ADS-B can provide air–air and ground–air surveillance to improve flight safety and is thus widely used in civil aviation transportation and flight training. ADS-B will become the mainstream monitoring technology in the future according to plans by the CAAC. Currently, GNSS navigation data, whose accuracy and reliability need to be improved, is indispensable to ADS-B. To provide the navigation performance required for ADS-B, CORS differential ground-based augmentation positioning based on BDS-3 can be adopted to produce accurate position data. The new ADS-B system can perform stably and reliably with accuracy at the decimeter or even centimeter level, which can support the performance optimization and airworthiness industrialization of the current ADS-B. BDS is currently applied to general aviation navigation and surveillance and provides navigational services for many airports and airlines (Yang 2019). General aviation refers to all aviation activities other than military aviation and public air transportation. Those activities include operations in industry, agriculture, forestry, fisheries, construction, disaster relief, meteorological detection, marine monitoring, scientific experiments, sports, education and training, medical and health care, cultural flight activities, etc. (Wang 2019b). With the support of the government, general aviation is ushering in new opportunities.
General aviation navigation and surveillance technology has become a popular topic from the following perspectives (Wang 2019b): (1) general aviation aircraft navigation and surveillance systems based on multi-network convergence; (2) distributed network data sharing technology for general aviation aircraft navigation surveillance; (3) BeiDou short message information coding and compression technology; (4) multi-network fusion portable airborne navigation surveillance integrated devices; and (5) BeiDou general aviation navigation surveillance applications. A number of technical problems in the navigation surveillance of low-altitude airspace vehicles that had restricted the development of navigation technology have been solved. BeiDou general aviation navigation surveillance can provide communication, navigation, and surveillance with the multi-network integration of satellites, airspace, and land, and provides remarkable economic and social benefits. The engineering applications of BeiDou general aviation navigation surveillance cover flight monitoring, flight testing, plant protection with UAVs, and the measurement of the seawater temperature at nuclear power plants with UAVs (Wang 2019b). It is a national strategy to develop an integrated positioning, navigation, and timing (PNT) system based on BDS which can provide service all over China. However, in challenging environments such as high-rise urban areas, indoor spaces, 3D transportation, shopping malls, and underground developments, the precise positioning requirements cannot be met. To address this problem, cooperative real-time positioning (CRP) technology is proposed to provide intelligent navigation and positioning for public applications.
CRP is a technology based on BDS and multiple positioning resources that obtains high precision positioning either separately or jointly to provide accurate, rapid, and ubiquitous location-based services to mass users through augmentation technologies such as information augmentation, signal augmentation, multiple-source fusion, and location sharing (Lou 2019). CRP is focused on multi-source collaboration, station-network collaboration, cloud-end collaboration, and end–end collaboration. To build a cooperative real-time positioning service platform, a database of high precision beacons and fingerprints has been built in the main areas of more than 200 large-scale shopping malls and transportation hubs (such as airports and railway stations) in the major cities of China. This platform responds to 50 billion collaborative positioning requests each day. Three systems are utilized: the GNSS real-time high precision cloud processing system, the A-BDS/GNSS location service system, and the multi-source location data collaborative processing system (Deng et al. 2018; Lou 2019). Integrated PNT systems are being developed based on BDS, and collaborative precision positioning based on the cloud platform has made great progress (Deng et al. 2018) and been applied to many fields, such as public services, intelligent transportation, and UAVs (Deng et al. 2018). The application of BDS/GNSS has been explored from many different perspectives to provide solutions for the application of the BDS/GNSS augmentation system. As the globalization of BDS continues, BDS/GNSS augmentation technologies have been applied to many areas, and scholars and companies are bringing BDS into everyday life. BDS and BDSBAS can be used in aerospace, maritime affairs, transportation, and agriculture, among other fields. BDS/GNSS technologies, together with their augmentation technologies, therefore constitute a promising industry.
Conclusions and discussions

Satellite navigation augmentation technologies are an important part of satellite navigation systems and a popular topic for research in the field of satellite navigation technology. This paper comprehensively discusses satellite navigation augmentation technologies and shares the latest academic achievements in these fields. It aims to promote cooperation among scholars at home and abroad, helping researchers understand the progress and requirements of satellite augmentation technology from different perspectives. In detail, system segment technologies, satellite segment technologies, propagation segment technologies, and user segment technologies are described to introduce the theory and application of BDS/GNSS augmentation technologies. Each section gives a short summary of the corresponding technology. The theory and application of satellite navigation augmentation technology involves many aspects of GNSS, not only including the construction of BDSBAS and NBGAS, but also covering the application of those systems. Through these technologies, the status and trends of the development of BDS/GNSS augmentation technologies can be summarized as follows.

Popular topics for research

This paper focuses on the technology and applications of BDS/GNSS augmentation technologies, covering satellite-based augmentation systems, LEO augmentation systems, ground-based augmentation systems, ionospheric technologies, aviation applications and UAV applications for GNSS, satellite autonomous integrity monitoring and integrity onboard testing, and user integrity technologies. These include:

The construction, testing, and verification of the BeiDou satellite-based augmentation system and the Algerian AL-SBAS. The BeiDou satellite-based augmentation system and the Algerian AL-SBAS were developed to provide a safe service for civil aviation.
It is necessary to promote the construction, testing, and verification of these SBASs and apply them to civil aircraft before receivers of other SBASs dominate the Chinese civil aircraft market.

The national BeiDou ground-based augmentation system and its application. The national BeiDou ground-based augmentation system is an important part of the BeiDou satellite navigation system and was developed to serve many kinds of fields in China. Following the system test of the national BeiDou ground-based augmentation system in 2018, it will be able to provide positioning services at the meter level and even the decimeter level. The national BeiDou ground-based augmentation system and its application are highly significant for engineering applications.

GNSS receiver autonomous integrity monitoring. China is promoting its BDS-3 to serve many kinds of industries. Receiver autonomous integrity monitoring for BDS-3 is proposed for use in civil aviation. It is a pivotal step toward providing navigation information to Chinese aircraft with BDS-3. GNSS receiver autonomous integrity monitoring is currently a popular research topic in China.

Research and development frontiers

There are two main research and development frontiers. First, collaborative precision positioning based on the cloud platform has made great progress and been applied to many fields, such as intelligent transportation and unmanned aerial vehicles. With four collaborations (multi-source, station–network, cloud–end, and end–end), collaborative precision positioning plays a key role in both national strategies and people's livelihoods. Combined with public location-based services, intelligent transportation, new artificial intelligence, and the next-generation communication network, collaborative precision positioning has become the academic frontier for positioning, navigation, and timing. On the other hand, LEO augmentation has received much attention.
With the advantages of high signal strength, a short delay between satellite and user, and fast satellite geometry variation, LEO satellites are used to provide more signals in service areas. The design and distribution of LEO constellations, navigation signal analysis, and phase center variation calibration have come into focus for researchers. This paper mainly discusses the key technologies used in satellite navigation augmentation systems in China. There have been many achievements in BDS, which mainly include the following:

The BeiDou satellite-based augmentation system and the Algerian AL-SBAS are the latest achievements of the Chinese satellite-based augmentation system, for which preliminary test results have been provided, revealing the great progress made by Chinese researchers in overcoming the difficulties in the development of satellite systems.

The monitor stations of the national BeiDou ground-based augmentation system have been distributed all over China, and the construction of the national BeiDou ground-based augmentation system is basically finished. The system can provide preliminary experimental results for most of China. According to the results of the national BeiDou ground-based augmentation system for real-time positioning accuracy at the meter level, decimeter level, and centimeter level, and post-processing accuracy at the millimeter level, the service capability of the national BeiDou ground-based augmentation system satisfies, and is even superior to, the design indicators.

The civil aviation ground-based augmentation system has completed both flight inspections and testing, and the certification scheme for the civil aviation ground-based augmentation system has been issued, which indicates the rapid development of the Chinese civil aviation ground-based augmentation system.
The equipment Lgf-1a of the civil aviation ground-based augmentation system is capable of providing aircraft with precision approach, automatic landing, and taxi guidance services that meet the requirements of the current regulatory specifications. The development of satellite navigation augmentation technologies in China is still in its early stages. Much attention has centered on the design of augmentation systems, integrity monitoring and alarms, anti-spoofing solutions, and the improvement of receiver performance. China has to promote the construction and application of the BDS augmentation systems in order to reach the progress level of America and Europe. Overall, BDS, BDSBAS, and NBGAS are approaching the state of full operation. With the continuous improvement seen in the performance of BDS, the system can be quickly applied to high-end users such as civil aviation, promoting the globalization of BDS.

Availability of data and material

First, in the first paragraph of this article, the sentence "In recent years, China has been actively promoting the construction and development of the BeiDou Navigation Satellite System (BDS), and by the end of the year 2000 the construction of BDS-1 was complete and BDS-1 began to provide GPS services for China." should be changed into "In recent years, China has been actively promoting the construction and development of the BeiDou Navigation Satellite System (BDS), and by the end of the year 2000 the construction of BDS-1 was completed and BDS-1 began to provide services for China." The word "GPS" should be removed.

References

Anonymous. (2006). ICAO SARPS annex 10: International standards and recommended practices, aeronautical telecommunications. Montreal: International Civil Aviation Organization.
Blanch, J. (2002). An ionosphere estimation algorithm for WAAS based on kriging (pp. 1–5). Oregon: ION GPS.
Blanch, J. (2003). Using kriging to bound satellite ranging errors due to the ionosphere (pp. 1–80).
Stanford: Stanford University.
Buonsanto, M. (1999). Ionospheric storms: A review. Space Science Reviews, 88(3–4), 563–601.
Cai, Y. (2019). Positioning precision test of national BeiDou ground-based augmentation system. In The 10th China Satellite Navigation Conference, Beijing (pp. 1–58).
Chen, J. (2019). Design and preliminary analysis of BDS satellite-based augmentation system. In The 10th China Satellite Navigation Conference, Beijing (pp. 1–22).
Chen, J., Huang, Z., & Li, R. (2017). Computation of satellite clock-ephemeris corrections using a priori knowledge for satellite-based augmentation system. GPS Solutions, 21(2), 663–673.
Chen, S., Jin, B., Li, D., et al. (2019). Study on the prediction method of single and dual frequency service area for BDSBAS. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 228–237).
Cheng, Q. (2019). Sensitivity analysis of the RNP total system error for the UAV delivery. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 1–25).
Crane, R. (1977). Ionospheric scintillation. Proceedings of the IEEE, 65(2), 180–199.
Deng, H., Arif, U., Fu, Q., et al. (2018). Visual-inertial estimation of velocity for multicopters based on vision motion constraint. Robotics and Autonomous Systems, 107, 262–279.
Elliott, D., & Christopher, J. (2006). Understanding GPS principles and applications (2nd ed., pp. 1–234). Norwood, MA: Artech House Inc.
Gao, X., Sun, C., Zhao, H., et al. (2019). Research on regional instantaneous availability evaluation method of satellite navigation system. In China Satellite Navigation Conference, Springer, Singapore (pp. 286–297).
Ge, H., Li, B., Ge, M., et al. (2018). Initial assessment of precise point positioning with LEO enhanced global navigation satellite systems (LeGNSS). Remote Sensing, 10(7), 1–7.
Geng, Y. (2019). R&D of ground-based augmentation system (GBAS). In The 10th China Satellite Navigation Conference, Beijing (pp. 1–38).
Guo, S., Liu, C., Gao, W., et al.
(2019). Construction and development of satellite navigation augmentation systems. GNSS World of China, 44(2), 1–12.
Lei, W., Liu, W., Han, H., et al. (2019). LEO navigation augmentation: Satellite antenna PCV requirement for precise positioning. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 187–195).
Li, D. (2019). Performance test and assessment of augmentation system based on ALCOMSAT-1. In The 10th China Satellite Navigation Conference, Beijing (pp. 1–45).
Li, B., Ge, H., Ge, M., et al. (2019a). LEO enhanced global navigation satellite system (LeGNSS) for real-time precise positioning services. Advances in Space Research, 63(1), 73–93.
Li, C., Wang, R., Liu, J., et al. (2019b). Research on fault detection algorithm based on cumulative improvement of parity vector sliding window. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 275–285).
Li, R., Zheng, S., Wang, Z., et al. (2018). Evaluation of GNSS signal-in-space continuity: A Weibull-distribution-based method. Chinese Journal of Electronics, 27(3), 634–640.
Liu, T. (2019). Status of BDSBAS space segment. In The 10th China Satellite Navigation Conference, Beijing (pp. 1–31).
Liu, D., Han, C., Jin, R., et al. (2019). Analysis and modeling on interference of solar radio burst on GNSS signal. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 196–206).
Lou, Y. (2019). Research progress in collaborative precision positioning technology. In The 10th China Satellite Navigation Conference, Beijing (pp. 1–24).
Ma, F. (2018). Research on the key technologies for GNSS precise positioning augmented with LEO constellation (pp. 1–102). Wuhan: Wuhan University.
Parkinson, B., Enge, P., Axelrad, P., et al. (1996). Global positioning system: Theory and applications (Vol. II, pp. 1–500). Reston, VA: American Institute of Aeronautics and Astronautics.
Peng, Y., Xu, C., Niu, F., et al. (2019).
Error prediction model of Klobuchar ionospheric delay based on TS fuzzy neural network. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 266–274).
Prasad, N., & Sarma, A. (2004). Ionospheric time delay estimation using IDW grid model for GAGAN. Indian Geophysical Union, 8(4), 319–327.
Reid, T., Neish, A., Walter, T., et al. (2016). Leveraging commercial broadband LEO constellations for navigation. In Proceedings of the ION GNSS (pp. 2300–2314).
SC-159. (2004). Minimum aviation system performance standards for the local area augmentation system (LAAS). RTCA DO-245A (pp. 1–210).
SC-159. (2013). Minimum operational performance standards for global positioning system/satellite-based augmentation system airborne equipment. RTCA DO-229E (pp. 1–401).
Shao, B. (2019). The calculation method of SBAS dual-frequency range error (DFRE) integrity parameter. In The 10th China Satellite Navigation Conference, Beijing (pp. 1–8).
Shao, B., Liu, J., Huang, Z., et al. (2011). A user differential range error calculating algorithm based on analytic method. Chinese Journal of Aeronautics, 24(6), 762–767.
Shao, B., Liu, J., Huang, Z., et al. (2012). A particle swarm optimization based algorithm for the calculation of user differential range error. Chinese Journal of Electronics, 21(1), 64–68.
Shen, N., Chen, L., Liu, J., et al. (2019). A review of global navigation satellite system (GNSS)-based dynamic monitoring technologies for structural health monitoring. Remote Sensing, 11(9), 1001–10049.
Sher, M. (2019). Using GNSS for autonomous applications: From planes to trains & automobiles. In The 10th China Satellite Navigation Conference, Beijing (pp. 1–32).
Tsai, Y. (1999). Wide area differential operation of the global positioning system: Ephemeris and clock algorithms (pp. 1–101). California: Department of Mechanical Engineering, Stanford University.
Wang, B. (2019a). Influence of GEO satellite orbit error fluctuation correction on BDSWADS zone correction.
In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 1–6).
Wang, E. (2019b). BeiDou general aviation navigation and surveillance application. In The 10th China Satellite Navigation Conference, Beijing (pp. 1–22).
Wang, J. (2019c). Analysis of space-borne GPS real-time orbit determination based on real-time augmentation products. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 1–6).
Wang, X. (2019d). Research on indoor pseudolite signal propagation effect and multipath statistical modeling. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 1–26).
Wang, Y. (2019e). Real time integrity monitoring for wide area precise positioning system. In China Satellite Navigation Conference, Springer, Singapore (pp. 1–6).
Wang, L., Chen, R. Z., Li, D., et al. (2018a). Quality assessment of the LEO navigation augmentation signals from Luojia-1A satellite. Journal of Wuhan University, 43(12), 1–4.
Wang, L., Chen, R., Li, D., et al. (2018b). Initial assessment of the LEO based navigation signal augmentation system from Luojia-1A satellite. Sensors, 18(11), 1–10.
Wang, L., Chen, R., Xu, B., et al. (2019). The challenges of LEO based navigation augmentation system: Lessons learned from Luojia-1A satellite. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 298–310).
Wang, E., Jia, C., Tong, G., et al. (2017a). Fault detection and isolation in GPS receiver autonomous integrity monitoring based on chaos particle swarm optimization-particle filter algorithm. Advances in Space Research, 61, 1260–1272.
Wang, E., & Yang, D. (2019). Research on receiver autonomous integrity monitoring technology based on GNSS baseband signal. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 207–218).
Wang, E., Zhang, Q., Tong, G., et al. (2017b). Monitoring and evaluation algorithm of GNSS signal in space availability. Advances in Space Research, 59, 786–793.
Xu, J., Jiang, Z., & Chen, L. (2019). Rapid PPP integer ambiguity resolution assisted with RTK technology based on VRS. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 219–227).
Yang, Y. (2019). Design and implementation of ADS-B system based on BDS III ground-based augmentation technique. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 1–8).
Zhang, J., Li, L., Yang, F., et al. (2019). Improvement for BDS-3 receiver autonomous integrity monitoring with missed detection probability equal allocation. In The 10th China Satellite Navigation Conference, Springer, Singapore (pp. 238–253).

This research is supported by the National Natural Science Foundation of China (No. 61571309), the Talent Project of Revitalization Liaoning (No. XLYC1907022), the Natural Science Foundation of Liaoning Province (No. 2019-MS-251), and the High-Level Innovation Talent Project of Shenyang (No. RC190030).

School of Electronic and Information Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, China: Rui Li & Shuaiyong Zheng
Suzhou Institute, Beihang University, Suzhou, 215200, China: Rui Li
College of Electronic and Information Engineering, Shenyang Aerospace University, No. 37 Daoyi South Street, Daoyi District, Shenyang, China: Ershen Wang
Beijing Satellite Navigation Center, No. 22 Beiqing Road, Haidian District, Beijing, China: Jinping Chen
Qianxun Spatial Intelligence Inc., C5 Building, No. 1688-38 North Guoquan Road, Yangpu District, Shanghai, China: Shaojun Feng
Space Star Technology Co., Ltd., Blue Lake Industry Park, Haidian District, Beijing, China: Dun Wang
John Deere - Intelligent Solution Group, 20780 Madrona Avenue, Torrance, CA, USA: Liwen Dai

Conceptualization: RL, JC, SF, DW and LD; Methodology: RL and EW; Writing original draft: SZ and EW; Editing: EW, SF and RL; Review: RL, JC, SF, DW and LD. All authors read and approved the final manuscript.

Correspondence to Ershen Wang.
Li, R., Zheng, S., Wang, E. et al. Advances in BeiDou Navigation Satellite System (BDS) and satellite navigation augmentation technologies. Satell Navig 1, 12 (2020). https://doi.org/10.1186/s43020-020-00010-2

Keywords: BeiDou Navigation Satellite System; Satellite navigation augmentation systems
Ruth I. Michler

Ruth I. Michler (March 8, 1967 – November 1, 2000)[1][2][3][4] was an American-born mathematician of German descent who lived and worked in the United States. She earned her Ph.D. in Mathematics from the University of California, Berkeley,[5] and she was a tenured associate professor at the University of North Texas. She died at the age of 33 while visiting Northeastern University, after which at least three memorial conferences were held in her honor, and the Ruth I. Michler Memorial Prize was established in her memory.

Born: March 8, 1967, Ithaca, New York
Died: November 1, 2000
Nationality: American
Alma mater: University of California, Berkeley
Fields: commutative algebra, algebraic geometry
Institutions: University of North Texas
Doctoral advisors: Arthur Ogus, Mariusz Wodzicki

Early years

Michler was the daughter of German mathematician Gerhard O. Michler and was born in Ithaca, New York, while her family was visiting Cornell University from Germany.[1] She grew up in Germany, living in Tübingen, Giessen, and Essen.[2] She completed her undergraduate studies in 1988 at the University of Oxford, graduating summa cum laude.[3][6]

Doctoral studies and research

Michler earned her Ph.D. in Mathematics in 1993 from the University of California, Berkeley. Her dissertation is titled "Hodge components of cyclic homology of affine hypersurfaces."[5][7] Her advisors were Mariusz Wodzicki and Arthur Ogus. She spent the academic year 1993–1994 as a postdoc at Queen's University, working with Leslie Roberts. In 1994, she joined the tenure-track faculty at the University of North Texas, where she earned tenure in 2000.
She was the author of eleven research articles in commutative algebra and algebraic geometry.[8][9] She organized several special sessions at meetings of the American Mathematical Society.[10][11][12] The session in San Antonio resulted in a conference proceedings volume which Michler co-edited.[13] In 2000 she was awarded a National Science Foundation POWRE grant to visit Northeastern University.[14]

Memorial conferences and prize

Michler was killed in an accident in Boston on November 1, 2000, when she was struck by a construction vehicle while riding her bicycle.[4][15][6] Several conferences were organized in her honor.[16][17] Two conferences resulted in a volume of papers dedicated to her memory[18][19] which includes a dedicatory article[20] and an article describing her research.[21] In 2007 the Association for Women in Mathematics inaugurated the Ruth I. Michler Memorial Prize, which is "awarded annually to a woman recently promoted to Associate Professor or an equivalent position in the mathematical sciences".[22]

References

1. "Ruth Michler biography". www-history.mcs.st-andrews.ac.uk. Retrieved 2019-01-26.
2. "Cornell Math - About Ruth Michler". pi.math.cornell.edu. Retrieved 2019-01-26.
3. "Commemorating Dr. Ruth Michler". web.northeastern.edu. Retrieved 2019-01-26.
4. "The Valuation Theory Home Page: Very Sad News. Includes memorial articles from Boston Globe, Boston Herald, and Texas Star". math.usask.ca. Retrieved 2019-01-26.
5. "Ruth Michler - The Mathematics Genealogy Project". genealogy.math.ndsu.nodak.edu. Retrieved 2019-01-26.
6. "Association for Women in Mathematics Newsletter Jan-Feb 2001, In Memoriam". www.drivehq.com. Retrieved 2019-01-26.
7. "Math Reviews review for 'Hodge-components of cyclic homology of singular affine hypersurfaces.'". MR 2690218.
8. "Math Reviews author page: Michler, Ruth". mathscinet.ams.org. Retrieved 2019-01-26.
9. "Dr. Ruth Michler: Recent Publications".
web.northeastern.edu. Retrieved 2019-01-26.
10. "American Mathematical Society meeting special session Washington, DC". jointmathematicsmeetings.org. Retrieved 2019-01-26.
11. "American Mathematical Society meeting special session San Antonio". jointmathematicsmeetings.org. Retrieved 2019-01-26.
12. "American Mathematical Society meeting special session San Francisco". www.ams.org. Retrieved 2019-01-26.
13. "Math Review: edited volume, 'Singularities in algebraic and analytic geometry'". MR 1792143.
14. "NSF Award Search: Award#0075057 - POWRE: Differentials, Singularities and Applications". www.nsf.gov. Retrieved 2019-01-26.
15. "Northeastern University, Department of Mathematics 'Tragic Accident'". mathserver.neu.edu. Retrieved 2019-01-26.
16. "Conferences Commemorating Dr. Ruth Michler". web.northeastern.edu. Retrieved 2019-01-26.
17. "AWM at JMM 2011". Association for Women in Mathematics (AWM). Retrieved 2019-01-26.
18. "Math Review 'Topics in algebraic and noncommutative geometry'". MR 2017395.
19. "Topics in Algebraic and Noncommutative Geometry: Proceedings in Memory of Ruth Michler". bookstore.ams.org. Retrieved 2019-01-26.
20. "Math Review 'Dedication [to Dr. Ruth Ingrid Michler]'". MR 2017396.
21. "Math Review 'Dr. Ruth I. Michler's research'". MR 1986110.
22. "Ruth I. Michler Prize". Association for Women in Mathematics (AWM). Retrieved 2019-01-26.
viXra.org > Geometry Abstracts

[370] viXra:1906.0404 [pdf] submitted on 2019-06-20 19:17:33
Via Geometric Algebra: Direction and Distance Between Two Points on a Spherical Earth
Authors: James A. Smith
Comments: 17 Pages.
As a high-school-level example of solving a problem via Geometric (Clifford) Algebra, we show how to calculate the distance and direction between two points on Earth, given the locations' latitudes and longitudes. We validate the results by comparing them to those obtained from online calculators. This example invites a discussion of the benefits of teaching spherical trigonometry (the usual way of solving such problems) at the high-school level versus teaching how to use Geometric Algebra for the same purpose.
Category: Geometry

A Trigonometric Proof of Oppenheim's and Pedoe Inequality
Authors: Israel Meireles Chrisostomo
Comments: 7 Pages.
This problem first appeared in the American Mathematical Monthly in 1965, proposed by Sir Alexander Oppenheim.
As a matter of curiosity, the American Mathematical Monthly is the most widely read mathematics journal in the world. Oppenheim, for his part, was a brilliant mathematician who, for the excellence of his work in mathematics, obtained the title of "Sir", given by the English to citizens who stand out on the national and international scene. Oppenheim is better known in the academic world for his contribution to the field of Number Theory, known as the Oppenheim Conjecture.

A Trigonometric Proof of Oppenheim's Inequality

The SUSY Non-Commutativ Geometry
Authors: Antoine Balan
Comments: 1 page, written in English
We define the notion of SUSY non-commutative geometry as a supersymmetric theory of quantum spaces.

Relation Between Mean Proportionals of Parts and the Whole of a Line Segment
Authors: Radhakrishnamurty Padyala
Comments: 4 Pages.
Galileo derived a result for the relation between the two mean proportionals of the parts and the whole of a given line segment. He derived it for the internal division of the line segment. We derive in this note a corresponding result for the external division of a given line segment.

About One Geometric Variation Problem
Authors: Emanuels Grinbergs
Comments: 13 Pages. Translated from Latvian by Dainis Zeps
Translation of the article of Emanuels Grinbergs, ОБ ОДНОЙ ГЕОМЕТРИЧЕСКОЙ ВАРИАЦИОННОЙ ЗАДАЧЕ, published in Russian in LVU Zinātniskie darbi, 1958, volume XX, issue 3, pp. 153–164: https://dspace.lu.lv/dspace/handle/7/46617.

From Pythagorean Theorem to Cosine Theorem
Authors: Jesús Álvarez Lobo
An easy and natural demonstration of the cosine theorem, based on the extension of the Pythagorean theorem.

Solution of a Vector-Triangle Problem Via Geometric (Clifford) Algebra
As a high-school-level application of Geometric Algebra (GA), we show how to solve a simple vector-triangle problem. Our method highlights the use of outer products and inverses of bivectors.
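The Smith abstract above computes the distance and direction between two points on a spherical Earth and validates the results against online calculators. For comparison, here is a minimal sketch of the standard spherical-trigonometry route (haversine distance plus initial bearing) that such results are usually checked against; the mean Earth radius of 6371 km and the function name are illustrative assumptions, not taken from the paper:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance (km) and initial bearing (degrees clockwise
    from north) between two (latitude, longitude) points on a sphere."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Haversine formula for the central angle, then scale by the radius.
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    d = 2 * radius_km * math.asin(math.sqrt(a))
    # Forward azimuth (initial bearing) at the first point.
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return d, bearing

d, b = distance_and_bearing(51.5074, -0.1278, 48.8566, 2.3522)  # London -> Paris
```

For London to Paris this gives roughly 344 km at an initial bearing of about 148° clockwise from north, consistent with common online great-circle calculators.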
The Flow of Chern
We propose a flow over a Kaehler manifold, called the Chern flow.

Three, Four and N-Dimensional Swastikas & their Projections
Authors: Sascha Vongehr
Comments: 10 pages, Six Figures, Keywords: Higher Dimensional Geometry; Hyper Swastika; Reclaiming of Symbols; Didactic Arts
Difficulties with generalizing the swastika shape for N-dimensional spaces are discussed. While distilling the crucial general characteristics, such as whether the number of arms is 2^N or 2N, a three-dimensional (3D) swastika is introduced, followed by a construction algorithm for any natural number N that reproduces the 1D, 2D, and 3D shapes. The 4D hyper swastika and surfaces in its hypercube envelope are then presented for the first time.

Via Geometric (Clifford) Algebra: Equation for Line of Intersection of Two Planes
As a high-school-level example of solving a problem via Geometric Algebra (GA), we show how to derive an equation for the line of intersection between two given planes. The solution method that we use emphasizes GA's capabilities for expressing and manipulating projections and rotations of vectors.

Foundations of Conic Conformal Geometric Algebra and Simplified Versors for Rotation, Translation and Scaling
Authors: Eckhard Hitzer, Stephen J. Sangwine
Comments: 15 Pages. Submitted to Topical Collection of Adv. in Appl. Clifford Algebras, for Proceedings of FTHD 2018, 21 Feb. 2019, 1 table, 1 figure.
This paper explains in algebraic detail how two-dimensional conics can be defined by the outer products of conformal geometric algebra (CGA) points in higher dimensions. These multivector expressions code all types of conics in arbitrary scale, location and orientation. Conformal geometric algebra of two-dimensional Euclidean geometry is fully embedded as an algebraic subset.
With small model-preserving modifications, it is possible to consistently define in conic CGA versors for rotation, translation and scaling, similar to [https://doi.org/10.1007/s00006-018-0879-2], but simpler, especially for translations.
Keywords: Clifford algebra, conformal geometric algebra, conics, versors.
Mathematics Subject Classification (2010): Primary 15A66; Secondary 11E88, 15A15, 15A09.

Cubic Curves and Cubic Surfaces from Contact Points in Conformal Geometric Algebra
Authors: Eckhard Hitzer, Dietmar Hildenbrand
Comments: 11 Pages. Accepted for M. Gavrilova et al (eds.), Proceedings of Workshop ENGAGE 2019 at CGI 2019 with Springer LNCS, April 2019, 1 table.
This work explains how to extend standard conformal geometric algebra of the Euclidean plane in a novel way to describe cubic curves in the Euclidean plane from nine contact points or from the ten coefficients of their implicit equations. As algebraic framework serves the Clifford algebra Cl(9,7) over the real sixteen-dimensional vector space R^{9,7}. These cubic curves can be intersected using the outer product based meet operation of geometric algebra. An analogous approach is explained for the description and operation with cubic surfaces in three Euclidean dimensions, using as framework Cl(19,16).
Keywords: Clifford algebra, conformal geometric algebra, cubic curves, cubic surfaces, intersections

The Flow of Hermite-Ricci
Comments: 1 page, written in French
We define a flow for Hermitian manifolds. We call it the Hermite-Ricci flow.

Expanding Polynomials with Regular Polygons
Authors: Timothy W. Jones
Expanding the root form of a polynomial for large numbers of roots can be complicated. Such polynomials can be used to prove the irrationality of powers of pi, so a technique for arriving at expanded forms is needed. We show here how roots of polynomials can generate regular polygons whose vertices, considered as roots, form known expanded polynomials.
The product of these polynomials can be simple enough to yield the desired expanded form.

Computing a Well-Connected Midsurface
Authors: Yogesh H. Kulkarni, Anil D. Sahasrabudhe, Muknd S. Kale
Computer-aided Design (CAD) models of thin-walled parts such as sheet metal or plastics are often reduced dimensionally to their corresponding midsurfaces for quicker and fairly accurate results of Computer-aided Engineering (CAE) analysis. Generation of the midsurface is still a time-consuming and mostly manual task due to a lack of robust and automated techniques. Midsurface failures manifest in the form of gaps, overlaps, not-lying-halfway, etc., which can take hours or even days to correct. Most of the existing techniques work on the complex final shape of the model, forcing the usage of hard-coded heuristic rules developed on a case-by-case basis. The research presented here proposes to address these problems by leveraging feature-parameters, made available by modern feature-based CAD applications, and by effectively using them for sub-processes such as simplification, abstraction and decomposition. In the proposed system, at first, features which are not part of the gross shape are removed from the input sheet metal feature-based CAD model. Features of the gross-shape model are then transformed into their corresponding generic feature equivalents, each having a profile and a guide curve. The abstracted model is then decomposed into non-overlapping cellular bodies. The cells are classified into midsurface-patch generating cells, called 'solid cells', and patch-connecting cells, called 'interface cells'. In solid cells, midsurface patches are generated either by offset or by sweeping the midcurve generated from the owner-feature's profile. Interface cells join all the midsurface patches incident upon them. The output midsurface is then validated for correctness. At the end, real-life parts are used to demonstrate the efficacy of the approach.
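Jones's regular-polygon idea above has a classic special case that is easy to check numerically: the vertices of a regular n-gon on the unit circle are the n-th roots of unity, and the polynomial having exactly those roots expands to x^n - 1. Here is a small pure-Python sketch of that check; it is an illustration of the general idea, not code from the paper:

```python
import cmath

def poly_from_roots(roots):
    """Expand prod_k (x - r_k) into a coefficient list in ascending order
    (constant term first), by repeated multiplication p(x) * (x - r)."""
    c = [1 + 0j]                              # the constant polynomial 1
    for r in roots:
        shifted = [0j] + c                    # coefficients of x * p(x)
        scaled = [r * ci for ci in c] + [0j]  # coefficients of r * p(x), padded
        c = [a - b for a, b in zip(shifted, scaled)]
    return c

n = 6
# Vertices of a regular hexagon on the unit circle = the 6th roots of unity.
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
coeffs = poly_from_roots(roots)
# Expected expansion: x^6 - 1, i.e. ascending coefficients [-1, 0, 0, 0, 0, 0, 1]
# (up to floating-point round-off in the imaginary parts).
```

The same routine applied to the vertices of any regular n-gon centered at the origin produces the expanded form x^n - c for the appropriate constant c, which is the simplicity the abstract exploits.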
The Ricci Flow for Connections We define a natural Ricci flow for connections over a Riemannian manifold. A Special Geometry - and its Consequences Authors: Ulrich E. Bruchholz It is explained why the geometry of space-time, first found by Rainich, is generally valid. The equations of this geometry, the known Einstein-Maxwell equations, are discussed, and results are listed. We shall see how these tensor equations can be solved. As well, neutrosophics is more supported than dialectics. We shall find even more categories than described in neutrosophics. A Note on the Arbelos in Wasan Geometry, Satoh's Problem and a Circle Pattern Authors: Hiroshi Okumura We generalize a problem in Wasan geometry involving an arbelos, and construct a self-similar circle pattern. Dos Cuestiones de Geometría Authors: Edgar Valdebenito In this note we present two elementary questions of geometry. A Closed 3-Form in Spinorial Geometry Comments: 2 pages, written in English We define a 3-form over a spinorial manifold by means of the curvature tensor and the Clifford multiplication. Division by Zero Calculus in Trigonometric Functions Authors: Saburou Saitoh Comments: 19 Pages. In this paper, we will introduce the division by zero calculus in triangles and trigonometric functions as the first stage in order to see the elementary properties. Refutation of Euclidean Geometry Embedded in Hyperbolic Geometry for Equal Consistency Authors: Colin James III Comments: 2 Pages. © Copyright 2019 by Colin James III All rights reserved. Respond to author by email only: info@cec-services dot com. See updated abstract at ersatz-systems.com.
The following conjecture is refuted: "[An] n-dimensional Euclidean geometry can be embedded into (n+1)-dimensional hyperbolic non Euclidean geometry. Therefore hyperbolic non Euclidean geometry and Euclidean geometry are equally consistent, that is, either both are consistent or both are inconsistent." Hence, the conjecture is a non tautologous fragment of the universal logic VŁ4. Denial of Consistency for the Lobachevskii Non Euclidean Geometry We prove two parallel lines are tautologous in Euclidean geometry. We next prove that non Euclidean geometry of Lobachevskii is not tautologous and hence not consistent. What follows is that Riemann geometry is the same, and non Euclidean geometry is a segment of Euclidean geometry, not the other way around. Therefore non Euclidean geometries are a non tautologous fragment of the universal logic VŁ4. A New Tensor in Differential Geometry We propose a 3-form in differential geometry which depends only of a connection over the tangent fiber bundle. A Note of Differential Geometry Authors: Abdelmajid Ben Hadj Salem In this note, we give an application of the Method of the Repère Mobile to the Ellipsoid of Reference in Geodesy using a symplectic approach. Refutation of Riemannian Geometry as Generalization of Euclidean Geometry From the classical logic section on set theory, we evaluate definitions of the atom and primitive set. None is tautologous. From the quantum logic and topology section on set theory, we evaluate the disjoint union (as equivalent to the XOR operator) and variances in equivalents for the AND and OR operators. None is tautologous. This reiterates that set theory and quantum logic are not bivalent, and hence non-tautologous segments of the universal logic VŁ4. The assertion of Riemannian geometry as generalization of Euclidean geometry is not supported. Proceedings on Non Commutative Geometry. Authors: Johan Noldus Non commutative geometry is developed from the point of view of an extension of quantum logic. 
We provide an example of a non-abelian simplex as well as a non-abelian curved Riemannian space. Solution Proof of Bellman's Lost in the Forest Problem for Triangles From the area and dimensions of an outer triangle, the height point of an inner triangle implies the minimum distance to the outer triangle. This proves the solution of Bellman's Lost in the forest problem for triangles. By extension, it is the general solution proof for other figures. Cluster Packaging of Spheres Versus Linear Packaging of Spheres Authors: Helmut Söllinger Comments: 10 Pages. language: German The paper analyses the issue of optimised packaging of spheres of the same size. The question is whether a linear packaging of spheres in the shape of a sausage or a spatial cluster of spheres can minimise the volume enveloping the spheres. There is an assumption that for fewer than 56 spheres the linear packaging is denser and for 56 spheres the cluster is denser, but the question remains what a cluster of 56 spheres might look like. The paper shows two possible ways to build such a cluster of 56 spheres. The author finds clusters of 59, 62, 65, 66, 68, 69, 71, 72, 74, 75, 76, 77, 78, 79 and 80 spheres - using the same method - which are denser than a linear packaging of the same number, and arrives at the conjecture that all convex clusters of spheres of sufficient size are denser than linear ones. On the Equivalence of Closed Figures and Infinitely Extended Lines and the Conclusions Drawn from it Authors: Madhur Sorout The equivalence of closed figures and infinitely extended lines may lead us to understand the physical reality of infinities. This paper doesn't include what infinities mean in the physical world, but is mainly focused on the equivalence of closed figures and infinitely extended lines. Using this principle, some major conclusions can be drawn.
The equivalence of closed figures and infinitely extended lines is mainly based on the idea that closed figures and infinitely extended lines are equivalent. One of the most significant conclusions drawn from this equivalency is that if any object moves along a straight infinitely extended line, it will return to the point where it started after some definite time. Three-Dimensional Quadrics in Conformal Geometric Algebras and Their Versor Transformations Authors: Eckhard Hitzer Comments: 15 Pages. Submitted to Topical Collection of Adv. in Appl. Clifford Algebras, for Proceedings of AGACSE 2018, 23 Feb. 2019. This work explains how three-dimensional quadrics can be defined by the outer products of conformal geometric algebra points in higher dimensions. These multivector expressions code all types of quadrics in arbitrary scale, location and orientation. Furthermore, a newly modified (compared to Breuils et al, 2018, https://doi.org/10.1007/s00006-018-0851-1.) approach now allows not only the use of the standard intersection operations, but also of versor operators (scaling, rotation, translation). The new algebraic form of the theory will be explained in detail. The G-connections We define the notion of G-connections over vector fiber bundles with action of a Lie group G. The Symplectic Laplacian We construct a symplectic Laplacian which is a differential operator of order 1 depending only on a connection and a symplectic form. A Generalized Clifford Algebra We propose a generalization of the Clifford algebra. We give an application to the Dirac operator. Blow-up of Feuerbach's Theorem Authors: Alexander Skutin In this short note we introduce the blow-up of Feuerbach's theorem. We define a closed 2-form for any spinorial manifold. We deduce characteristic classes. Archimedean Incircle of a Triangle We generalize several Archimedean circles, which are the incircles of special triangles.
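The sphere-packaging entry by Söllinger above compares linear "sausage" arrangements with clusters. For the linear case the enveloping convex volume has a simple closed form: a cylinder of length 2r(n-1) capped by two hemispheres. The sketch below computes only this linear bound (an illustration; the paper's cluster constructions are not reproduced here).

```python
import math

def sausage_volume(n, r=1.0):
    """Convex-hull volume of n collinear touching spheres of radius r:
    a cylinder of length 2*r*(n-1) plus two hemispherical end caps."""
    if n < 1:
        raise ValueError("need at least one sphere")
    cylinder = math.pi * r**2 * (2.0 * r * (n - 1))
    caps = (4.0 / 3.0) * math.pi * r**3  # two hemispheres = one full sphere
    return cylinder + caps
```

For n = 1 the hull degenerates to the sphere itself, 4πr³/3; each extra sphere adds a cylinder slab of volume 2πr³.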
Fractales : La Geometría del Caos This note presents a collection of fractals. Elementary Fractals : Part V Comments: 108 Pages. This note presents a collection of elementary fractals. Solution of a Sangaku ``Tangency" Problem via Geometric Algebra Because the shortage of worked-out examples at introductory levels is an obstacle to widespread adoption of Geometric Algebra (GA), we use GA to solve one of the beautiful \emph{sangaku} problems from 19th-Century Japan. Among the GA operations that prove useful is the rotation of vectors via the unit bivector i. Refutation of the Planar Euclidean R-Geometry of Tarski Comments: 3 Pages. © Copyright 2016-2018 by Colin James III All rights reserved. Updated abstract at ersatz-systems.com . Respond to the author by email at: info@ersatz-systems dot com. We evaluate the axioms of the title. The axiom of identity of betweenness and the axiom of Euclid are tautologous, but the others are not. The commonplace expression of the axiom of Euclid does not match its other two variations, which is troubling. This effectively refutes the planar R-geometry. Geometries of O Authors: Hannes Hutzelmeyer Geometries of O adhere to Ockham's principle of simplest possible ontology: the only individuals are points; there are no straight lines, circles, angles etc., just as was laid down by Tarski in the 1920s, when he put forward a set of axioms that only contain two relations, quaternary congruence and ternary betweenness. However, relations are not as intuitive as functions when constructions are concerned. Therefore the planar geometries of O contain only functions and no relations to start with. Essentially three quaternary functions occur: appension for line-joining of two pairs of points, linisection representing intersection of straight lines and circulation corresponding to intersection of circles. Functions are strictly defined by composition of given ones only.
Both Euclid and Lobachevsky planar geometries are developed using a precise notation for object-language and metalanguage, which allows for a very broad area of mathematical systems up to theory of types. Some astonishing results are obtained, among them: (A) Based on a special triangle construction, Euclid planar geometry can start with a less powerful ontological basis than Lobachevsky geometry. (B) Usual Lobachevsky planar geometry is not complete; there are nonstandard planar Lobachevsky geometries. One needs a further axiom; the 'smallest' system is produced by the proto-octomidial axiom. (C) Real numbers can be abandoned in connection with planar geometry. A very promising conjecture is put forward stating that the Euclidean Klein model of Lobachevsky planar geometry does not contain all points of the constructive Euclidean unit circle. Elementary Fractals: Part IV Elementary Fractals: Part III Elementary Fractals: Part II Elementary Fractals: Part I The Arbelos in Wasan Geometry, Problems of Izumiya and Nait\=o We generalize two sangaku problems involving an arbelos proposed by Izumiya and Nait\=o, and show the existence of six non-Archimedean congruent circles. Possible Cubic to Spherical Transformations Authors: Adham Ahmed Mohamed Ahmed Comments: 1 Page. This paper considers a hypothesis relating a cube and a sphere: the excess volume of a cube over a sphere inscribed in it, and the excess volume of a sphere over a cube inscribed in it. If you spin a cube around an axis passing through its midpoint, would the cylinder so formed exceed the sphere in volume by the same amount that the cylinder exceeds the cube? Derivative-Based Numerical Method for Penalty-Barrier Nonlinear Programming Authors: Martin Peter Neuenhofen We present an NLP solver for nonlinear optimization with quadratic penalty terms and logarithmic barrier terms. The method is suitable for large sparse problems.
Each iteration has a polynomial time-complexity. The method has global convergence and local quadratic convergence, with a convergence radius that depends little on our method but rather on the geometry of the problem. От ошибки Гильберта к исчислению сфер Authors: Франц Герман This note explains what the trace of the projective plane is and how it can be seen. Exact and Intuitive Geometry Explanation: Why Does a Half-angle-rotation in Spin or Quaternion Space Correspond to the Whole-angle-rotation in Normal 3D Space? Authors: Hongbing Zhang Comments: 18 Pages. Please indicate this source from Hongbing Zhang when citing the contents in works of science or popular science. Why does a half-angle-rotation in quaternion space or spin space correspond to a whole-angle-rotation in normal 3D space? The question is equivalent to why a half angle in the representation of SU(2) corresponds to a whole angle in the representation of SO(3). Usually we use computations in abstract mathematics to answer the question. In this paper I will instead give an exact and intuitive geometric explanation. К вопросу о представлении дельта-функции Дирака Comments: Pages. In this note we present a representation of the Dirac delta function which we will call natural. The existing ways of representing the Dirac delta function are, on the whole, artificial in character. Smoothing using Geodesic Averages Authors: Jan Hakenberg Geodesic averages have been used to generalize curve subdivision and Bézier curves to Riemannian manifolds and Lie groups. We show that geodesic averages are suitable to perform smoothing of sequences of data in nonlinear spaces. In applications that produce temporally uniformly sampled manifold data, the smoothing removes high-frequency components from the signal. As a consequence, discrete differences computed from the smoothed sequence are more regular. Our method is therefore a simpler alternative to the extended Kalman filter.
We apply the smoothing technique to noisy localization estimates of mobile robots. The Seiberg-Witten Equations for Vector Fields By similarity with the Seiberg-Witten equations, we propose a set of two equations, depending on a spinor and a vector field. An Introduction to Non-Commutative (Pseudo) Metric Geometry. We introduce the reader to the problematic aspects of formulating in concreto a suitable notion of geometry. Here, we take the canonical approach and give some examples. Теорема о поляре треугольника A theorem is formulated and proved that has not previously appeared in the literature on projective geometry. On the basis of the 'theorem on the polar of a trivertex', a whole class of construction problems opens up. The theorem may be useful to students of mathematics departments of pedagogical universities, as well as to secondary-school mathematics teachers for elective classes. Fractales Del Tipo Newton Asociados al Polinomio: P(z)=z^9+3z^6+3z^3-1,z Complejo. In this note we present some Newton-type fractals associated with the polynomial p(z)=z^9+3z^6+3z^3-1, z complex. A Generalization of the Levi-Civita Connection We define here a generalization of the well-known Levi-Civita connection. We choose an automorphism and define a connection with help of a (non-symmetric) bilinear form. A Generalization of the Clifford Algebra We propose here a generalization of the Clifford algebra by means of two endomorphisms. We deduce a generalized Lichnerowicz formula for the space of modified spinors. Definition of the Term "DIRECTION" Определение термина «НАПРАВЛЕНИЕ» Authors: Somsikov A.I. One of the initial or primary concepts, considered "elementary" (not expressible through other concepts), is examined. The structure of this concept is revealed. Algebraic and geometric consequences are found. (Ip-GP Version 1.0 15 Pages 14.08.2018) on the Intrinsic Paradox of the Geometric Point Definition (Solved Using the Included Middle Logic) as the Main Cause of Euclid's Postulate "inaccuracy", Allowing the Existence not Only of Non-Euclidean Geomet Authors: Andrei Lucian Dragoi This paper brings to attention the intrinsic paradox of the geometric point (GP) definition, a paradox solved in this paper by using Stéphane Lupasco's Included Middle Logic (IML) (which was stated by Basarab Nicolescu as one of the three pillars of transdisciplinarity [TD]) and its extended form: based on IML, a new "t-metamathematics" (tMM) (including a t-metageometry [tMG]) is proposed, which may explain the main cause of Euclid's parallel postulate (EPP) "inaccuracy", allowing the existence not only of non-Euclidean geometries (nEGs), but also the existence of new EPP variants. tMM has far-reaching implications, including help in redefining the basics of Einstein's General relativity theory (GRT), quantum field theory (QFT), superstring theories (SSTs) and M-theory (MT). KEYWORDS (including a list of main abbreviations): geometric point (GP); Stéphane Lupasco's Included Middle Logic (IML); Basarab Nicolescu, transdisciplinarity (TD); "t-metamathematics" (tMM); t-metageometry (tMG); Euclid's parallel postulate (EPP); non-Euclidean geometries; new EPP variants; Einstein's General relativity theory (GRT); quantum field theory (QFT); superstring theories (SSTs); M-theory (MT); The Balan-Killing Manifolds We define here the notion of Balan-Killing manifolds, which are solutions of differential equations over the metrics of spin manifolds. Curve Subdivision in SE(2) We demonstrate that curve subdivision in the special Euclidean group SE(2) allows the design of planar curves with favorable curvature. We state the non-linear formula to position a point along a geodesic in SE(2).
Curve subdivision in the Lie group consists of trigonometric functions. When projected to the plane, the refinement method reproduces circles and straight lines. The limit curves are designed by intuitive placement of control points in SE(2). Study of Transformations Authors: Yeray Cachón Santana This paper covers a first-approach study of the angles and moduli of vectors in Hilbert spaces considering a Riemannian metric where, instead of taking the usual scalar product on the Hilbert space, this is extended by the tensor of the geometry g. As far as I know, there is no study covering Hilbert spaces with a Riemannian metric. It will be shown how to get the angle and modulus on Hilbert spaces with a tensor metric, as well as the vector product, symmetry and rotations. A section on variational methods derives a system of differential equations for a Riemannian metric. Making Sense of Bivector Addition As a demonstration of the coherence of Geometric Algebra's (GA's) geometric and algebraic concepts of bivectors, we add three geometric bivectors according to the procedure described by Hestenes and Macdonald, then use bivector identities to determine, from the result, two bivectors whose outer product is equal to the initial sum. In this way, we show that the procedure that GA's inventors defined for adding geometric bivectors is precisely that which is needed to give results that coincide with those obtained by calculating outer products of vectors that are expressed in terms of a 3D basis. We explain that this accomplishment is no coincidence: it is a consequence of the attributes that GA's designers assigned (or didn't) to bivectors. The Dirac Operators for Lie Groups In the case of a manifold which is a Lie group, a Dirac operator can be defined acting over the vector fields of the Lie group instead of the spinors.
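Several entries above (the sangaku solution via GA, "Making Sense of Bivector Addition") rely on the fact that in the plane the even subalgebra of geometric algebra, spanned by 1 and the unit bivector i = e1e2, is isomorphic to the complex numbers, so rotating a vector by an angle θ can be modeled as multiplication by exp(iθ). A minimal sketch of that correspondence (illustrative only, not code from those papers):

```python
import cmath
import math

def rotate_plane(v, theta):
    """Rotate the 2D vector v = (x, y), modeled as x + y*i, by angle theta.
    Multiplication by exp(i*theta) plays the role of the GA rotor built
    from the unit bivector of the plane."""
    z = complex(v[0], v[1]) * cmath.exp(1j * theta)
    return (z.real, z.imag)
```

A quarter turn sends (1, 0) to (0, 1), matching the geometric action of the bivector i on the basis vector e1.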
On Surface Measures on Convex Bodies and Generalizations of Known Tangential Identities Authors: Johan Aspegren One theme of this paper is to extend known results from polygons and balls to general convex bodies in n dimensions. Another theme stems from approximating a convex surface with a polytope surface. Our result gives a sufficient and necessary condition for a natural approximation method to succeed (in principle) in the case of surfaces of convex bodies. Thus, Schwarz's paradox does not affect our method. This allows us to define certain surface measures on surfaces of convex bodies in a novel and simple way. Vortex Equation in Holomorphic Line Bundle Over Non-Compact Gauduchon Manifold Authors: Zhenghan Shen, Wen Wang, Pan Zhang In this paper, by the method of heat flow and the method of exhaustion, we prove an existence theorem for Hermitian-Yang-Mills-Higgs metrics on a holomorphic line bundle over a class of non-compact Gauduchon manifolds. Learning Geometric Algebra by Modeling Motions of the Earth and Shadows of Gnomons to Predict Solar Azimuths and Altitudes Because the shortage of worked-out examples at introductory levels is an obstacle to widespread adoption of Geometric Algebra (GA), we use GA to calculate Solar azimuths and altitudes as a function of time via the heliocentric model. We begin by representing the Earth's motions in GA terms. Our representation incorporates an estimate of the time at which the Earth would have reached perihelion in 2017 if not affected by the Moon's gravity. Using the geometry of the December 2016 solstice as a starting point, we then employ GA's capacities for handling rotations to determine the orientation of a gnomon at any given latitude and longitude during the period between the December solstices of 2016 and 2017.
Subsequently, we derive equations for two angles: that between the Sun's rays and the gnomon's shaft, and that between the gnomon's shadow and the direction ``north" as traced on the ground at the gnomon's location. To validate our equations, we convert those angles to Solar azimuths and altitudes for comparison with simulations made by the program Stellarium. As further validation, we analyze our equations algebraically to predict (for example) the precise timings and locations of sunrises, sunsets, and Solar zeniths on the solstices and equinoxes. We emphasize that the accuracy of the results is only to be expected, given the high accuracy of the heliocentric model itself, and that the relevance of this work is the efficiency with which that model can be implemented via GA for teaching at the introductory level. On that point, comments and debate are encouraged and welcome. A Note on a Problem in Misho Sampo Comments: 2 Pages. This paper will be submitted to Sangaku Journal of Mathematics. A problem involving an isosceles triangle with a square and three congruent circles is generalized. Euclid's Geometry is Just in Our Mind, Rather Than Describing the Real World Authors: Arturo Tozzi, James Peters The first definition (prior to the well-known five postulates) of Euclid describes the point as "that of which there is no part". Here we show how the Euclidean account of manifolds is untenable in our physical realm and that the concepts of points, lines, surfaces, volumes need to be revisited, in order to allow us to describe the real world. We show that the basic object in a physical context is a traversal of spacetime via tiny subregions of spatial regions, rather than the Euclidean point. We also elucidate the psychological issues that lead our mind to think of points and lines as really existing in our surrounding environment.
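Returning to the earlier entry by Hongbing Zhang on why a half-angle rotation in quaternion (spin) space corresponds to a whole-angle rotation in normal 3D space: the sandwich product q v q* applies the half angle twice. A short self-contained sketch (hypothetical helper names, not code from that paper) makes the SU(2)/SO(3) double cover concrete numerically:

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, theta):
    """Rotate 3D vector v about a unit axis by theta via q v q*.
    The quaternion q encodes the HALF angle theta/2, yet the vector
    turns by the whole angle theta."""
    h = theta / 2.0
    s = math.sin(h)
    q = (math.cos(h), axis[0]*s, axis[1]*s, axis[2]*s)
    qc = (q[0], -q[1], -q[2], -q[3])           # conjugate
    w = quat_mul(quat_mul(q, (0.0,) + tuple(v)), qc)
    return w[1:]
```

Rotating (1, 0, 0) about the z-axis by 90° uses a quaternion built from 45°, yet the vector lands on (0, 1, 0), a full quarter turn.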
The Radian's Coincidence Conjecture Authors: Ryan Haddad This conjecture may be a tool in defining the indefinite tangent of 90 degrees, and is a (new) mathematical coincidence that is indeed strange: why would the tangent of angles near 90 degrees be equal to the degree measure of one radian multiplied by powers of 10? In fact, if there is no geometrical explanation in current mathematics, it may reside in metamathematics. Toroidal Approach to the Doubling of the Cube Authors: Gerasimos T. Soldatos Comments: Published in: FORUM GEOMETRICORUM, VOL. 18, PAGES 93-97 A doubling of the cube is attempted as a problem equivalent to the doubling of a horn torus. Both doublings are attained through the circle of Apollonius. Proof of Playfair's Axiom Hits a Roadblock Authors: Prashanth R. Rao Playfair's axiom is considered an equivalent of Euclid's fifth postulate, or parallel postulate, in Euclidean planar geometry. It states that in a given plane, with a line in the plane and a point outside the line that is also in the same plane, one and only one line passes through that point that is also parallel to the given line. Previous proofs of Euclid's postulate or Playfair's axiom have unintentionally assumed the parallel postulate to prove it. Also, these axioms have different results in hyperbolic and spherical geometries. We offer proof of Playfair's axiom for a subset of cases in the context of plane Euclidean geometry and describe another subset of cases that cannot be proven by the same approach. The Flow of Dirac-Ricci Comments: 2 pages, written in French Following the definition of the flow of Ricci, we construct a flow of Hermitian metrics for the spinor fiber bundle. The Flow of Ricci over an Hermitian Fiber Bundle The flow of Ricci is defined for the Hermitian metric of a fiber bundle. A Note on a Problem Involving a Square in a Curvilinear Triangle Comments: 3 Pages. This is a paper considering a problem in Wasan geometry.
A problem involving a square in the curvilinear triangle made by two touching congruent circles and their common tangent is generalized. Extension of Proposition 23 from Euclid's Elements Proposition 23 states that two parallel lines in a plane never intersect. We use this definition with the first and second postulates of Euclid to prove that two distinct lines through a single point cannot be parallel. A Relationship Between a Cevian, Two Perpendicular Bisectors and a Median in an Isosceles Triangle. Comments: 1 Page. Revista Escolar de la Olimpiada Iberoamericana de Matemática. Volume 18. Spanish. A very simple solution to a geometric problem (proposed by Alex Sierra Cardenas, Medellin, Colombia) that involves a cevian, two perpendicular bisectors and a median in an isosceles triangle. Geometry Beyond Algebra. The Theorem of Overlapped Polynomials (TOP) and Its Application to Sawa Masayoshi's Sangaku Problem. The Adventure of Solving a Mathematical Challenge Stated in 1821. Comments: 47 Pages. https://arxiv.org/abs/1110.1299 This work presents for the first time a solution to Sawa Masayoshi's problem, unsolved since 1821, giving an explicit and algebraically exact solution for the symmetric case (particular case b = c, i.e., for ABC an isosceles right-angled triangle), see (1.60) and (1.61). Although the isosceles-triangle restriction is not necessary, in view of the complexity of the explicit algebraic solution for the symmetric case one can guess at the impossibility of achieving an explicit relationship for the asymmetric case (the more general case: ABC a right-angled scalene triangle). For this case a proof of existence and uniqueness of the solution is given, along with a proof of the impossibility of obtaining such a relationship, even implicitly, if the sextic equation (2.54) is not solvable. Nevertheless, in (2.56) - (2.58) the way to solve the asymmetric case is shown, under the condition that (2.54) be solvable.
Furthermore, it is proved that with a slight modification in the final set of variables (F), it is still possible to establish a relation between them, see (2.59) and (2.61), which provides a bridge that connects the primitive relationship by means of numerical methods, for every given right-angled triangle ABC. And just as the attempt to solve Fermat's conjecture (or Fermat's last theorem), culminated more than three centuries later by Andrew Wiles, led to the development of powerful theories of more general scope, the attempt to solve Masayoshi's problem has led to the development of the Theory of Overlapping Polynomials (TOP), whose application to this problem reveals a great potential that might be extrapolated to other frameworks. Wasan Geometry: 3-Isoincircles Problem. Trigonometric Analysis of a Hard Sangaku Challenge. Sacred Mathematics: Japanese Temple Geometry. Fukagawa Hidetoshi - Tony Rothman. Still Harder Temple Geometry Problems: Chapter 6 - Problem 3. La Représentation de Bonne Comments: 13 Pages. In French. This paper gives the elements of definition of the Bonne map projection. It was used for the ancient cartography at 1/50000 scale in Tunisia and Algeria. Calculating the Angle Between Projections of Vectors Via Geometric (Clifford) Algebra We express a problem from visual astronomy in terms of Geometric (Clifford) Algebra, then solve the problem by deriving expressions for the sine and cosine of the angle between projections of two vectors upon a plane. Geometric Algebra enables us to do so without deriving expressions for the projections themselves. The Flow of HyperKaehler-Ricci Studying the flow of Kaehler-Ricci, a flow is defined for a manifold which is HyperKaehler. The flow of Ricci-Schrödinger The flow of Ricci-Schrödinger is defined from the flow of Ricci, just as the Schrödinger equation is a twist of the heat equation.
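The "Radian's Coincidence Conjecture" entry above observes that tangents of angles just below 90° look like the degree measure of one radian (about 57.2958°) times powers of ten. This follows from tan(90° − ε°) = cot(ε°) ≈ (180/π)/ε for small ε, so shrinking ε by a factor of 10 multiplies the tangent by 10. A quick numeric check (an illustration, not the author's argument):

```python
import math

DEG_PER_RAD = 180.0 / math.pi  # one radian in degrees, about 57.29578

for k in (1, 2, 3):
    eps = 10.0 ** (-k)  # degrees below 90
    t = math.tan(math.radians(90.0 - eps))
    # tan(90° - eps°) = cot(eps°) ≈ DEG_PER_RAD / eps = 57.2958 × 10^k
    assert abs(t / (DEG_PER_RAD * 10.0 ** k) - 1.0) < 1e-5
```

So the "coincidence" is just the degrees-to-radians conversion factor surfacing through the small-angle approximation cot(x) ≈ 1/x.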
Remarks on Liouville-Type Theorems on Complete Noncompact Finsler Manifolds Authors: Songting Yin, Pan Zhang In this paper, we give a gradient estimate of the positive solution to the equation $$\Delta u=-\lambda^2u, \ \ \lambda\geq 0$$ on a complete non-compact Finsler manifold. Then we obtain the corresponding Liouville-type theorem and Harnack inequality for the solution. Moreover, on a complete non-compact Finsler manifold we also prove a Liouville-type theorem for a $C^2$-nonnegative function $f$ satisfying $$\Delta f\geq cf^d, c>0, d>1, $$ which improves a result obtained by Yin and He. An Upper Bound for Lebesgue's Universal Covering Problem Authors: Philip Gibbs The universal covering problem as posed by Henri Lebesgue in 1914 seeks to find the convex planar shape of smallest area that contains a subset congruent to any point set of unit diameter in the Euclidean plane. Methods used previously to construct such a cover can be refined and extended to provide an improved upper bound for the optimal area. An upper bound of 0.8440935944 is found. A Remark on the Localization Formulas About Two Killing Vector Fields Authors: Xu Chen In this article, we will discuss a localization formula of equivariant cohomology about two Killing vector fields on the set of zero points ${\rm{Zero}}(X_{M}-\sqrt{-1}Y_{M})=\{x\in M \mid |Y_{M}(x)|=|X_{M}(x)|=0 \}.$ As an application, we use it to get formulas about characteristic numbers and to get a Duistermaat-Heckman type formula on a symplectic manifold. A Poincaré-Hopf Type Formula for a Pair of Vector Fields We extend the result about the Poincaré-Hopf type formula for the difference of the Chern character numbers (cf. [3]) to non-isolated singularities, and establish a Poincaré-Hopf type formula for a pair of vector fields where the function $h^{T_{\mathbb{C}}M}(\cdot,\cdot)$ has non-isolated zero points over a closed, oriented smooth manifold of dimension $2n$.
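The earlier entry "Calculating the Angle Between Projections of Vectors Via Geometric (Clifford) Algebra" computes the angle between the projections of two vectors upon a plane. A plain vector-algebra sketch of the same quantity (not the paper's GA derivation; unlike the paper, it forms the projections explicitly):

```python
import math

def angle_between_projections(u, v, n):
    """Angle between the projections of u and v onto the plane with normal n."""
    nn = math.sqrt(sum(c * c for c in n))
    n = [c / nn for c in n]                      # normalize the plane normal
    def proj(w):                                 # remove the component along n
        d = sum(a * b for a, b in zip(w, n))
        return [a - d * b for a, b in zip(w, n)]
    pu, pv = proj(u), proj(v)
    dot = sum(a * b for a, b in zip(pu, pv))
    cx = [pu[1]*pv[2] - pu[2]*pv[1],
          pu[2]*pv[0] - pu[0]*pv[2],
          pu[0]*pv[1] - pu[1]*pv[0]]
    cross = math.sqrt(sum(c * c for c in cx))
    return math.atan2(cross, dot)                # stable angle in [0, pi]
```

Using atan2 of the cross-product magnitude and the dot product avoids the precision loss of acos near 0 and π.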
Sennimalai Kalimuthu Publications Authors: Sennimalai Kalimuthu Comments: 03 Pages. Interested people may contact me at any time. The 5th Euclidean postulate is a 2300-year-old mathematical impossibility. I have worked on this problem for nearly 35 years and found a number of consistent solutions. My findings have appeared in international peer-reviewed research journals. Generation of power freely from space, space Bombs, Lion's Tonic and Lemurian Yoga are my ambitious scientific projects. Interested researchers and people may contact me at +91 8508991577. My email is [email protected] and [email protected] Poliedros Fórmulas Indemostradas Authors: Carlos Alejandro Chiappini Leonhard Euler proved that in a convex regular polyhedron there are three numbers that obey a law expressed in an equation known as Euler's formula: the number of faces, the number of vertices and the number of edges. This document presents some further formulas, obtained by trial and error from a table containing the data of the 5 convex regular polyhedra. These unproven formulas have plausible features. Finding a way to prove the validity or invalidity of these formulas could be an interesting task for lovers of topology. Projection of a Vector upon a Plane from an Arbitrary Angle, via Geometric (Clifford) Algebra We show how to calculate the projection of a vector, from an arbitrary direction, upon a given plane whose orientation is characterized by its normal vector, and by a bivector to which the plane is parallel. The resulting solutions are tested by means of an interactive GeoGebra construction. Formulas and Spreadsheets for Simple, Composite, and Complex Rotations of Vectors and Bivectors in Geometric (Clifford) Algebra Comments: 29 Pages.
Formulas and Spreadsheets for Simple, Composite, and Complex Rotations of Vectors and Bivectors in Geometric (Clifford) Algebra We show how to express the representations of single, composite, and ``rotated" rotations in GA terms that allow rotations to be calculated conveniently via spreadsheets. Worked examples include rotation of a single vector by a bivector angle; rotation of a vector about an axis; composite rotation of a vector; rotation of a bivector; and the ``rotation of a rotation". Spreadsheets for doing the calculations are made available via live links. Replacements of recent Submissions [102] viXra:1905.0026 [pdf] replaced on 2019-05-11 06:44:00 Comments: 11 Pages. accepted for M. Gavrilova et al (eds.), Proceedings of Workshop ENGAGE 2019 at CGI 2019 with Springer LNCS, April 2019, 1 table, corrections: 03+11 May 2019. Comments: 11 Pages. accepted for M. Gavrilova et al (eds.), Proceedings of Workshop ENGAGE 2019 at CGI 2019 with Springer LNCS, April 2019, 1 table, correction: 03 May 2019. Direct Sum Decomposition of a Linear Vector Space Authors: Anamitra Palit The direct sum decomposition of a vector space has been explored to bring out a conflicting feature in the theory. We decompose a vector space using two subspaces. Keeping one subspace fixed we endeavor to replace the other by one which is not equal to the replaced subspace. Proceeding from such an effort we bring out the conflict. From certain considerations it is not possible to work out the replacement with an unequal subspace. From alternative considerations an unequal replacement is possible. [99] viXra:1903.0317 [pdf] replaced on 2019-03-18 18:09:25 This paper is mainly focused on the equivalence of closed figures and infinitely extended lines. Using this principle, some major conclusions can be drawn. The equivalence of closed figures and infinitely extended lines is mainly based on the idea that closed figures and infinitely extended lines are equivalent. 
One of the most significant conclusions drawn from this equivalency is that if any object moves along a straight infinitely extended line, it will return back to the point, where it started to move, after some definite time. This principle of equivalence of closed figures and infinitely extended lines may lead us to understand the physical reality of infinities. Comments: 16 Pages. published in Adv. of App. Cliff. Algs., 29:46, pp. 1-16, 2019. DOI: 10.1007/s00006-019-0964-1, 1 table. This work explains how three dimensional quadrics can be defined by the outer products of conformal geometric algebra points in higher dimensions. These multivector expressions code all types of quadrics in arbitrary scale, location and orientation. Furthermore, a newly modified (compared to Breuils et al, 2018, https://doi.org/10.1007/s00006-018-0851-1.) approach now allows not only the use of the standard intersection operations, but also of versor operators (scaling, rotation, translation). The new algebraic form of the theory will be explained in detail. Comments: Submitted to Topical Collection of Adv. in Appl. Clifford Algebras, for Proceedings of AGACSE 2018, 23 Feb. 2019, 15 pages. 4 errors corrected: 25 Feb. 2019. Proposition 4.1 corrected: 02 Mar. 2019. Comments: Submitted to Topical Collection of Adv. in Appl. Clifford Algebras, for Proceedings of AGACSE 2018, 23 Feb. 2019, 15 pages. 4 errors corrected: 25 Feb. 2019. This work explains how three dimensional quadrics can be defined by the outer products of conformal geometric algebra points in higher dimensions. These multivector expressions code all types of quadrics in arbitrary scale, location and orientation. Furthermore a newly modified (compared to Breuils et al, 2018, https://doi.org/10.1007/s00006-018-0851-1.) approach now allows not only the use of the standard intersection operations, but also of versor operators (scaling, rotation, translation). The new algebraic form of the theory will be explained in detail. 
The Curvature and Dimension of Non-Differentiable Surfaces Authors: Shawn Halayka The curvature of a surface can lead to fractional dimension. In this paper, the properties of the 2-sphere surface of a 3D ball and the 2.x-surface of a 3D fractal set are considered. Tessellation is used to approximate each surface, primarily because the 2.x-surface of a 3D fractal set is otherwise non-differentiable. Geometries of O adhere to Ockham's principle of simplest possible ontology: the only individuals are points, there are no straight lines, circles, angles etc. , just as it was was laid down by Tarski in the 1920s, when he put forward a set of axioms that only contain two relations, quaternary congruence and ternary betweenness. However, relations are not as intuitive as functions when constructions are concerned. Therefore the planar geometries of O contain only functions and no relations to start with. Essentially three quaternary functions occur: appension for line-joining of two pairs of points, linisection representing intersection of straight lines and circulation corresponding to intersection of circles. Functions are strictly defined by composition of given ones only. Both, Euclid and Lobachevsky planar geometries are developed using a precise notation for object-language and metalanguage, that allows for a very broad area of mathematical systems up to theory of types. Some astonishing results are obtained, among them: (A) Based on a special triangle construction Euclid planar geometry can start with a less powerful ontological basis than Lobachevsky geometry. (B) Usual Lobachevsky planar geometry is not complete, there are nonstandard planar Lobachevsky geometries. One needs a further axiom, the 'smallest' system is produced by the proto-octomidial- axiom. (C) Real numbers can be abandoned in connection with planar geometry. 
A very promising conjecture is put forward stating that the Euclidean Klein-model of Lobachevsky planar geometry does not contain all points of the constructive Euclidean unit-circle. The Seiberg-Witten Equations for Vector Fields By similarity with the Seiberg-Witten equations, we propose two differential equations, depending of a spinor and a vector field, instead of a connection. Good moduli spaces are espected as a consequence of commutativity. We define here the notion of Balan-Killing manifolds which are spin manifolds whose metrics verify a certain differential equation. We take our inspiration from the notion of Killing spinors. One theme of this paper is to extend known results from polygons and balls to the general convex bodies in n− dimensions. An another theme stems from approximating a convex surface with a polytope surface. Our result gives a sufficient and necessary condition for an natural approximation method to succeed (in principle) in the case of surfaces of convex bodies. Thus, Schwartz`s paradox does not affect our method. This allows us to define certain surface measures on surfaces of convex bodies in a novel and simple way. The Quaternionic Seiberg-Witten Equations We define here the Seiberg-Witten equations in the quaternionic case. We formulate some algebra of the Hamilton numbers and study geometric applications of the quaternions. Following the definition of the flow of Ricci and with help of the Dirac operator, we construct a flow of hermitian metrics for the spinors fiber bundle. The flow of Ricci is defined for the hermitian metrics of a complex fiber bundle. Comments: 3 Pages. The paper is considering a problem in Wasan geometry. The flow of Ricci-Schrödinger is defined from the Ricci flow, like the Schrödinger equation with respect to the heat equation. 
Estimation of the Earth's "Unperturbed" Perihelion from Times of Solstices and Equinoxes Published times of the Earth's perihelions do not refer to the perihelions of the orbit that the Earth would follow if unaffected by other bodies such as the Moon. To estimate the timing of that ``unperturbed" perihelion, we fit an unperturbed Kepler orbit to the timings of the year 2017's equinoxes and solstices. We find that the unperturbed 2017 perihelion, defined in that way, would occur 12.93 days after the December 2016 solstice. Using that result, calculated times of the year 2017's solstices and equinoxes differ from published values by less than five minutes. That degree of accuracy is sufficient for the intended use of the result. Contact - Disclaimer - Privacy - Funding
CommonCrawl
\begin{document} \title{An iterative method for computing the stable subspace of a matrix pencil and its application} \begin{abstract} This work proposes an iterative method for computing a stable subspace of a regular matrix pencil. The approach defines a sequence of matrix pencils via particular left null spaces. We show that this iteration preserves a discrete-type flow depending only on the initial matrix pencil. Via this recursion relationship, we propose an accelerated iterative method to compute the stable subspace and use it to provide a theoretical result for computing the principal square root of a given matrix, both nonsingular and singular. We show that this method not only finds the matrix square root, but also yields an iterative approach that converges to the square root with any desired order. \end{abstract} \textbf{Keywords:} Stable subspace,\,Sherman Morrison Woodbury formula,\,Matrix square root,\,Accelerated iterative method,\,Q-superlinear convergence \pagestyle{myheadings} \thispagestyle{plain} \section{Introduction} Throughout this paper we shall use the following notation to facilitate our discussions. $\lambda(A)$ and $\lambda(A,B)$ denote the sets of eigenvalues of the matrix $A$ and the matrix pencil $A-\lambda B$, respectively, and $\rho(A)$ denotes the spectral radius of the square matrix $A$. $\mathbb{C}^+$ and $\mathbb{C}^-$ represent the open right and left half complex planes. Given a regular $n\times n$ matrix pencil $A- \lambda B$ (i.e., $\det(A-\lambda B)$ is not identically zero for all $\lambda$) and an integer $m\leq n$, we want to find in this work a full rank matrix $U\in\mathbb{C}^{n\times m}$ such that \begin{equation}\label{eq:gen} A U = B U \Lambda, \end{equation} {where $\Lambda\in\mathbb{C}^{m\times m}$ and $\rho(\Lambda) < 1$.} Note that the column space $\mathcal{U}=\mbox{Span}\{U\}$ is called the stable deflating subspace of $A-\lambda B$.
When $B$ is the identity matrix, $\mathcal{U}$ is called the stable invariant subspace. Over the past few decades, considerable attention has been paid to studying the properties of invariant and deflating subspaces~\cite{Gohberg06}. In applications, one can obtain the solutions of algebraic Riccati-type matrix equations by computing their corresponding stable deflating subspaces or stable invariant subspaces, e.g., \cite{Lancaster95,Bini2012}. In particular, this problem is related to the so-called \emph{generalized spectral divide and conquer} (SDC) problem~\cite{Bai1997,Bini2012}, which is to find a pair of left and right deflating subspaces $\mathcal{L}$ and $\mathcal{R}$ such that \begin{equation*} A\mathcal{R} \subset \mathcal{L},\quad B\mathcal{R} \subset \mathcal{L}, \end{equation*} corresponding to eigenvalues of the pair $A-\lambda B$ in a specified region $\mathcal{D}\subset \mathbb{C}$. That is, find two nonsingular partitioned matrices $U_L = \begin{bmatrix} U_{L_1},U_{L_2}\end{bmatrix}$ and $U_R= \begin{bmatrix} U_{R_1},U_{R_2}\end{bmatrix}$ with $\mathcal{L} = \mbox{span}(U_{L_1})$ and $\mathcal{R} = \mbox{span}(U_{R_1})$ so that \begin{equation*} A U_R = U_L\begin{bmatrix} A_{11} & A_{12} \\0 & A_{22}\end{bmatrix}, \quad B U_R = U_L\begin{bmatrix} B_{11} & B_{12} \\0 & B_{22}\end{bmatrix}, \end{equation*} and the eigenvalues of $A_{11} -\lambda B_{11}$ are the eigenvalues of $A-\lambda B$ in the region $\mathcal{D}$. We notice that if $A_{11} - \lambda B_{11}$ has no infinite eigenvalues, then $B_{11}$ is invertible and \begin{equation*} AU_{R_1} = BU_{R_1} (B_{11}^{-1} A_{11}); \end{equation*} if $A_{11} - \lambda B_{11}$ has no zero eigenvalues, then $A_{11}$ is invertible and \begin{equation*} AU_{R_1} ( A_{11}^{-1} B_{11})= BU_{R_1}. \end{equation*} Note that the region $\mathcal{D}$ in the SDC problem is generally assumed to be the interior (or exterior) of the unit disk.
Otherwise, a M\"{o}bius transformation $(\alpha A + \beta B) (\gamma A + \delta B)^{-1}$ can be applied to map a rather general region onto the unit disk~\cite{Bai1997}. One direct method to solve~\eqref{eq:gen} (which does not require $\rho(\Lambda)<1$) is to apply the so-called QZ algorithm. That is, through the QZ algorithm, the matrix $A$ is reduced to triangular or upper quasi-triangular form and $B$ to upper triangular form. One is then able to compute eigenvectors through the reduced form (see~\cite{Demmelbook00, Golub2013,Kressnerbook05} for the details). Unlike the direct method, we propose in this work an iterative method, the AB-algorithm, to solve~\eqref{eq:gen}. This method defines a sequence of matrix pencils $\{A_k-\lambda B_k\}$ with $(A_1, B_1) = (A,B)$ and $(A_k, B_k) = ( \mathcal{M}_{k-1} A_{k-1}, \mathcal{N}_{k-1} B_{1})$ for any integer $k>1$. Here, $( \mathcal{N}_{k-1}, \mathcal{M}_{k-1})$ is a solution belonging to the left null space of $ \left[\begin{array}{c}A_1 \\-B_k\end{array}\right] $, that is, \begin{equation}\label{eq:null} \mathcal{N}_{k} A_{1} = \mathcal{M}_{k} B_{k}. \end{equation} { Due to the specific structure embedded in the matrix pencil $A_1-\lambda B_k$, some $( \mathcal{N}_{k}, \mathcal{M}_{k})$ can be designed such that the pencils $\{A_k-\lambda B_k\}$ share the same structure. We refer the reader to \cite{WWL2006,Bini2012} and to the references therein. In these works, iterative algorithms for computing invariant subspaces of the structured matrix pencil $A_1-\lambda B_k$ are provided with quadratic convergence for solving algebraic Riccati and related matrix equations. } Observe that $A_1 U = B_1 U \Lambda^{1}$. Suppose this process can be continually iterated to obtain the new pencil $A_k-\lambda B_k$ such that $A_k U = B_k U \Lambda^k$.
It must be that \begin{eqnarray}\label{eq:eig1} A_{k+1} U &=& \mathcal{M}_kA_k U= \mathcal{M}_k B_k U \Lambda^k\nonumber \\ &=& \mathcal{N}_k A_1 U \Lambda^k = \mathcal{N}_k B_1 U \Lambda^{k+1} = B_{k+1} U \Lambda^{k+1}. \end{eqnarray} This implies that if $\rho (\Lambda)< 1$, and if the sequence $\{B_k\}$ is uniformly bounded, then $\lim\limits_{k\rightarrow \infty} A_k U = 0$. Once the sequence $\{A_k\}$ also converges, say $A_\infty := \lim\limits_{k\rightarrow \infty} A_k$, we are able to solve for $U$ by computing the right null space of $A_\infty$. {We notice that this AB-algorithm theoretically not only preserves a discrete-type flow property (see Theorem~\ref{thm:trans1}) but can also be accelerated to a rate of convergence of any desired order. To demonstrate the capability of this algorithm, we use it to provide a theoretical result for computing the matrix square root as an example. It is known that a matrix square root need not be unique (even up to sign) and need not even exist; for example, \begin{equation*} \left[\begin{array}{cc} 1 & 0 \\0 & 1\end{array}\right] = \left[\begin{array}{cc} \cos(\theta) & \sin(\theta) \\\sin(\theta) & -\cos(\theta)\end{array}\right]^2 \end{equation*} for any $\theta\in\mathbb{R}$, while $\left[\begin{array}{cc} 0 & 1 \\0 & 0\end{array}\right] $ does not have a square root. Indeed, let $S \in\mathbb{C}^{n\times n}$ be a matrix having no nonpositive real eigenvalues. Then the quadratic matrix equation \begin{equation}\label{eq:quad} X^2 - S =0 \end{equation} has a unique solution $X$ such that $\lambda(X) \subset \mathbb{C}^+$. This is called the principal square root of $S$, denoted by $\sqrt{S}$~\cite{Deprima1974, Higham1987}. Numerical methods for computing the matrix principal square root, including the (modified) Schur method, Newton's method, and its variants, have been widely discussed in the numerical linear algebra community.
See~\cite{Laasonen1958, Deprima1974, Higham1986,Higham1987,Higham1997,Meini2004,Highambook08,Mizuno2016} and the references therein. Unlike conventional methods, the AB-algorithm can be modified to obtain the square root efficiently; that is, the rate of convergence of this algorithm can be of any desired order $r$. Specifically, our method is equivalent to Newton's method when $r=2$. More precisely, while sharing a convergence property similar to that of Newton's method, our algorithm can be shown, under mild adjustments, to converge q-superlinearly with any order~\cite{Kelly1995}. This work is organized as follows. In section 2 we provide properties of the AB-Algorithm. In section 3 we modify the AB-Algorithm so that its convergence can be of any order. In section 4 we report a numerical application to the matrix square root, and the concluding remarks are given in section 5. \section{The AB-Algorithm and Its Corresponding Properties} Recall that the idea of the AB-Algorithm depends heavily on the determination of the left null space of $ \left[\begin{array}{c}A_1 \\-B_k\end{array}\right]$. Observe that $ B_1(A_1+B_1)^{-1}A_1 - A_1(A_1+B_1)^{-1}B_1 = 0 $, if $-1\not\in\lambda(A_1,B_1)$ and $(A_1, B_1)$ are two $n\times n$ matrices. That is, $(N_1, M_1):=(B_1(A_1+B_1)^{-1},A_1(A_1+B_1)^{-1})$ belongs to the left null space of $ \left[\begin{array}{c}A_1 \\-B_1\end{array}\right]$. Using the same procedure, we would like to generate the matrix sequences $\{A_k\}$ and $\{B_k\}$ by defining \begin{subequations}\label{signal2} \begin{align} A_k&= A_1(A_1+B_{k-1})^{-1}A_{k-1}, \\ B_k&=B_{k-1}(A_1+B_{k-1})^{-1}B_1, \end{align} \end{subequations} once the process can be iterated.
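As an illustrative aside, the recursion~\eqref{signal2} is easy to prototype. The NumPy sketch below is not part of the paper's development: the $2\times 2$ test pencil (with $B_1=I$, one stable eigenvalue $0.5$ and one unstable eigenvalue $2$) is an arbitrary assumption for demonstration, and linear solves replace the explicit inverses.

```python
import numpy as np

def ab_step(A1, B1, Ak, Bk):
    """One pass of the recursion: A_k = A_1 (A_1 + B_{k-1})^{-1} A_{k-1},
    B_k = B_{k-1} (A_1 + B_{k-1})^{-1} B_1, using solves instead of inverses."""
    D = A1 + Bk
    return A1 @ np.linalg.solve(D, Ak), Bk @ np.linalg.solve(D, B1)

# Hypothetical test pencil: stable eigenvalue 0.5 (eigenvector e_1), unstable 2.
A1 = np.array([[0.5, 1.0],
               [0.0, 2.0]])
B1 = np.eye(2)

Ak, Bk = A1, B1
for _ in range(60):
    Ak, Bk = ab_step(A1, B1, Ak, Bk)

# A_k U -> 0 on the stable subspace U = span{e_1}, so Null(A_infty) recovers U.
u = np.array([1.0, 0.0])
print(np.linalg.norm(Ak @ u))  # tiny
```

On this example the null space of $A_k$ converges to the stable eigenvector $e_1$, in line with the discussion following~\eqref{eq:eig1}.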
It should be noted that if $ \mathcal{M}_{k-1} = B_{k-1}(A_1+B_{k-1})^{-1}$ and $ \mathcal{N}_{k-1} = A_1(A_1+B_{k-1})^{-1}$ for any integer $k>1$, it can be seen that $B_{k-1}(A_1+B_{k-1})^{-1} A_1 = A_1 (A_1+B_{k-1})^{-1} B_{k-1}$, which satisfies the assumption~\eqref{eq:null}. For simplicity, we let $\Delta_{i,j}:=(A_{i}+B_{j})^{-1}$ so that the sequences $\{A_k\}$ and $\{B_k\}$ in~\eqref{signal2} can be rewritten as \begin{subequations}\label{eq1} \begin{align} A_k&=A_1\Delta_{1,k-1}A_{k-1}=A_{k-1}-B_{k-1}\Delta_{1,k-1}A_{k-1},\label{eq1a}\\ B_k&=B_{k-1}\Delta_{1,k-1}B_{1}=B_{1}-A_{1}\Delta_{1,k-1}B_{1}.\label{eq1b} \end{align} \end{subequations} Based on~\eqref{eq1}, we propose the following AB-algorithm for computing the stable {subspace} of the matrix pencil {$A_1-\lambda B_1$}: \begin{Algorithm}\label{abalg} {\emph{(AB-Algorithm)}} \begin{enumerate} \item Given a pencil $A_1-\lambda B_1$, initialize a tolerance $\tau>0$ and a positive integer $kmax$. \item For $k = 2,\ldots$, iterate {until {$\mbox{dist}(\mbox{Null}(A_{k-1}),\mbox{Null}(A_{k}))< \tau$} or {$k>kmax$}.} \begin{enumerate} \item $A_k = A_1\Delta_{1,k-1}A_{k-1}$, \item $B_k = B_{k-1}\Delta_{1,k-1}B_1$. \end{enumerate} \end{enumerate} { Here, ``Null$(\cdot)$'' denotes the null space of the given matrix and ``dist$(\cdot,\cdot)$'' denotes the distance between two subspaces~\cite{Golub2013}.} \end{Algorithm} Note that on the one hand, Algorithm~\ref{abalg} provides an alternative approach for finding the stable invariant subspace $U$ (i.e., $A_1 U = U\Lambda$ and $\rho(\Lambda) < 1$) of the matrix $A_1$ by constructing $A_\infty$ (once it exists) directly as follows: \begin{Remark} If no breakdown occurs in Algorithm~\ref{abalg} and $B_1=I_n$, for any integer $k>1$ we have \begin{subequations} \begin{align} A_k &=A_1^k( \sum_{j= 0}^{k-1} A_1^j )^{-1},\label{s1}\\ B_k &=(\sum_{j= 0}^{k-1} A_1^j )^{-1}.\label{s2} \end{align} \end{subequations} In other words, to obtain the stable subspace of the matrix
$A_1$, we only need to focus on the {iterations} generated by \eqref{s1}. \end{Remark} On the other hand, once the iteration is available, we are interested in characterizing the transformation of the eigenvalues of the matrix pencil $A_1-\lambda B_1$ after each iteration. First, we give an observation about the relationship between the eigenvalues of $A_k-\lambda B_k$ and the eigenvalues of $A_1-\lambda B_1$. Since the proof can be read off from~\eqref{eq:eig1}, we omit it here. \begin{Lemma}\label{lem:eig} Let $A_1-\lambda B_1$ be a regular matrix pencil, and let $\{A_k-\lambda B_k\}$ be the sequence of matrix pencils generated by Algorithm~\ref{abalg}, if no breakdown occurs. If $\lambda\in\lambda(A_1,B_1)$ with $\lambda\in\mathbb{C}\cup\{\infty\}$, then $\lambda^k\in\lambda(A_k,B_k)$. (Here, $\infty^k:=\infty$.) \end{Lemma} Subsequently, we have the following theorem, which characterizes the new eigenvalues induced by the AB-algorithm. { \begin{Theorem}\label{Lem:eig} Let $A_1-\lambda B_1$ be a regular matrix pencil, and let $\{A_k-\lambda B_k\}$ be the sequence of matrix pencils generated by Algorithm~\ref{abalg}, if no breakdown occurs. Let $\{{\lambda}_1^{(i,k)},\ldots,{\lambda}_n^{(i,k)}\}$ be the set of eigenvalues of the matrix pencils $A_{i}-\lambda B_{k}$ for any two positive integers $i$ and $k$. Then, for $1\leq j \leq n$, the set of eigenvalues has the following properties: \begin{enumerate} \item $ \lambda_j^{(1,k)}=\left\{ \begin{array}{rl} \sum\limits_{s=1}^{k}(\lambda_j^{(1,1)})^s,&\lambda_j^{(1,1)}\in\mathbb{C}, \\ \infty,&\lambda_j^{(1,1)}=\infty.\\ \end{array} \right. $ \item $ {\lambda}_j^{(i,1)}=\left\{ \begin{array}{rl} \dfrac{(\lambda_j^{(1,1)})^i}{\sum\limits_{s=0}^{i-1}(\lambda_j^{(1,1)})^s},&\lambda_j^{(1,1)}\in\mathbb{C}, \\ \infty,&\lambda_j^{(1,1)}=\infty.\\ \end{array} \right.
$ \item $ {\lambda}_j^{(i,k)}=\left\{ \begin{array}{rl} (\lambda_j^{(1,1)})^i\dfrac{\sum\limits_{s=0}^{k-1}(\lambda_j^{(1,1)})^s}{\sum\limits_{s=0}^{i-1}(\lambda_j^{(1,1)})^s},&\lambda_j^{(1,1)}\in\mathbb{C}, \\ \infty,&\lambda_j^{(1,1)}=\infty.\\ \end{array} \right. $ \end{enumerate} \end{Theorem} \begin{proof} Assume without loss of generality that $A_1$ and $B_1$ are upper triangular matrices. Otherwise, let $U$ and $V$ be two unitary matrices such that $U^H A_1 V$ and $U^H B_1 V$ both are upper triangular matrices. Upon using~\eqref{signal2}, it can be seen that $A_k$ and $B_k$ are also upper triangular, and \begin{eqnarray*} \lambda_j^{(1,k)}&=&\left\{ \begin{array}{rl} (1+\lambda_j^{(1,k-1)})\lambda_j^{(1,1)},&\lambda_j^{(1,1)}\in\mathbb{C}, \\ \infty,&\lambda_j^{(1,1)}=\infty,\\ \end{array} \right.\\ {\lambda}_j^{(i,1)} &=&\left\{ \begin{array}{rl} \dfrac{{\lambda}_j^{(i,i)}}{ 1+{\lambda}_j^{(1,i-1)}},&\lambda_j^{(1,1)}\in\mathbb{C}, \\ \infty,&\lambda_j^{(1,1)}=\infty.\\ \end{array} \right. \end{eqnarray*} Moreover, \begin{equation*} {\lambda}_j^{(i,k)}=\left\{ \begin{array}{rl} (1+{\lambda}_j^{(1,k-1)}){\lambda}_j^{(i,1)},&\lambda_j^{(1,1)}\in\mathbb{C}, \\ \infty,&\lambda_j^{(1,1)}=\infty,\\ \end{array} \right. \end{equation*} for $i, k\geq 2$. We remark that $\lambda_j^{(1,i-1)}\neq -1$ since $A_i-\lambda B_i$ is well-defined, and from Lemma~\ref{lem:eig}, we have $\lambda_j^{(i,i)}=(\lambda_j^{(1,1)})^i$, which completes the proof of the theorem. \end{proof} } We notice that Algorithm~\ref{abalg} is workable { if and only if} the sum of the matrices $A_1$ and $B_{k-1}$ is invertible for every integer $k>1$, that is, {$-1\not\in\lambda(A_1,B_{k-1})$} for every integer $k>1$. This condition can be completely characterized in terms of the $p$th roots of unity, excluding $1$ itself.
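The eigenvalue mappings above are easy to confirm numerically. The NumPy check below is an illustration only; the upper triangular test pencil with $B_1=I$ and eigenvalues $0.3$ and $0.6$ is an assumption chosen so that no root of unity is an eigenvalue and hence no breakdown occurs.

```python
import numpy as np

def ab_iterate(A1, B1, k):
    """Return (A_k, B_k) from the AB-recursion, starting at (A_1, B_1)."""
    Ak, Bk = A1, B1
    for _ in range(k - 1):
        D = A1 + Bk
        Ak, Bk = A1 @ np.linalg.solve(D, Ak), Bk @ np.linalg.solve(D, B1)
    return Ak, Bk

# Assumed test pencil (B_1 = I) with eigenvalues 0.3 and 0.6.
A1 = np.array([[0.3, 1.0],
               [0.0, 0.6]])
B1 = np.eye(2)
k = 5
Ak, Bk = ab_iterate(A1, B1, k)

# Lemma: the eigenvalues of the pencil A_k - mu B_k are lambda^k.
mu = np.sort(np.linalg.eigvals(np.linalg.solve(Bk, Ak)).real)

# Theorem, item 1: the eigenvalues of A_1 - mu B_k are sum_{s=1}^{k} lambda^s.
nu = np.sort(np.linalg.eigvals(np.linalg.solve(Bk, A1)).real)
```

Here `mu` should match $[0.3^5, 0.6^5]$ and `nu` should match $[\sum_{s=1}^5 0.3^s, \sum_{s=1}^5 0.6^s]$.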
\begin{Theorem}\label{thm:well-defined} Let {$A_1-\lambda B_1$} be a regular matrix pencil, and let \begin{equation*} S_k= \bigcup\limits_{2\leq p\leq k+1} \{ e^{\frac{2q\pi i}{p}} : {1\leq q\leq p-1} \}. \end{equation*} If \begin{equation*} S_k \cap \lambda(A_1,B_{1}) = \emptyset, \end{equation*} then the sequence of matrix pencils {$A_k-\lambda B_k$}, for any integer $k\geq 1$, can be generated by Algorithm~\ref{abalg}; more generally, all sequences of matrix pencils $\{A_k-\lambda B_k\}$ generated by the iterations \eqref{signal2} with the initial matrix pencil $A_1-\lambda B_1$ proceed without breakdown if { \begin{align}\label{cond} S_\infty\cap\lambda(A_1,B_{1})=\emptyset. \end{align}} \end{Theorem} \begin{Corollary}\label{cor:dif} For any positive integers $i,j$ and $k$, we have $A_k-B_k=A_1-B_1$, that is, $A_i-A_j=B_i-B_j$, provided that {$S_{\max\{i,j,k\}}\cap\lambda(A_1,B_{1})=\emptyset$}. \end{Corollary} \begin{proof} The proof is by induction on $k$. When $k=1$, the result is evident. Suppose we have proved this corollary for $k =\ell$. Then, by the induction hypothesis, \begin{align*} A_{\ell+1}-B_{\ell+1}&=A_\ell-B_\ell\Delta_{1,\ell}A_\ell-B_\ell\Delta_{1,\ell}B_1\\ &=A_\ell-B_\ell\Delta_{1,\ell}(A_\ell+B_1)=A_\ell-B_\ell=A_1-B_1. \end{align*} This completes the proof. \end{proof} { From Corollary~\ref{cor:dif}, at each step $B_k$ can be obtained as $B_k=A_k+B_1-A_1$. We conclude that the cost of Algorithm~\ref{abalg} per iteration is $\frac{14}{3}n^3$ flops. This is because the computation is primarily determined by the product of two $n\times n$ matrices, Gaussian elimination with partial pivoting, and the solution of $n$ lower triangular systems and $n$ upper triangular systems. Hence, the flop count comprises a PLU factorization $(\mbox{cost}:\frac{2}{3}n^3$ flops) and two matrix multiplications $(\mbox{cost}:4n^3$ flops). Here, we ignore any $O(n^2)$ operation counts and the memory costs.
We notice that the computational cost of the QZ algorithm is about $46n^3$ flops (when the right eigenvectors are desired). On the other hand, it follows from Theorem~\ref{thm:well-defined} that Algorithm~\ref{abalg} is well-defined once~\eqref{cond} is satisfied. Here, we use Gaussian elimination with partial pivoting, which is known to perform well and usually eliminates numerical instability in practice~\cite{WICS:WICS164}, to compute the matrix inverse so that the iteration will not terminate prematurely. For the error analysis and the numerical stability of Algorithm~\ref{abalg}, the reader is referred to~\cite{HUANG20091452} for a similar discussion. } We remark that Corollary~\ref{cor:dif} also implies that $\lim\limits_{k\rightarrow\infty} A_k$ exists if and only if $\lim\limits_{k\rightarrow\infty} B_k$ exists. Note that in~\eqref{signal2}, the iterations of {the matrix pencils $A_k-\lambda B_k$, for $k\geq 1$, are defined relative to the initial pencil $A_1-\lambda B_1$}. We would like to derive a more general iterative method, which is easily accessible from any initial pencil {$A_i-\lambda B_i$}. To this end, we shall first introduce the well-known Sherman Morrison Woodbury formula (SMWF). \begin{Lemma}\cite{Bernstein2005}\label{Schur} Let $A$ and $B$ be two arbitrary matrices of size $n$, and let $X$ and $Y$ be two $n\times n$ nonsingular matrices. Assume that $Y^{-1}\pm BX^{-1}A$ is nonsingular. Then, $X\pm A Y B$ is invertible and \[ (X\pm A Y B)^{-1}=X^{-1}\mp X^{-1}A(Y^{-1}\pm B X^{-1}A)^{-1}BX^{-1}. \] \end{Lemma} This lemma gives a useful method to prove the following result. \begin{Theorem}\label{thm:trans1} Let the assumption~\eqref{cond} hold and let {$\{A_k-\lambda B_k\}$} be the sequence of matrix pencils obtained by~\eqref{signal2} with initial {$A_1-\lambda B_1$}.
Then, \begin{subequations}\label{tran2} \begin{align} A_{i+j}&=A_{i}(A_{i}+B_{j})^{-1}A_{j},\\ B_{i+j}&=B_{j}(A_{i}+B_{j})^{-1}B_{i}, \end{align} where $i$ and $j$ are any two positive integers. \end{subequations} \end{Theorem} \begin{proof} This proof is divided into two parts. We first fix $j=1$ and show that the statement~\eqref{tran2} is true for any positive integer $i$. We prove by induction on $i$. When $i =1$, the statement~\eqref{tran2} is definitely true from the definition of $A_2$ and $B_2$. Suppose~\eqref{tran2} is true for $i=s$. It follows from Lemma~\ref{Schur} that \begin{align*} \Delta_{1,s+1} &=(A_1+B_s-A_s\Delta_{s,1}B_s)^{-1}\\ &=\Delta_{1,s}+\Delta_{1,s}A_{s}(A_s+B_1-B_s\Delta_{1,s}A_s)^{-1}B_{s}\Delta_{1,s}\\ &=\Delta_{1,s}+\Delta_{1,s}A_{s}\Delta_{1+s,1}B_{s}\Delta_{1,s},\\ \Delta_{1+s,1} &=(A_s-B_s\Delta_{1,s}A_s +B_1)^{-1}\\ &=\Delta_{s,1}+\Delta_{s,1}B_{s}(A_1+B_s-A_s\Delta_{s,1}B_s)^{-1}A_{s}\Delta_{s,1}\\ &=\Delta_{s,1}+\Delta_{s,1}B_s\Delta_{1,s+1}A_s\Delta_{s,1}. \end{align*} Thus, we have \begin{align*} A_{(s+1)+1}&=A_{1+(s+1)}=A_{s+1}-B_{s+1}\Delta_{1,s+1}A_{s+1}\\ &=A_1-B_{1}\left[\Delta_{s,1}+\Delta_{s,1}B_s\Delta_{1,s+1}A_s\Delta_{s,1}\right]A_{1}\\ &=A_{1}-B_{1}\Delta_{s+1,1}A_{1}=A_{s+1}\Delta_{s+1,1}A_{1},\\ B_{(s+1)+1}&=B_{1+(s+1)}=B_{1}-A_{1}\Delta_{1,s+1}B_{1}\\ &=B_{1}-A_{1}\left[\Delta_{1,s}+\Delta_{1,s}A_{s}\Delta_{s+1,1}B_{s}\Delta_{1,s}\right]B_{1}\\ &=B_{s+1}-A_{s+1}\Delta_{s+1,1}B_{s+1}=B_{1}\Delta_{s+1,1}B_{s+1}, \end{align*} which completes the proof of the first part. Now suppose that \eqref{tran2} is true for $j=s$ and any $i$. 
In particular, \begin{align*} \Delta_{i,s+1} &=(A_i+B_s-A_s\Delta_{s,1}B_s)^{-1}\\ &=\Delta_{i,s}+\Delta_{i,s}A_{s}(A_s+B_1-B_s\Delta_{i,s}A_s)^{-1}B_{s}\Delta_{i,s}\\ &=\Delta_{i,s}+\Delta_{i,s}A_{s}\Delta_{i+s,1}B_{s}\Delta_{i,s},\\ \Delta_{i+s,1} &= (A_s-B_s\Delta_{i,s}A_s +B_1)^{-1}\\ &=\Delta_{s,1}+\Delta_{s,1}B_{s}(A_i+B_s-A_s\Delta_{s,1}B_s)^{-1}A_{s}\Delta_{s,1}\\ &=\Delta_{s,1}+\Delta_{s,1}B_s\Delta_{i,s+1}A_s\Delta_{s,1}. \end{align*} This implies \begin{align*} A_{i+(s+1)}&=A_{(i+s)+1}=A_{1}-B_{1}\Delta_{i+s,1}A_{1}\\ &=A_1-B_{1}\left[\Delta_{s,1}+\Delta_{s,1}B_s\Delta_{i,s+1}A_s \Delta_{s,1} \right]A_{1}\\ &=A_{s+1}-B_{s+1}\Delta_{i,s+1}A_{s+1}=A_{i}\Delta_{i,s+1}A_{s+1},\\ B_{i+(s+1)}&=B_{(i+s)+1}=B_{i+s}-A_{i+s}\Delta_{i+s,1}B_{i+s}\\ &=B_{i}-A_{i}\left[\Delta_{i,s}+\Delta_{i,s}A_{s}\Delta_{i+s,1}B_{s}\Delta_{i,s}\right]B_{i}\\ &=B_{i}-A_{i}\Delta_{i,s+1}B_{i}=B_{s+1}\Delta_{i,s+1}B_{i}, \end{align*} which completes the proof of the theorem. \end{proof} {Two things are required to be noted. First, Theorem~\ref{thm:trans1} implies that the iterative sequence $\{A_k-\lambda B_k\}$ can be formulated explicitly from any two matrix pencils $A_i-\lambda B_i$ and $A_j-\lambda B_j$, where $i+j = k$. The formula also gives rise to { a discrete-type flow} and can be used to accelerate the iterations given in Algorithm~\ref{abalg}. Second, it follows} from Corollary~\ref{cor:dif} and Theorem~\ref{thm:trans1} that $A_k=A_{k-1}\Delta_{1,k-1} A_{1}=A_{1}\Delta_{1,k-1} A_{k-1}=A_{k-1}\Delta_{k-1,1} A_{1}=A_{1}\Delta_{k-1,1} A_{k-1}$. 
It shows that the iterations $A_k$ and $B_k$, beyond the particular form~\eqref{signal2}, admit the following four equivalent forms when started from the same initial matrix pencil: { \begin{equation*} \begin{array}{|c|c|} \hline \multirow{2}{*}{1.} & A_k^{(1)}=A_1^{(1)}(A_1^{(1)}+B_{k-1}^{(1)})^{-1}A_{k-1}^{(1)},\\ & B_k^{(1)}=B_{k-1}^{(1)}(A_1^{(1)}+B_{k-1}^{(1)})^{-1}B_1^{(1)}; \\ \hline \multirow{2}{*}{2.} & A_k^{(2)}=A_1^{(2)}(B_1^{(2)}+A_{k-1}^{(2)})^{-1}A_{k-1}^{(2)},\\ & B_k^{(2)}=B_{k-1}^{(2)}(A_1^{(2)}+B_{k-1}^{(2)})^{-1}B_1^{(2)}; \\ \hline \multirow{2}{*}{3.} & A_k^{(3)}=A_1^{(3)}(A_1^{(3)}+B_{k-1}^{(3)})^{-1}A_{k-1}^{(3)},\\ & B_k^{(3)}=B_{k-1}^{(3)}(B_1^{(3)}+A_{k-1}^{(3)})^{-1}B_1^{(3)}; \\ \hline \multirow{2}{*}{4.} & A_k^{(4)}=A_1^{(4)}(B_1^{(4)}+A_{k-1}^{(4)})^{-1}A_{k-1}^{(4)},\\ & B_k^{(4)}=B_{k-1}^{(4)}(B_1^{(4)}+A_{k-1}^{(4)})^{-1}B_1^{(4)}.\\ \hline \end{array} \end{equation*} } The next theorem shows how the eigeninformation is transferred during the iterative process. \begin{Theorem}\label{thm:trans2} Let $A_1-\lambda B_1$ be a regular matrix pencil, and let $\{A_k-\lambda B_k\}$ be the sequence of matrices generated by Algorithm~\ref{abalg}. Suppose that the condition~\eqref{cond} holds and $A_1U=B_1U\Lambda$. Then, \begin{itemize} \item [\emph{(a)}] $A_1 U= B_k U \sum\limits_{j=1}^k \Lambda^j$. \item [\emph{(b)}] $A_k U= B_k U \Lambda^k$. In particular, if $1\not\in\lambda(\Lambda)$, then \begin{equation}\label{eq:AB1} A_k U = (B_1-A_1)U\Lambda^k(I_n-\Lambda^k)^{-1}. \end{equation} \item [\emph{(c)}] $A_i U \sum\limits_{j=1}^i \Lambda^j= B_k U \Lambda^i \sum\limits_{j=1}^k \Lambda^j$, for any two positive integers $i$ and $k$. \end{itemize} \end{Theorem} \begin{proof} Clearly, (a) is true for $k=1$. Suppose that the statement is true for a positive integer $k=s$; that is, \begin{align*} A_1 U= B_s U \sum\limits_{j=1}^s \Lambda^j.
\end{align*} We notice that \begin{align*} A_1U-B_s\Delta_{1,s}A_1U &=(A_1+B_s)\Delta_{1,s}A_1U-B_s\Delta_{1,s}A_1U\\ &= B_s\Delta_{1,s}A_1U\sum\limits_{j=1}^s \Lambda^j =B_s\Delta_{1,s}B_1 U \sum\limits_{j=2}^{s+1} \Lambda^j, \end{align*} so that \begin{align*} &A_1U=B_s\Delta_{1,s}B_1U\Lambda+B_s\Delta_{1,s}B_1 U \sum\limits_{j=2}^{s+1} \Lambda^j=B_{s+1}U\sum\limits_{j=1}^{s+1} \Lambda^j. \end{align*} The result of the first part of (b) has been given in our introduction. We thus omit the proof here. Since \begin{equation*} A_k U = B_k U \Lambda^k=(A_k+B_1-A_1)U \Lambda^k= A_k U \Lambda^k + (B_1-A_1) U\Lambda^k, \end{equation*} we see that~\eqref{eq:AB1} holds, while $1\not\in \lambda(\Lambda)$. Here, the second equality follows from Corollary~\ref{cor:dif}. To prove (c), we first show that for any positive integer $i$, \begin{equation*} A_1 U \Lambda^i = A_i U \sum\limits_{j=1}^i \Lambda^j. \end{equation*} By Theorem~\ref{thm:trans1}, since $A_i = A_1 - B_1 \Delta_{i-1,1} A_1$ and $B_i = B_{1} \Delta_{i-1,1} B_{i-1}$, we have \begin{equation*} (A_1- A_i) U = (B_1 \Delta_{i-1,1} A_1) U = B_1 \Delta_{i-1,1} B_{i-1} U\sum_{j=1}^{i-1}\Lambda^j= B_i U\sum_{j=1}^{i-1}\Lambda^j. \end{equation*} Or, equivalently, \begin{equation*} A_1 U \Lambda^i = A_i U \Lambda^i+B_i U \Lambda^i \sum_{j=1}^{i-1}\Lambda^j = A_i U \sum_{j=1}^{i}\Lambda^j, \end{equation*} since $A_i U= B_i U \Lambda^i$. Second, from (a), we have already proved (c) for $i=1$ and a given positive integer $k$. Assume (c) is true for $i = s$; that is, \[ A_s U \sum_{j=1}^s \Lambda^j = B_k U\Lambda^s \sum_{j=1}^k \Lambda^j. \] Then \begin{align*} A_{s+1}U\sum_{j=1}^{s+1}\Lambda^j &= A_1 U \Lambda^{s+1}= (A_s U \sum_{j=1}^s \Lambda^j) \Lambda= B_k U \Lambda^{s+1} \sum_{j=1}^{k} \Lambda^j. \end{align*} \end{proof} \section{Modified AB-Algorithm } Let {$\{A_k-\lambda B_k\}$} be the sequence of matrices generated by Algorithm~\ref{abalg}. 
Before we move on, we emphasize that the matrix pencil $A_k-\lambda B_k$ depends only on the sum $i+j = k$ of the subscripts; that is, the generation of the sequence $\{A_k-\lambda B_k\}$ is independent of the particular choice of $A_i$, $A_j$, $B_i$ and $B_j$. To take full advantage of this invariance, we apply Theorem~\ref{thm:trans1} to design accelerated iterations with convergence of any desired order, as follows. \begin{Algorithm}\label{aa2} \emph{(Modified AB-Algorithm)} \begin{enumerate} \item Given a positive integer $r>1$, a tolerance $\tau>0$, and a positive integer $kmax$, let $(\widehat{A}_1,\widehat{B}_1)=(A_1,B_1)$; \item For $k= 2,\ldots,$ iterate \begin{align*} \widehat{A}_{k}& =A_{k-1}^{(r-1)}(A_{k-1}^{(r-1)}+\widehat{B}_{k-1})^{-1}\widehat{A}_{k-1},\\ \widehat{B}_{k}& =\widehat{B}_{k-1}(A_{k-1}^{(r-1)}+\widehat{B}_{k-1})^{-1}B_{k-1}^{(r-1)}, \end{align*} until $\mbox{dist}(\mbox{Null}(\widehat{A}_{k-1}),\mbox{Null}(\widehat{A}_{k}))< \tau$ or $k> kmax$, where $(A_{k-1}^{(r-1)},B_{k-1}^{(r-1)})$ is defined in step 3. \item For $\ell=1,\ldots,r-2$, iterate \begin{align*} A_{k-1}^{(\ell+1)}& =A_{k-1}^{(\ell)}(A_{k-1}^{(\ell)}+\widehat{B}_{k-1})^{-1}\widehat{A}_{k-1},\\ B_{k-1}^{(\ell+1)}& =\widehat{B}_{k-1}(A_{k-1}^{(\ell)}+\widehat{B}_{k-1})^{-1}B_{k-1}^{(\ell)}, \end{align*} with $(A_{k-1}^{(1)},B_{k-1}^{(1)})=(\widehat{A}_{k-1},\widehat{B}_{k-1})$. \end{enumerate} \end{Algorithm} For clarity, one point should be emphasized here. The AB-algorithm has been developed to obtain the stable deflating subspace of the generalized eigenvalue problem $A_1U = B_1 U\Lambda$.
However, {the sequence $\{A_k U\}$ generated by Algorithm~\ref{abalg} converges to $0$ only r-linearly, provided the spectral radius of $\Lambda$ is less than $1$ and the sequence $\{B_k\}$ is uniformly bounded.} From Algorithm~\ref{aa2} it follows that \begin{subequations} \begin{eqnarray}\label{eq:conv} &&{A}_{k-1}^{(\ell+1)} U = {B}_{k-1}^{(\ell+1)} U \Lambda^{(\ell+1)r^{k-2}},\\ &&\widehat{A}_{k} U =\widehat{B}_{k} U \Lambda^{r^{k-1}}, \end{eqnarray} \end{subequations} for $k = 2,\ldots$, and $\ell = 1,\ldots, r-2$, and $(\widehat{A}_{k},\widehat{B}_{k}) = (A_{r^{k-1}},B_{r^{k-1}})$. It follows from Theorem~\ref{thm:trans2} that \begin{align*} \|\widehat{A}_{k} U\|\leq \dfrac{\|(B_1-A_1)U\|}{1-\|\Lambda\|^{r^{k-1}}} \|\Lambda\|^{r^{k-1}}, \end{align*} where $\|.\|$ is a matrix induced norm such that $\|\Lambda\|<1$. Thus the sequence $\{\widehat{A}_{k} U\}$ converges to $0$ with r-order $r$. For a full account of the definition of rates of convergence, the reader is referred to~\cite{Kelly1995}. Note that given two initial $n\times n$ matrices $A_1$ and $B_1$, the overall cost of the modified AB-algorithm per iteration is $\frac{14(r-1)}{3} n^3$ flops; the computational cost thus increases as $r$ increases. Theoretically, Algorithm~\ref{aa2} provides a sequence with convergence of order $r$ that approximates the stable subspace of $A-\lambda B$. Numerically, if $\rho(\Lambda)$ is not sufficiently close to $1$, choosing $r=2$ will be fast enough. \section{Application of the AB-Algorithm for Solving the Matrix Square Root} { We note that only recently has the modified AB-algorithm with $r=2$ been adapted specifically for solving a class of Sylvester matrix equations~\cite{Lin20152171} and the palindromic generalized eigenvalue problem~\cite{LI20112269}. In this section, we show that the AB-algorithm provides an alternative way to compute the matrix square root.
In particular, the speed of convergence of the AB-algorithm can be of any desired order. As mentioned before, numerical methods for solving the matrix square root are numerous; a comparison of numerical performance among the different methods is worthy of investigation and is in progress.} From~\eqref{eq:conv} we see that the sequence $\{\widehat{A}_{i} U\}$ converges to $0$ with r-order $r$. In this section we use this accelerated technique to solve the quadratic matrix equation defined in~\eqref{eq:quad}, i.e., to find the principal square root $\sqrt{S}$ of a matrix $S$ with $\lambda(S) \subset \mathbb{C}^+$. To this end, we relate~\eqref{eq:quad} to the generalized eigenvalue problem { \begin{align}\label{eq:space} A \begin{bmatrix} I_{n} \\ \sqrt{S} \end{bmatrix}= B \begin{bmatrix} I_{n} \\ \sqrt{S} \end{bmatrix} \sqrt{S}, \end{align} } where $A=\begin{bmatrix} 0 & I_{n} \\ S & 0 \end{bmatrix}$ and $B=I_{2n}$. Since $\lambda(S) \subseteq \mathbb{C}^+$, there is no guarantee that the AB-algorithm will converge. To remedy this situation, the matrix $\sqrt{S}$ in~\eqref{eq:space} must be transformed. One way is to apply the M\"{o}bius transformation { \begin{equation*} \mathcal{C}_{\sqrt{S}}(\gamma I_n) = (\gamma I_n-\sqrt{S})(\gamma I_n+\sqrt{S})^{-1}, \end{equation*} } where $\gamma > 0$ and {$-1\not\in\lambda(\gamma I_n-\lambda \sqrt{S})$}, i.e., $-1$ is not an eigenvalue of the matrix pencil {$\gamma I_n-\lambda\sqrt{S}$}; that is, we recast~\eqref{eq:space} as the equation \begin{equation}\label{eq:genAB} A_1\begin{bmatrix} I_{n} \\ \sqrt{S} \end{bmatrix}=B_1\begin{bmatrix} I_{n} \\ \sqrt{S} \end{bmatrix} \mathcal{C}_{\sqrt{S}}(\gamma I_n) , \end{equation} where $A_1=\gamma B-A$ and $B_1=\gamma B+A$. Observe that $\rho(\mathcal{C}_{\sqrt{S}}(\gamma I_n)) < 1$ since $\lambda(S)\subseteq \mathbb{C}^+$.
Upon using the AB-algorithm, it can be easily checked that for any integer $k\geq1$, $A_k$ and $B_k$ can be expressed as \begin{equation}\label{eq:abk} A_k=\begin{bmatrix} Q_k & -I_n\\ -S & Q_k \end{bmatrix},\,B_k=\begin{bmatrix} Q_k & I_n\\ S & Q_k \end{bmatrix},\end{equation} respectively, where the sequence $\{Q_k\}$ satisfies $Q_i Q_j=Q_j Q_i$, for any integers $i, j > 0$, and the iteration \begin{align}\label{fix} Q_{k+1}=(\gamma Q_k+S)(\gamma I_n+Q_k)^{-1} \end{align} with $Q_1=\gamma I_n$. Note that once $Q_k = \sqrt{S}$ for some $k$, it follows that $Q_\ell = \sqrt{S}$ for all $\ell \geq k$. Specifically, let $\mathcal{C}_\gamma(\lambda)=\frac{\gamma-\lambda}{\lambda+\gamma}$ be the M\"{o}bius transformation with a parameter $\gamma \neq 0$ and $\lambda \neq -\gamma$. Then, the inverse scalar M\"{o}bius transformation can be written as \begin{equation*} \mathcal{C}_\gamma^{-1}(\lambda)=\gamma\frac{1-\lambda}{1+\lambda},\quad \lambda\neq -1. \end{equation*} Let $\lambda=e^{\frac{2j \pi i}{n}}\in S_n \backslash \{-1\}$, where $1\leq j < n$. It follows that the square of $a:=\mathcal{C}_\gamma^{-1}(\lambda)$ is a negative real number, since \begin{equation}\label{eq:reala} a^2 = \gamma^2 (\frac{1-e^{\frac{2j\pi i}{n}}}{1+e^{\frac{2j\pi i}{n}}})^2 = -\gamma^2\tan^2(\frac{j\pi}{n})<0. \end{equation} From~\eqref{eq:reala} and Theorem~\ref{thm:well-defined}, it follows that the AB-algorithm will terminate prematurely only if $\lambda(S)\subseteq\mathbb{C}^-$; that is, once $\lambda(S) \subseteq \mathbb{C}^+ $, or even, $\lambda(S) \subseteq \mathbb{C}^+ \cup\{0\}$, the sequence of matrix pencils $\{A_k-\lambda B_k\}$, initiated by~\eqref{eq:genAB}, is well-defined. With an eye on the structure of the matrix pencil $A_k-\lambda B_k$, we look for an accelerated iteration induced by the form of $\widehat{A}_{k}-\lambda \widehat{B}_{k}$ in Algorithm~\ref{aa2}.
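For readers who wish to experiment, the fixed-point iteration~\eqref{fix} can be sketched numerically as follows; the test matrix $S$, the shift $\gamma=1$, and the stopping tolerance are illustrative choices, not prescribed by the text.

```python
import numpy as np

def sqrt_fixed_point(S, gamma=1.0, tol=1e-12, kmax=200):
    """Iterate Q_{k+1} = (gamma*Q_k + S)(gamma*I + Q_k)^{-1} with Q_1 = gamma*I."""
    n = S.shape[0]
    Q = gamma * np.eye(n)
    for _ in range(kmax):
        Q_next = (gamma * Q + S) @ np.linalg.inv(gamma * np.eye(n) + Q)
        if np.linalg.norm(Q_next - Q) < tol:
            return Q_next
        Q = Q_next
    return Q

S = np.array([[4.0, 0.0], [0.0, 9.0]])
Q = sqrt_fixed_point(S)
# Q approximates the principal square root diag(2, 3)
```

Consistent with the analysis below, the convergence here is only linear, with factor $\rho(\mathcal{C}_{\sqrt{S}}(\gamma I_n))$.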
{ \begin{Algorithm} \label{square} {\emph{(Iteration for solving the matrix square root)}} \begin{enumerate} \item Given a positive integer $r>1$, a tolerance $\tau>0$, and a positive integer $kmax$, let $\widehat{Q}_1={Q}_1=\gamma I_n$; \item For $k=2,\ldots$, iterate \begin{align*} \widehat{Q}_{k}&:={(S+\widehat{Q}_{k-1}{Q}_{k-1}^{(r-1)})}{(\widehat{Q}_{k-1}+{Q}_{k-1}^{(r-1)})^{-1}}, \end{align*} until $\|\widehat{Q}_{k}-\sqrt{S}\|< \tau$ or $k>kmax$, where ${Q}_{k-1}^{(r-1)}$ is defined in step 3. \item For $\ell=1,\cdots,r-2$, iterate \begin{align*} {Q}_{k-1}^{(\ell+1)}&:={(S+\widehat{Q}_{k-1}{Q}_{k-1}^{(\ell)})}{(\widehat{Q}_{k-1}+{Q}_{k-1}^{(\ell)})^{-1}}, \end{align*} with ${Q}_{k-1}^{(1)}=\widehat{Q}_{k-1}$. \end{enumerate} \end{Algorithm} } Note that $\widehat{Q}_{k} = Q_{r^{k-1}}$ for $k \geq 1$ and, assuming the iterative sequences exist, we immediately have the following iterative formulae. We omit the proof, since the result follows straightforwardly by induction. \begin{Theorem} Assume that the sequences generated by Algorithm~\ref{square} can be constructed without breakdown. Then, we have the following two iterative formulae. \begin{enumerate} \item When $r$ is even, let $q = \frac{r}{2}$. We have \begin{align}\label{eq:whatQ1} \widehat{Q}_{k+1}=(\sum\limits_{j=0}^{q} {r \choose 2j}\widehat{Q}_k^{r-2j} S^j)(\sum\limits_{j=0}^{q-1} {r \choose 2j+1}\widehat{Q}_k^{r-2j-1} S^{j})^{-1}. \end{align} \item When $r$ is odd, let $q = \frac{r-1}{2}$.
We have \begin{align}\label{eq:whatQ2} \widehat{Q}_{k+1}=(\sum\limits_{j=0}^{q} {r \choose 2j}\widehat{Q}_k^{r-2j} S^j)(\sum\limits_{j=0}^{q} {r \choose 2j+1}\widehat{Q}_k^{r-2j-1} S^{j})^{-1}, \end{align} \end{enumerate} { where ${n \choose k}$ denotes the binomial coefficient, i.e., the number of $k$-combinations of a set of $n$ elements.} \end{Theorem} We notice that if $S$ is a nonsingular matrix, then~\eqref{eq:whatQ1} and~\eqref{eq:whatQ2} can be simply expressed by the following rule: \begin{align*} \widehat{Q}_{k+1}=V_m U_m^{-1}, \end{align*} where { \begin{eqnarray*} &&V_m =\sum\limits_{j=0}^{[\frac{m}{2}]} {m \choose 2j}\widehat{Q}_k^{m-2j} S^j=\dfrac{1}{2}((\widehat{Q}_k+\sqrt{S})^m+(\widehat{Q}_k-\sqrt{S})^m), \\ &&U_m=\sum\limits_{j=0}^{[\frac{m-1}{2}]} {m \choose 2j+1}\widehat{Q}_k^{m-2j-1} S^j=\dfrac{(\sqrt{S})^{-1}}{2}((\widehat{Q}_k+\sqrt{S})^m-(\widehat{Q}_k-\sqrt{S})^m). \end{eqnarray*}} Importantly, under the nonsingularity assumption, a strong result relating the sequences $\{ \mathcal{C}_{\sqrt{S}}(Q_i) \}$ and $\{ \mathcal{C}_{\sqrt{S}}(\widehat{Q}_i) \}$ holds. { \begin{Lemma}\label{lem:qk} Suppose that $S$ is nonsingular. Let $i$, $j$, and $k$ be any positive integers, and $1\leq i,j \leq k$. Then the following properties hold. \begin{enumerate} \item For the sequence~$\{{Q}_k\}$, we have \par\noindent \begin{itemize} \item[a.] ${Q}_{k}=\sqrt{S}(I_n+\mathcal{C}_{\sqrt{S}}({Q}_{1})^{{k}})(I_n-\mathcal{C}_{\sqrt{S}}({Q}_{1})^{{k}})^{-1}$, \item[b.] {$\mathcal{C}_{\sqrt{S}}({Q}_{i})^{j}=\mathcal{C}_{\sqrt{S}}({Q}_{j})^{i}.$} \end{itemize} \item For the sequence~$\{\widehat{Q}_k\}$, we have \par\noindent \begin{itemize} \item[a.] $\widehat{Q}_{k} =\sqrt{S}(I_n+\mathcal{C}_{\sqrt{S}}(\widehat{Q}_{1})^{r^{k-1}})(I_n-\mathcal{C}_{\sqrt{S}}(\widehat{Q}_{1})^{r^{k-1}})^{-1}$, \item[b.]
{$\mathcal{C}_{\sqrt{S}}(\widehat{Q}_{i})^{r^{k-i}}=\mathcal{C}_{\sqrt{S}}(\widehat{Q}_{j})^{r^{k-j}}$.} \end{itemize} \end{enumerate} \end{Lemma} } \begin{proof} It follows from Theorem~\ref{thm:trans2} and $Q_1=\gamma I_n$ that \begin{align}\label{eq:abk2} { {A}_k U(I_n-\mathcal{C}_{\sqrt{S}}({Q}_1)^{k})=(B_1-A_1)U\mathcal{C}_{\sqrt{S}}({Q}_1)^{k}.} \end{align} Then,~\eqref{eq:abk} and~\eqref{eq:abk2} yield \begin{align}\label{eq:qk} ({Q}_k-\sqrt{S})(I_n-\mathcal{C}_{\sqrt{S}}({Q}_1)^{k})&=2 \sqrt{S} \mathcal{C}_{\sqrt{S}}({Q}_1)^{k}. \end{align} By adding $2 \sqrt{S}(I_n-\mathcal{C}_{\sqrt{S}}({Q}_1)^{k})$ to both sides of~\eqref{eq:qk}, we have \begin{align}\label{eq:qk2} ({Q}_k+\sqrt{S})(I_n-\mathcal{C}_{\sqrt{S}}({Q}_1)^{k})&=2 \sqrt{S}. \end{align} From~\eqref{eq:qk} and~\eqref{eq:qk2} together, it must be that { \begin{align*} {Q}_k(I_n-\mathcal{C}_{\sqrt{S}}({Q}_1)^{k}) =\sqrt{S} (I_n +\mathcal{C}_{\sqrt{S}}({Q}_1)^{k}). \end{align*} } Since $S$ is nonsingular, it follows that $1\not\in\lambda(\mathcal{C}_{\sqrt{S}}({Q}_1))$ so that { \begin{align*} {Q}_k= \sqrt{S} (I_n +\mathcal{C}_{\sqrt{S}}({Q}_1)^{k})(I_n-\mathcal{C}_{\sqrt{S}}({Q}_1)^{k}) ^{-1}, \end{align*} } which is equivalent to { \begin{align*} \mathcal{C}_{\sqrt{S}}({Q}_1)^{k} = \mathcal{C}_{\sqrt{S}}({Q}_k). \end{align*} } Since $k$ is an arbitrary positive integer, we have { \begin{align*} (\mathcal{C}_{\sqrt{S}}({Q}_i))^{j} = \mathcal{C}_{\sqrt{S}}({Q}_1)^{ij} = (\mathcal{C}_{\sqrt{S}}({Q}_j))^{i}, \end{align*} } for $1\leq i, j \leq k$. Also, by Theorem~\ref{thm:trans2} and Algorithm~\ref{aa2}, with $\widehat{Q}_1 = \gamma I_n$, we have \begin{align*} \widehat{A}_k U=\widehat{B}_k U (\mathcal{C}_{\sqrt{S}}(\widehat{Q}_1))^{r^{k-1}}, \end{align*} which completes the proof of parts 2a and 2b by applying the same strategy as above. \end{proof} { Indeed, the iteration in Algorithm~\ref{square} converges to $\sqrt{S}$ with q-order $r$.
} \begin{Theorem}\label{thm:qua} Suppose that $S$ is a nonsingular matrix. Let $\|.\|$ be a matrix induced norm such that $\|\mathcal{C}_{\sqrt{S}}(\widehat{Q}_1)\|<1$. Then, { \begin{equation*} \|\widehat{Q}_{k+1}-\sqrt{S}\| \leq \mu \|\widehat{Q}_k-\sqrt{S}\|^r, \end{equation*} } for some $\mu > 0$; that is, $\widehat{Q}_k\rightarrow \sqrt{S}$ with q-order $r$. \end{Theorem} \begin{proof} Using~\eqref{eq:qk} and $\widehat{Q}_k = {Q}_{r^{k-1}}$, we see that \begin{align*} \widehat{Q}_k-\sqrt{S}&=2\sqrt{S}\mathcal{C}_{\sqrt{S}}({Q}_1)^{r^{k-1}}(I_n-\mathcal{C}_{\sqrt{S}}({Q}_1)^{r^{k-1}})^{-1}. \end{align*} Without loss of generality we assume that $\widehat{Q}_{k}\neq \sqrt{S}$ for all $k$; otherwise, $\widehat{Q}_{\ell}= \sqrt{S}$ for all $\ell\geq k$. It follows that \begin{align*} \dfrac{\|\widehat{Q}_{k+1}-\sqrt{S}\|}{\|\widehat{Q}_k-\sqrt{S}\|^r}&\leq \dfrac{\|2\sqrt{S}\mathcal{C}_{\sqrt{S}}(Q_1)^{r^{k}}\| \|I_n-\mathcal{C}_{\sqrt{S}}(Q_1)^{r^{k-1}}\|^r}{\|2\sqrt{S}\mathcal{C}_{\sqrt{S}}(Q_1)^{r^{k-1}}\|^r(1-\|\mathcal{C}_{\sqrt{S}}(Q_1)^{r^{k}}\|)}\\ &\leq \dfrac{2\|\sqrt{S}\|\|\mathcal{C}_{\sqrt{S}}(Q_1)^{r^{k-1}}\|^r \|I_n-\mathcal{C}_{\sqrt{S}}(Q_1)^{r^{k-1}}\|^r }{2^{r}\frac{\|\mathcal{C}_{\sqrt{S}}(Q_1)^{r^{k-1}}\|^r}{\|(\sqrt{S})^{-1}\|^r} (1-\|\mathcal{C}_{\sqrt{S}}(Q_1)^{r^{k}}\|) } \\ &\leq2^{1-r}\|\sqrt{S}\| \|(\sqrt{S})^{-1}\|^r\sup\limits_{k\geq 1}\dfrac{(1+\|\mathcal{C}_{\sqrt{S}}(Q_1)\|^{r^{k-1}})^r}{1-\|\mathcal{C}_{\sqrt{S}}(Q_1)\|^{r^{k}}}\\ &\leq \mu:= \dfrac{2\|\sqrt{S} \|}{ 1-\|\mathcal{C}_{\sqrt{S}}(Q_1)\|^{r}}{ \|(\sqrt{S})^{-1}\|^r} <\infty. \end{align*} \end{proof} Note that for $r=2$ the iteration $\widehat{Q}_{k+1}=(\widehat{Q}_{k}^2+S)(2\widehat{Q}_{k})^{-1}=\frac{1}{2}(\widehat{Q}_{k}+S\widehat{Q}_{k}^{-1})$ with initial $\widehat{Q}_{1}=\gamma I_n$, which is equivalent to Newton's method for solving the matrix square root~\cite{Highambook08}, converges quadratically to $\sqrt{S}$.
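As a quick numerical check of this equivalence, the $r=2$ iteration can be run directly in its Newton form. The test matrix below (built so that its principal square root is known exactly) and all parameters are illustrative assumptions; note also that this commuting form of Newton's method can be numerically unstable for ill-conditioned $S$.

```python
import numpy as np

def sqrt_newton(S, gamma=1.0, tol=1e-13, kmax=60):
    """r = 2 acceleration: Q_{k+1} = (Q_k + S Q_k^{-1}) / 2 with Q_1 = gamma*I."""
    Q = gamma * np.eye(S.shape[0])
    for _ in range(kmax):
        Q_next = 0.5 * (Q + S @ np.linalg.inv(Q))
        if np.linalg.norm(Q_next - Q) < tol * np.linalg.norm(Q_next):
            return Q_next
        Q = Q_next
    return Q

M = np.array([[3.0, 2.0], [4.0, 5.0]])  # eigenvalues 7 and 1, both positive
S = M @ M                               # so M is the principal square root of S
Q = sqrt_newton(S)
# Q approximates M; quadratic convergence needs only a handful of iterations
```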
For $r=3$ we have $\widehat{Q}_{k+1}=(\widehat{Q}_{k}^3+3\widehat{Q}_{k}S)(3\widehat{Q}_{k}^2+S)^{-1}$, which is a cubically convergent iteration to $\sqrt{S}$ with initial $\widehat{Q}_{1}=\gamma I_n$. Similarly, by Algorithm~\ref{square} we can make $\widehat{Q}_{k+1}$ converge to $\sqrt{S}$ $q$-superlinearly with any desired $q$-order $r$. However, without the accelerated technique, we can show in the following that the original sequence $\{Q_k\}$ converges to $\sqrt{S}$ only q-linearly. \begin{Theorem} Suppose that $S$ is a nonsingular matrix. Let $\|.\|$ be a matrix induced norm such that $\|\mathcal{C}_{\sqrt{S}}(\widehat{Q}_1)\|<1$. Then, { \begin{equation*} \|{Q}_{k+1}-\sqrt{S}\| \leq \mu \|{Q}_k-\sqrt{S}\|, \end{equation*} } for some $\mu \in (0,1)$ and sufficiently large $k$; that is, ${Q}_k\rightarrow \sqrt{S}$ q-linearly with $q$-factor $\mu$. \end{Theorem} \begin{proof} From~\eqref{eq:qk}, we have \begin{align*} {Q}_k-\sqrt{S}&=2\sqrt{S}\mathcal{C}_{\sqrt{S}}({Q}_1)^{k}(I_n-\mathcal{C}_{\sqrt{S}}({Q}_1)^{k})^{-1}. \end{align*} Thus, \begin{align*} \|{Q}_{k+1}-\sqrt{S}\|&= { \|({Q}_k-\sqrt{S})(I_n-\mathcal{C}_{\sqrt{S}}(Q_1)^k)} \mathcal{C}_{\sqrt{S}}(Q_1)(I_n-\mathcal{C}_{\sqrt{S}}(Q_1)^{k+1})^{-1}\| \\ &\leq \|\mathcal{C}_{\sqrt{S}}(Q_1)\| \dfrac{1+ \|\mathcal{C}_{\sqrt{S}}(Q_1)\|^{k}}{1-\|\mathcal{C}_{\sqrt{S}}(Q_1)\|^{k+1}} \|Q_k-\sqrt{S}\|. \end{align*} Since $\dfrac{1+ \|\mathcal{C}_{\sqrt{S}}(Q_1)\|^{k}}{1-\|\mathcal{C}_{\sqrt{S}}(Q_1)\|^{k+1}} \rightarrow 1$ as $k\rightarrow \infty$, there exists a constant $k_0$ such that \begin{equation*} \|\mathcal{C}_{\sqrt{S}}(Q_1)\| \dfrac{1+ \|\mathcal{C}_{\sqrt{S}}(Q_1)\|^{k}}{1-\|\mathcal{C}_{\sqrt{S}}(Q_1)\|^{k+1}} < 1 \end{equation*} for $k \geq k_0$. Let $\mu = \dfrac{1+ \|\mathcal{C}_{\sqrt{S}}(Q_1)\|^{k_0}}{1-\|\mathcal{C}_{\sqrt{S}}(Q_1)\|^{k_0+1}}\|\mathcal{C}_{\sqrt{S}}(Q_1)\|$, which completes the proof.
\end{proof} { In the next result, we show that the AB-algorithm still converges when applied to the square root of a singular matrix, a case that is hard to handle in general. See~\cite{Meini2004} for further discussion. } \begin{Corollary}\label{DAconvthm2} Suppose that $S$ is a singular matrix having $\lambda(S) \subseteq \mathbb{C}^+ \cup\{0\}$ and the null eigenvalues are semisimple. Then, \begin{enumerate} \item ${Q}_k \rightarrow \sqrt{S}$ {sublinearly}, \item $\widehat{Q}_k \rightarrow \sqrt{S}$ q-linearly with q-factor $\frac{1}{r}$. \end{enumerate} \end{Corollary} \begin{proof} Let $P$ be an invertible matrix such that $\mbox{diag}(0_p, J_{n-p}) = P \sqrt{S} P^{-1}$ is the Jordan canonical form of $\sqrt{S}$ with $\lambda(J_{n-p})\subset\mathbb{C}^+$. Using substitution and Lemma~\ref{lem:qk}, we have { \begin{eqnarray*} P {Q}_k P^{-1}= \mbox{diag}({Q}_k^{(11)}, {Q}_k^{(22)}), \end{eqnarray*} } where ${Q}_k^{(11)}$ is derived directly from~\eqref{fix} and ${Q}_k^{(22)}$ follows from Lemma~\ref{lem:qk}, so that \begin{align*} {Q}_k^{(11)} &=\dfrac{\gamma I_p}{k},\\ {Q}_k^{(22)} &=J_{n-p}(I_{n-p}+\mathcal{C}_{J_{n-p}}(\gamma I_{n-p})^{k})(I_{n-p}-\mathcal{C}_{J_{n-p}}(\gamma I_{n-p})^{k})^{-1}. \end{align*} Since $ \{{Q}_k^{(11)}\}$ converges to zero sublinearly and $ \{{Q}_k^{(22)}\}$ converges to $J_{n-p}$ q-linearly, $\{{Q}_k\}$ converges to $\sqrt{S}$ sublinearly. In a similar way, we have { \begin{eqnarray*} P \widehat{Q}_k P^{-1}= \mbox{diag}(\widehat{Q}_k^{(11)}, \widehat{Q}_k^{(22)}), \end{eqnarray*} } where \begin{align*} \widehat{Q}_k^{(11)} &=\dfrac{\gamma I_p}{r^{k-1}},\\ \widehat{Q}_k^{(22)} &= J_{n-p}(I_{n-p}+\mathcal{C}_{J_{n-p}}(\gamma I_{n-p})^{r^{k-1}})(I_{n-p}-\mathcal{C}_{J_{n-p}}(\gamma I_{n-p})^{r^{k-1}})^{-1}. \end{align*} Since $ \{\widehat{Q}_k^{(11)}\}$ converges to zero q-linearly with q-factor $\frac{1}{r}$, it follows that $\{\widehat{Q}_k\}$ converges to $\sqrt{S}$ q-linearly with q-factor $\frac{1}{r}$.
\end{proof} \begin{Remark} When the spectral radius of $\sqrt{S}$ in \eqref{eq:space} is not less than $1$, we can apply the M\"{o}bius transformation to shift the eigenvalues of $S$ so that $\rho(\mathcal{C}_{\sqrt{S}}(\gamma I_n))$ in~\eqref{eq:genAB} is less than $1$. We would like to choose $\gamma$ so that the optimal convergence speed in Algorithm~\ref{square} is achieved. To this end, we take $\gamma$ equal to the optimal solution $\gamma_0$ of the following min-max problem \begin{align}\label{eq:gamma} \gamma_0:=\mathop{\arg\min}\limits_{\gamma>0}\max\limits_{\lambda\in \lambda(S)} |\frac{\sqrt{\lambda}-\gamma}{\sqrt{\lambda}+\gamma}|. \end{align} This min-max problem is also known as the ADI min-max problem~\cite{MR3076884}. Numerical approaches for solving~\eqref{eq:gamma} are numerous; we will not discuss them further here. The reader is referred to~\cite{MR2494951,MR3076884,MR1742324}, for example. \end{Remark} \section{Concluding remarks} {By computing left null spaces, the contribution of this work is twofold. Theoretically, it provides an iterative method, embedded with a discrete-type flow property, to solve for the stable deflating subspace of a matrix pencil $A-\lambda B$. This property then allows us to accelerate the iterative method. Numerically, we have discussed the behavior of the AB-algorithm, including its low computational cost and high numerical reliability. Since the solution of the matrix square root can be interpreted in terms of the stable deflating subspace of a matrix pencil, our method can be used to compute the matrix square root. We show that the speed of convergence has q-order $r$; moreover, in the singular case, where $S$ has no negative real eigenvalues and its null eigenvalues are semisimple, the iteration still succeeds with a linear rate of convergence.
Particularly, since Algorithm~\ref{square} corresponds to Newton iteration with $r=2$ and the initial guess $\gamma I_n$, the limiting accuracy should not be worse than $\kappa(\sqrt{S})\epsilon$, where $\kappa(\sqrt{S})$ is the condition number of $\sqrt{S}$ and $\epsilon$ is machine precision \cite[Table 6.2 on p.147]{Highambook08}. Numerically, it is known that a stable variant of Newton iteration, the IN iteration \cite[(6.20) on p.142]{Highambook08}, has been proposed with limiting accuracy equal to $\epsilon$. Whether the AB-algorithm achieves this accuracy for $r=2$, or better for $r> 2$, is worthy of further investigation. Numerically, modified AB-algorithms for $r=2$ were also developed for solving generalized continuous/discrete-time algebraic Riccati equations~\cite{LI20112269} and the $\star$-Sylvester matrix equation~\cite{Lin20152171}. How to apply the accelerated techniques of this work to other matrix equations (for example, the matrix $p$th root) is left for future work. } \section*{Acknowledgment} This research work is partially supported by the Ministry of Science and Technology and the National Center for Theoretical Sciences in Taiwan. \end{document}
arXiv
Reduction (computability theory)

In computability theory, many reducibility relations (also called reductions, reducibilities, and notions of reducibility) are studied. They are motivated by the question: given sets $A$ and $B$ of natural numbers, is it possible to effectively convert a method for deciding membership in $B$ into a method for deciding membership in $A$? If the answer to this question is affirmative then $A$ is said to be reducible to $B$.

The study of reducibility notions is motivated by the study of decision problems. For many notions of reducibility, if any noncomputable set is reducible to a set $A$ then $A$ must also be noncomputable. This gives a powerful technique for proving that many sets are noncomputable.

Reducibility relations

A reducibility relation is a binary relation on sets of natural numbers that is
• Reflexive: Every set is reducible to itself.
• Transitive: If a set $A$ is reducible to a set $B$ and $B$ is reducible to a set $C$ then $A$ is reducible to $C$.
These two properties imply that reducibility is a preorder on the powerset of the natural numbers. Not all preorders are studied as reducibility notions, however. The notions studied in computability theory have the informal property that $A$ is reducible to $B$ if and only if any (possibly noneffective) decision procedure for $B$ can be effectively converted to a decision procedure for $A$. The different reducibility relations vary in the methods they permit such a conversion process to use.

Degrees of a reducibility relation

Every reducibility relation (in fact, every preorder) induces an equivalence relation on the powerset of the natural numbers in which two sets are equivalent if and only if each one is reducible to the other. In computability theory, these equivalence classes are called the degrees of the reducibility relation. For example, the Turing degrees are the equivalence classes of sets of naturals induced by Turing reducibility.
The degrees of any reducibility relation are partially ordered by the relation in the following manner. Let $\leq $ be a reducibility relation and let $C$ and $D$ be two of its degrees. Then $C\leq D$ if and only if there is a set $A$ in $C$ and a set $B$ in $D$ such that $A\leq B$. This is equivalent to the property that for every set $A$ in $C$ and every set $B$ in $D$, $A\leq B$, because any two sets in $C$ are equivalent and any two sets in $D$ are equivalent. It is common, as shown here, to use boldface notation to denote degrees.

Turing reducibility

Main article: Turing reduction

The most fundamental reducibility notion is Turing reducibility. A set $A$ of natural numbers is Turing reducible to a set $B$ if and only if there is an oracle Turing machine that, when run with $B$ as its oracle set, will compute the indicator function (characteristic function) of $A$. Equivalently, $A$ is Turing reducible to $B$ if and only if there is an algorithm for computing the indicator function for $A$ provided that the algorithm is provided with a means to correctly answer questions of the form "Is $n$ in $B$?".

Turing reducibility serves as a dividing line for other reducibility notions because, according to the Church-Turing thesis, it is the most general reducibility relation that is effective. Reducibility relations that imply Turing reducibility have come to be known as strong reducibilities, while those that are implied by Turing reducibility are weak reducibilities. Equivalently, a strong reducibility relation is one whose degrees form a finer equivalence relation than the Turing degrees, while a weak reducibility relation is one whose degrees form a coarser equivalence relation than Turing equivalence.

Reductions stronger than Turing reducibility

The strong reducibilities include
• One-one reducibility: $A$ is one-one reducible to $B$ if there is a computable one-to-one function $f$ with $A(x)=B(f(x))$ for all $x$.
• Many-one reducibility: $A$ is many-one reducible to $B$ if there is a computable function $f$ with $A(x)=B(f(x))$ for all $x$.
• Truth-table reducibility: $A$ is truth-table reducible to $B$ if $A$ is Turing reducible to $B$ via a single (oracle) Turing machine which produces a total function relative to every oracle.
• Weak truth-table reducibility: $A$ is weak truth-table reducible to $B$ if there is a Turing reduction from $A$ to $B$ and a computable function $f$ which bounds the use. Whenever $A$ is truth-table reducible to $B$, $A$ is also weak truth-table reducible to $B$, since one can construct a computable bound on the use by considering the maximum use over the tree of all oracles, which will exist if the reduction is total on all oracles.
• Positive reducibility: $A$ is positive reducible to $B$ if and only if $A$ is truth-table reducible to $B$ in a way that one can compute for every $x$ a formula consisting of atoms of the form $B(0),B(1),\ldots$ such that these atoms are combined by and's and or's, where the and of $a$ and $b$ is 1 if $a=1$ and $b=1$, and so on.
• Enumeration reducibility: Similar to positive reducibility, relating to the effective procedure of enumerability from $A$ to $B$.
• Disjunctive reducibility: Similar to positive reducibility with the additional constraint that only or's are permitted.
• Conjunctive reducibility: Similar to positive reducibility with the additional constraint that only and's are permitted.
• Linear reducibility: Similar to positive reducibility but with the constraint that all atoms of the form $B(n)$ are combined by exclusive or's. In other words, $A$ is linear reducible to $B$ if and only if a computable function computes for each $x$ a finite set $F(x)$ given as an explicit list of numbers such that $x\in A$ if and only if $F(x)$ contains an odd number of elements of $B$.
Many of these were introduced by Post (1944).
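To make the strongest of these notions concrete, here is a toy many-one reduction sketched in Python; the sets and the reducing function are invented for illustration and do not come from Post's work. Deciding membership in $A$ is converted into a single membership query to $B$.

```python
def in_B(n):
    """Decision procedure for B, here the set of odd natural numbers."""
    return n % 2 == 1

def f(x):
    """Total computable reducing function witnessing A <=_m B."""
    return x + 1

def in_A(x):
    """A, here the set of even natural numbers: x is in A iff f(x) is in B."""
    return in_B(f(x))

# The reduction turns any decision procedure for B into one for A:
assert all(in_A(x) == (x % 2 == 0) for x in range(100))
```

Since this particular $f$ is also one-to-one, the same function witnesses one-one reducibility; a truth-table reduction would instead be allowed to combine several queries $B(n_1),\ldots,B(n_k)$ through a fixed Boolean formula.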
Post was searching for a non-computable, computably enumerable set which the halting problem could not be Turing reduced to. As he could not construct such a set in 1944, he instead worked on the analogous problems for the various reducibilities that he introduced. These reducibilities have since been the subject of much research, and many relationships between them are known.

Bounded reducibilities

A bounded form of each of the above strong reducibilities can be defined. The most famous of these is bounded truth-table reduction, but there are also bounded Turing, bounded weak truth-table, and others. These first three are the most common ones and they are based on the number of queries. For example, a set $A$ is bounded truth-table reducible to $B$ if and only if the Turing machine $M$ computing $A$ relative to $B$ computes a list of up to $n$ numbers, queries $B$ on these numbers and then terminates for all possible oracle answers; the value $n$ is a constant independent of $x$. The difference between bounded weak truth-table and bounded Turing reduction is that in the first case, the up to $n$ queries have to be made at the same time while in the second case, the queries can be made one after the other. For that reason, there are cases where $A$ is bounded Turing reducible to $B$ but not weak truth-table reducible to $B$.

Strong reductions in computational complexity

Main article: Reduction (complexity)

The strong reductions listed above restrict the manner in which oracle information can be accessed by a decision procedure but do not otherwise limit the computational resources available. Thus if a set $A$ is decidable then $A$ is reducible to any set $B$ under any of the strong reducibility relations listed above, even if $A$ is not polynomial-time or exponential-time decidable.
This is acceptable in the study of computability theory, which is interested in theoretical computability, but it is not reasonable for computational complexity theory, which studies which sets can be decided under certain asymptotical resource bounds.

The most common reducibility in computational complexity theory is polynomial-time reducibility; a set $A$ is polynomial-time reducible to a set $B$ if there is a polynomial-time function $f$ such that for every $n$, $n$ is in $A$ if and only if $f(n)$ is in $B$. This reducibility is, essentially, a resource-bounded version of many-one reducibility. Other resource-bounded reducibilities are used in other contexts of computational complexity theory where other resource bounds are of interest.

Reductions weaker than Turing reducibility

Although Turing reducibility is the most general reducibility that is effective, weaker reducibility relations are commonly studied. These reducibilities are related to the relative definability of sets over arithmetic or set theory. They include:
• Arithmetical reducibility: A set $A$ is arithmetical in a set $B$ if $A$ is definable over the standard model of Peano arithmetic with an extra predicate for $B$. Equivalently, according to Post's theorem, $A$ is arithmetical in $B$ if and only if $A$ is Turing reducible to $B^{(n)}$, the $n$th Turing jump of $B$, for some natural number $n$. The arithmetical hierarchy gives a finer classification of arithmetical reducibility.
• Hyperarithmetical reducibility: A set $A$ is hyperarithmetical in a set $B$ if $A$ is $\Delta _{1}^{1}$ definable (see analytical hierarchy) over the standard model of Peano arithmetic with a predicate for $B$. Equivalently, $A$ is hyperarithmetical in $B$ if and only if $A$ is Turing reducible to $B^{(\alpha )}$, the $\alpha$th Turing jump of $B$, for some $B$-recursive ordinal $\alpha$.
• Relative constructibility: A set $A$ is relatively constructible from a set $B$ if $A$ is in $L(B)$, the smallest transitive model of ZFC set theory containing $B$ and all the ordinals.

References

• K. Ambos-Spies and P. Fejer, 2006. "Degrees of Unsolvability." Unpublished preprint.
• P. Odifreddi, 1989. Classical Recursion Theory, North-Holland. ISBN 0-444-87295-7
• P. Odifreddi, 1999. Classical Recursion Theory, Volume II, Elsevier. ISBN 0-444-50205-X
• E. Post, 1944. "Recursively enumerable sets of positive integers and their decision problems", Bulletin of the American Mathematical Society, volume 50, pages 284–316.
• H. Rogers, Jr., 1967. The Theory of Recursive Functions and Effective Computability, second edition 1987, MIT Press. ISBN 0-262-68052-1 (paperback), ISBN 0-07-053522-1
• G. Sacks, 1990. Higher Recursion Theory, Springer-Verlag. ISBN 3-540-19305-7

Internet resources

• Stanford Encyclopedia of Philosophy: Recursive Functions
\begin{document} \begin{figure}\label{logo_nano} \end{figure} \begin{center} \Large{Master thesis report\\(arXiv research extract only)} \\\Large{\bfseries{Quantum Technology: Single-Photon Source}} \large{Author: \\ Vincent Camus\\ Master Nanotech 2011-2012 - Grenoble INP Phelma\\ [email protected] Supervisors: \\Prof. Kae Nemoto\\NII - 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430 Japan\\ [email protected] \\Simon Devitt\\NII - 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430 Japan\\ [email protected] Defended in September 2012\\ Declassified in February 2017} \end{center} \begin{figure}\label{logos} \end{figure} \section*{Abstract} This report is a synthesis of my master thesis internship at the National Institute of Informatics (NII) in Tokyo, Japan, which took place during the summer of 2012. I worked in the Quantum Information Science Theory (QIST) group under the supervision of Prof. Kae Nemoto and Dr. Simon Devitt. This group works on theoretical and experimental implementations of quantum information science.\\ The aim of my project was to study and improve quantum optical systems. I first studied different fields and systems of quantum information science. Then I focused my research on single-photon sources, entangled photon sources and interferometric photonic switches. Finally, I found some strategies to design an efficient and optimized single-photon source that could be built with today's technologies. This report describes in detail the design, created and optimized during this work, of a single-photon source based on time and space multiplexing of Spontaneous Parametric Downconversion (SPDC) sources. \section*{Résumé (French, translated)} This report is a synthesis of my final master internship at the National Institute of Informatics in Tokyo, which took place during the summer of 2012. I worked in the quantum information research group (QIST) under the supervision of Prof. Kae Nemoto and Dr. Simon Devitt. 
This group works on the theoretical and practical realization of quantum systems and modules.\\ The goal of my project was to study and to seek to improve quantum optical systems. I first studied several fields and systems related to quantum information theory. I then focused on single-photon sources, entangled photon sources and quantum interferometry. Finally, I found a new strategy to improve and optimize single-photon sources implementable with the technologies of today and tomorrow. This report describes in detail the operation and the components of a source based on spatial and temporal multiplexing of SPDC (Spontaneous Parametric Downconversion) sources. \section*{Riassunto (Italian, translated)} This document is a synthesis of my master internship, carried out at the National Institute of Informatics in Tokyo in the summer of 2012. I worked in the quantum information research group (QIST) under the supervision of Prof. Kae Nemoto and Dr. Simon Devitt. This group works on the realization of theoretical and practical modules for quantum systems.\\ The objective of my project was to study and improve quantum optical systems. In the first part of my work, I studied various fields and systems related to quantum information science. I then focused my attention on single-photon sources, entangled photons and, at the same time, quantum interferometry. Finally, I found new strategies to improve and optimize single-photon sources which could be implemented with current technologies. This document describes in detail the operation and the components of sources based on space-time multiplexing of SPDC (Spontaneous Parametric Downconversion) sources. 
\tableofcontents \chapter{Spontaneous Parametric Downconversion single-photon source} \section{Single-photon source} An ideal single-photon source\cite{sps} is a source that can emit at any arbitrary and defined time (\emph{on-demand}), in which the probability of single-photon emission is 100\% and the probability of multiple-photon emission is 0\%, whose emitted photons are indistinguishable, and whose repetition rate is arbitrarily fast. Real sources deviate from these ideal characteristics.\\ Single-photon sources can be imagined from various systems: {quantum dots\cite{spsqd1,spsqd2,spsqd3}}, a {single atom\cite{spssa}}, ion\cite{spssi} or {molecule\cite{spssm}}. Another possibility is to use a probabilistic system based on photon Spontaneous Parametric Downconversion\cite{spsspdc1,spsspdc2} (SPDC). \section{Single-photon detector} Single-photon detectors based on single-photon avalanche photodiodes, photomultiplier tubes or superconducting nanowires are typically used as \emph{non photon-number resolving} detectors. They can only distinguish zero photons from more than zero photons, and they are the most commonly used single-photon detectors.\\ Others are \emph{photon-number resolving} detectors. While detecting a single photon is a very difficult task, discriminating the number of incident photons is even more difficult. One direct approach is simply to break the detector active area into distinct pixels and split the idler signal onto these pixel areas. Another approach is based on superconducting tunnel junctions, but their complexity is high, their efficiency is not very high, their resolution and speed are limited, and their working temperature is very low ($<$~0.4~K). \section{SPDC theory} The response of an optical material to incoming fields is the combination of the responses of the large number of atoms making up that material. 
In the Maxwell equations, a simple linear material response is described by an electric polarization given by the following equation: \begin{equation} \textbf{P} = \epsilon_0 \chi \textbf{E} \end{equation} where $\textbf{E}$ is the original electric field, $\textbf{P}$ is the material polarization and $\chi$ the susceptibility tensor of the material (for isotropic media like a gas or a glass, it is a scalar).\\ In a nonlinear material, the response of the medium to an exciting field is nonlinear: \begin{equation} \textbf{P} = \epsilon_0 ( \chi \textbf{E} + \chi^{(2)} \textbf{E}^2 +\chi^{(3)} \textbf{E}^3 + \dots ) \end{equation} where the $\chi^{(n)}$ are the higher-order susceptibility tensors of the material response. The nonlinear response is usually small. The second-order term $\chi^{(2)}$ is due to a lack of inversion symmetry in the material crystal; thus amorphous materials like glasses or polymers have $\chi^{(2)} = 0$.\\ For a monochromatic laser, the electric field is of the form: \begin{equation} \textbf{E}(x,t) = \textbf{E}_0 e^{i ( k x - \omega t)} \end{equation} The second-order nonlinear susceptibility results in a polarization component which oscillates at twice the original frequency: \begin{equation} \textbf{P}^{(2)}(t) \propto \chi^{(2)} \textbf{E}^2 \propto \chi^{(2)} \textbf{E}_0^2 e^{2 i ( k x - \omega t)} \end{equation} This polarization component at a new frequency can be considered as a source in the Maxwell equations, and will propagate through the material according to the dispersion relation.\\ \section{SPDC source} In a SPDC source, a pump laser illuminates a material with a $\chi^{(2)}$ optical nonlinearity, creating two photons, called Signal and Idler, under the constraints of momentum and energy conservation (Fig.~\ref{spdc_theory}). 
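The appearance of the doubled frequency can be made explicit by writing the real field as a cosine — a standard textbook step added here for clarity, not part of the original derivation:

```latex
\begin{align}
E(x,t) &= E_0 \cos(kx - \omega t),\\
P^{(2)} \propto \chi^{(2)} E^2
  &= \chi^{(2)}\,\frac{E_0^2}{2}\left[\,1 + \cos\!\bigl(2(kx - \omega t)\bigr)\right].
\end{align}
```

The constant term describes optical rectification, while the term at $2\omega$ radiates the second harmonic. SPDC runs the same $\chi^{(2)}$ coupling in reverse, splitting one pump photon into Signal and Idler photons whose frequencies sum to the pump frequency.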
The detection of one trigger photon indicates, or heralds, the presence of its twin, which can be sent into the photon-source output.\\ \begin{figure} \caption{Spontaneous Parametric Downconversion of one photon into two photons.} \label{spdc_theory} \end{figure} It is a probabilistic process. Because of this nature, in addition to the probability of generating one pair of photons, there is also a finite probability of generating more than one pair via higher-order emissions.\\ Using a pulsed laser to pump the nonlinear crystals, the probability of generating $n$ pairs of photons per pulse follows a Poisson distribution: \begin{equation} P_n = \frac {N^n e^{-N}} {n!}, \quad n \in \mathbb{N} \end{equation} where $N$ is the mean number of pairs; it depends on the pump power and on the parameters of the crystals.\\ Such behavior decreases the quality of the single-photon source. Since \emph{photon-number resolving} detectors are too complicated to integrate and too expensive, \emph{non photon-number resolving} detectors are used to herald the Idler photon. SPDC heralded sources must therefore be held to average pair-production levels much less than one, to avoid producing multiple pairs which would result in the heralded channel containing more than one photon. \section{Optical switch} The best optical switch for single photons consists of a Mach-Zehnder Interferometer\cite{mzi_thesis} (MZI): a 3~dB splitter and a 3~dB combiner connected by two interferometer arms. By changing the effective refractive index of one of the arms, the phase difference at the input of the combiner can be changed, such that the light switches from one output port to the other. The structure is fabrication tolerant and polarisation insensitive. A schematic layout of the MZI-based switch is depicted in Fig.~\ref{mzi_switch}. 
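The Poisson pair statistics above can be checked numerically. The short sketch below is a hedged illustration (not code from this thesis): it evaluates $P_n = N^n e^{-N}/n!$ and the resulting empty-, single- and multi-pair probabilities per pump pulse.

```python
import math

def p_n(n, N):
    """Poisson probability of generating exactly n photon pairs in one pump pulse."""
    return N**n * math.exp(-N) / math.factorial(n)

def pair_statistics(N):
    """Probabilities of zero, exactly one, and more than one pair per pulse."""
    p0 = p_n(0, N)
    p1 = p_n(1, N)
    return p0, p1, 1.0 - p0 - p1

# Illustrative low mean pair number, as required for heralded SPDC sources.
p0, p1, p_multi = pair_statistics(0.1)
```

For $N = 0.1$, roughly 90\% of pulses are empty, about 9\% herald a single pair, and under 0.5\% produce multiple pairs — which is why heralded SPDC sources are kept at low mean pair number, and why multiplexing is needed to recover a useful rate.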
\begin{figure}\label{mzi_switch} \end{figure} \section{SPDC based source} Most SPDC based sources work with temporal or spatial multiplexing, and thus emit at the pump frequency or at a (small) integer fraction of it. In both cases, the multiplexing cannot control more than one photon, and the Poissonian statistics of SPDC lead to a waste of photons or a stronger multi-photon rate\cite{aa1,aa2,aa3}. Some SPDC based sources also try to use photon storage in different kinds of loops or cavity systems, without meeting great success\cite{aa4,aa5}. \chapter{Design of SPDC based single-photon source with output rate up-conversion multiplexed architecture} \section{Generalities} \subsection{Abstract} We describe a single-photon source based on an array of spontaneous parametric downconversion (SPDC) modules multiplexed by an integrated structure composed of electro-optic switches and delay wave-guides.\\ The suggested optical device increases the output photon rate compared to the SPDC module laser pump rate, and enhances stability, controllability and efficiency.\\ This device transforms the spatial multiplexing into a temporal multiplexing with temporary photon storage. The photons are stored in different delay lines and temporally sequentialized. This photon train can then be used for high-frequency emission and for compensating photon shortfalls in the following pump cycles.\\ The implementation consists of a \textbf{crossed multiplexed architecture using electro-optic switches}. The architecture first drives the downconverter array outputs into several layers of \textbf{delay lines} organized in a \textbf{shared and by-passable binary delay register}. The architecture then includes a \textbf{routing tree} driving all the photons into a single output. \subsection{Aim} I aimed at creating a high-efficiency device, eliminating the waste of photons and the stability problems due to the probabilistic behavior of spontaneous downconversion. 
I also wanted a different way of thinking: the tools are known and the direction is different, but the result is there, and better. It takes a logical and optimistic look toward advanced research on quantum computers.\\ The main advantages of up-converting the output rate with respect to the pumping rate are the reduction of the length of the integrated optics and the possibility of overcoming heralding-detector limitations. While progress on integrated MZI interferometers, integrated wave-guides or photonic crystals is rapid, single-photon detector technology, number-resolving or non-number-resolving, is still far from the same level of development.\\ \textbf{This suggested device offers the possibility to build an advanced integrated optical device, moreover a functional element of a future quantum computer, whose design gives the most of its flexibility to its detectors and the most of its potential to its optical control.} \section{General scheme} \begin{figure} \caption{General scheme of a single-photon source with 11 SPDC crystals and $0T$ to $7T$ delay register. (red) Pumping laser. (green) Downconverters (pair generator, heralding detector and heralding decision delay). (black) Crossed by-passable delay register. (blue) Routing tree.} \label{design_exemple11} \end{figure} \section{Detailed description of elements} \subsection{Downconverters} The pump laser illuminates an array of $\chi^{(2)}$ non-linear crystals, creating both idler and signal photons. The idler photons are sent to heralding detectors connected to a processor unit (FPGA). 
The signal photons are sent into the integrated architecture through delay fibers, giving the processor unit time to make routing decisions.\\ The delay fibers have slightly different lengths to correct the entering time differences among emitted photons due to the growing distance between the pump and each crystal of the array.\\ The number of SPDC modules can be chosen and optimized for the symmetry of the linear and then logarithmic routing into the single output. For a photon-driving demonstration, single photons and multi-photons are driven in the same way through standard MZI switches, so a small number of downconverters (below twenty) can be chosen with an exaggerated photon rate. For a demonstration of an efficient single-photon source, the number of downconverters must be higher (from a few tens to more than a hundred). \subsection{Crossed by-passable binary delay register} \begin{figure} \caption{$0$-$7T$ delay register paths. Delays: red: $2T$, green: $3T$, blue: $4T$, yellow: $5T$.} \label{design_exemple11_use} \end{figure} The crossed by-passable delay register (CBDR) structure is the heart of the invention. It distributes the photons in a train of period $T$. The CBDR can achieve photon delays between $0T$ and $(2^N-1)T$ with $N$ delay steps. Thus, a 3-step register can delay photons from $0$ to $7T$ (Fig.~\ref{design_exemple11_use}).\\ In a given architecture, due to the asymmetry of the structure, the control of boundary photons is more or less limited, as shown in Table~\ref{table_1}. This is not an important problem if we choose good actuation rules (see \emph{fast-to-slow top-to-down driving} in the next sections) or if we simply remove one or two boundary downconverters. 
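The register's delay selection is simply the binary decomposition of the target delay: step $k$ either adds $2^k\,T$ or is bypassed. A minimal sketch (the function names and structure are illustrative assumptions of this report's description, not the thesis implementation):

```python
def switch_settings(delay, n_steps):
    """For each register step k (delay 2^k * T), decide whether the photon
    is routed through the delay line (True) or bypasses it (False)."""
    if not 0 <= delay < 2**n_steps:
        raise ValueError("delay must lie in [0, (2^N - 1)] units of T")
    return [bool((delay >> k) & 1) for k in range(n_steps)]

def total_delay(settings):
    """Delay (in units of T) accumulated by a given set of switch settings."""
    return sum(2**k for k, on in enumerate(settings) if on)
```

With a 3-step register (steps $1T$, $2T$, $4T$), a $5T$ delay uses the $1T$ and $4T$ lines and bypasses the $2T$ line, matching the $0$-$7T$ register example of the text.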
\begin{table}[ht] \centering \begin{tabular}{c c c c} \hline\hline SPDC N\textordmasculine & Inaccessible states & SPDC N\textordmasculine & Inaccessible states \\ [0.4ex] \hline 1 & 1T, 2T, 3T, 4T, 5T, 6T, 7T & N-2 & 0T\\ 2 & 3T, 5T, 6T, 7T & N-1& 0T, 1T, 2T, 4T\\ 3 & 7T & N & 0T, 1T, 2T, 3T, 4T, 5T, 6T\\ [0.4ex] \hline \end{tabular} \caption{Limitation of control of boundary downconverters for a $0$-$7T$ register structure.} \label{table_1} \end{table} \subsection{Routing tree} \label{routing_tree} This structure simply drives the different delayed photons into a single output. The number of switches increases logarithmically with the number of delayed single-photon inputs. The tree also allows the possible excess of photons to be driven out of the structure.\\ For large structures, and because of the low single-photon generation probability per downconverter (usually lower than 10\%), different trees can be imagined. The routing may be distributed before, after or in between the steps of the crossed binary delay register (Fig.~\ref{designs}). In this case, a more complex driving of photons needs to be established, either to avoid photon bunching (with more opportunities to drive out concurrent or excess photons) or to accommodate it (using MZI switches in different configurations). \begin{figure} \caption{Alternative structures. (left) Routing distributed in between delaying gives a more efficient integration on chip. (center) Routing before and after delaying is optimized for very low pumping power and high-quality single photons. (right) Reduction of one layer of switches by parity trade-off.} \label{designs} \end{figure} \section{Detailed functioning} \subsection{Output rate up-conversion} The CBDR structure and the routing tree build a $T$-periodic photon train at each pumping pulse on the downconverter array. The downconverter rate and the number of downconverters are matched in order to obtain enough photons to emit a train of several photons at the pumping frequency. 
The photon source frequency is thus a multiple of the pump frequency. With a $0$-$7T$ by-passable binary delay register, the longest photon train contains 8 photons; in this example, the maximal frequency multiplication is 8.\\ The maximal photon source frequency is fixed by the CBDR implementation to the first delay step $T$. The device frequency can be divided by skipping register steps and driving some photons out of the routing tree.\\ By a trade-off on stability or multi-photon emission rate, the pumping period can be changed and lowered by several $T$; the frequency multiplication is then lower. \subsection{Stability enhancement} Multiplying the frequency requires a constant number of photons per pump pulse. Probabilistically, this requires on average many more photons than the frequency multiple. That is achievable by further increasing the downconverter number or the pump power, but it leads to multi-photon emissions and a large waste of photons.\\ Keeping the source frequency below the maximum frequency multiple makes possible a \emph{temporary photon storage} of the excess photons in the longest delay lines. Photons can be delayed up to the next pump cycle and so will be emitted in the first place in this new pumping period. A little photon excess can be used during the next pulse, and a small lack can be corrected by the storage from the previous pulse (Fig.~\ref{table_delay}). This 1-cycle-ahead memory allows the SPDC array average rate to be reduced almost down to the frequency conversion multiple. \begin{figure} \caption{Photon source frequency 6 times higher than the pump rate. A 1- or 2-photon excess (red) is stored and used at the beginning of the next cycle before the newly produced photons.} \label{table_delay} \end{figure} With the $0$-$7T$ register example and by lowering the frequency multiplication to 6, two photons can be stored for the next pulse cycle. 
A 1- or 2-photon excess can be used during the next pulse, and a 1- or 2-photon lack can be corrected by the storage from the previous pulse (Fig.~\ref{table_delay}). In these conditions, the probability of a lack of photons at the output drops significantly, since it only occurs when several successive low photon numbers are issued from the downconverter array.\\ The size of the downconverter array is a factor of stability in addition to increasing the total photon rate. Stability can be improved further by adding a feedback that increases the pumping power for the next cycle to refill the temporary photon storage. \subsection{Switching decisions} The best strategy to operate the suggested device and avoid CBDR limitation problems or photon bunching problems is to adopt a \emph{fast-to-slow top-to-down} driving approach. It consists of allocating fastest photons (low delays) to slowest photons (high delays) from the top to the bottom of the CBDR structure.\\ In this operating mode, we reduce the need for inaccessible paths due to CBDR limitations and avoid any bunching of photons. Photons may share the same path, but at different times (Fig.~\ref{design_exemple11_use}).\\ With this approach, during a pumping cycle, each switch of the CBDR structure and of the routing tree needs to be permuted \emph{at most once}, and all in the same direction (one can figure out the permutations in the CBDR and in the routing-tree switches needed to drive the photon train out in Fig.~\ref{design_exemple11_use}). The use of photon storage for stability enhancement slightly changes this property by delaying some permutations to the next pump cycle. \section{Advantages to prior art} \begin{itemize} \item Frequency up-conversion of the laser pumping rate to the photon source output. \item Tolerates low detector speed. \item High stability of the output photon rate. \item Possibility of feedback on the next cycle due to the efficient photon storage. 
\item High controllability at fixed source frequency (pump power, pump frequency, power or frequency feedback on the next pump pulse). \item High control of the trade-off between frequency increase, multi-photon rate and output stability. \item Little waste of photons. \item Efficient use of multiplexing and a customizable structure (SPDC number and CBDR level). \item Easy top-to-down switching programming. \end{itemize} \chapter{Detailed analysis of the SPDC based single-photon source with output rate up-conversion multiplexed architecture} \section{Case analysis of emission probability} The following table shows various configurations of the SPDC based single-photon source adapted to different error rates. \begin{figure} \caption{Optimization of minimum photon emission probabilities and total error rate as a function of the number of sources, frequency multiple and mean photon number per module. First row of the 19-source tables with 18 sources considered for border limitation. Method: case analysis.} \label{prob_graph} \end{figure} \section{Simulation} A simulation tool has been realized with Maple. It assumes negligible photon loss inside the structure, considering a perfect waveguide and switching network. The following graphs show the absolute rates of lack error and multi-photon error for different configurations and parameters of the SPDC based single-photon source.\\ Each point has been simulated over 100,000 working cycles (to which the frequency multiplication factor must be applied to obtain the total photon number in the ideal case) using the standard random tool of Maple. The simulator is also able to manage different power feedback responses (\emph{boost} and \emph{turbo-boost} modes). \subsection{Working parameter variation} Figures~\ref{100a4power}~and~\ref{100a2_7} display the working characteristics of a large structure of 100 SPDC modules used without boosting, with variation of the frequency multiplication factor and of the pump power. 
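The Maple tool itself is not reproduced here, but the cycle logic it describes — draw the number of firing modules per pulse, fill a photon train of fixed length, carry at most the storable excess into the next cycle — can be re-sketched in a few lines. All parameter names and the simplified firing model are assumptions of this sketch, not the thesis code:

```python
import math
import random

def lack_rate(n_modules, N, multiple, storage, cycles=100_000, seed=1):
    """Estimate the fraction of empty output slots, with 1-cycle-ahead storage.

    n_modules: number of SPDC modules; N: mean pair number per module per pulse;
    multiple: photons emitted per pump cycle (frequency multiplication);
    storage: maximum number of photons carried over to the next cycle."""
    rng = random.Random(seed)
    p_fire = 1 - math.exp(-N)  # Poisson probability a module heralds >= 1 pair
    stored, lack = 0, 0
    for _ in range(cycles):
        produced = sum(rng.random() < p_fire for _ in range(n_modules))
        available = stored + produced
        emitted = min(available, multiple)
        lack += multiple - emitted
        stored = min(available - emitted, storage)  # excess kept for next cycle
    return lack / (cycles * multiple)
```

With parameters of the order quoted in the text (100 modules, $N \approx 0.05$, output multiple 4, a few photons of storage), the empty-slot rate stays at the few-percent level, and it rises markedly when the storage is disabled — the stabilizing effect described above.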
\begin{figure} \caption{Error rates of a 100 SPDC structure with a frequency multiple of 4, as a function of the pump power (or N).} \label{100a4power} \end{figure} The results of the pump power variation for a large structure such as 100 SPDC modules show very low error rates and good possibilities of error trade-off. It gives an excellent device with a high output gain.\\ The optimized point, with equal lack and multi-photon error rates, is obtained at N $\approx$ 0.05. The source quality is then around 95~\%, with 2.4~\% multi-photons and 2.4~\% lacks. No equivalent theoretical prior-art SPDC source could achieve this quality at this rate. \begin{figure} \caption{Error rates of a 100 SPDC structure as a function of the frequency multiple (optimized power at N = 0.049 for an output gain rate of 4).} \label{100a2_7} \end{figure} We note that it is very easy to reduce the lack by changing the output multiple rate of the device. In the case of a variable pump frequency used as feedback, the lack could be drastically reduced while keeping an equivalent multi-photon error rate.\\ The multi-photon rate depends linearly on the pump power. We note that in the figures, particularly in Fig.~\ref{100a2_7}, the pump power is fixed but the multi-photon error rate is not constant in the region of large lack. This is due to the absolute and homogeneous units used in this report, contrary to the relative \emph{multi-photon within emitted photons} unit used in most papers.\\ \begin{figure} \caption{Error rates of 9 and 11 SPDC structures at multiples of 3 (right) and 4 (left), as a function of the pump power.} \label{9_11a3_4} \end{figure} Fig.~\ref{9_11a3_4} shows several working configurations of very small structures (9 and 11 SPDC modules) with optimized points for output gains of 3 and 4. 
For these small structures, I preferred to keep working at a high output gain rather than lowering the error rates, as this is a strong advantage of this device over the alternative solutions.\\ \subsection{Structure size} In Fig.~\ref{10_200a4power1242}, we analyze the variation of the error rates as a function of the number of SPDC modules in the structure. \begin{figure} \caption{Lack error rate and relative multi-photon error rate of the SPDC structure depending on size, for three pump powers (green: N=0.025, red: N=0.05, blue: N=0.10), with a frequency multiple of 4.} \label{10_200a4power1242} \end{figure} \addcontentsline{toc}{chapter}{Bibliography} \end{document}
A Simple Deterministic Algorithm for Systems of Quadratic Polynomials over $\mathbb{F}_2$

DOI: 10.1137/1.9781611977066.22
In: Symposium on Simplicity in Algorithms (SOSA), pp. 285-296
Charles Bouillaguet, Claire Delaplace, Monika Trimoska

Excerpts from citing work:

... 2. By utilizing a simple version of the crossbred algorithm [JV17] proposed in [BDT22] to solve an overdefined quadratic equation system, our attacks on 3-round LowMC require negligible memory and can achieve better time-memory tradeoffs than Dinur's algorithm [Din21]. ...

... In our attacks, we will adopt a simple version of the crossbred algorithm [JV17] to solve an overdefined system of quadratic equations, which is described in [BDT22]. This algorithm fits very well with our attacks on LowMC for its simplicity to bound the time complexity and to implement in practice. ...

... The first attack is a new and simple guess-and-determine (GnD) attack on 3-round LowMC by using Banik et al.'s strategy [BBDV20] to linearize the 3-bit S-box, where we solve a system of quadratic equations with the standard linearization technique. The second attack is a much simpler yet more efficient GnD attack on 3-round LowMC by using a naive guess strategy to linearize the 3-bit S-box, where we solve quadratic equations with the simplified version of the crossbred algorithm [BDT22]. The third attack is for full-round (4-round) LowMC, where we still adopt the naive guess strategy but use Dinur's algorithm [Din21] to solve equations of degree 4. ...
New Low-Memory Algebraic Attacks on LowMC in the Picnic Setting
Fukang Liu, Willi Meier, Santanu Sarkar, Takanori Isobe

The security of the post-quantum signature scheme Picnic is highly related to the difficulty of recovering the secret key of LowMC from a single plaintext-ciphertext pair. Since Picnic is one of the alternate third-round candidates in the NIST post-quantum cryptography standardization process, it has become urgent and important to evaluate the security of LowMC in the Picnic setting. The best attacks on LowMC with full S-box layers used in Picnic3 were achieved with Dinur's algorithm. For LowMC with partial nonlinear layers, e.g. 10 S-boxes per round as adopted in Picnic2, the best attacks on LowMC were published by Banik et al. with the meet-in-the-middle (MITM) method. In this paper, we improve the attacks on LowMC in a model where memory consumption is costly. First, a new attack on 3-round LowMC with full S-box layers with negligible memory complexity is found, which can outperform Bouillaguet et al.'s fast exhaustive search attack and can achieve better time-memory tradeoffs than Dinur's algorithm. Second, we extend the 3-round attack to 4 rounds to significantly reduce the memory complexity of Dinur's algorithm at the sacrifice of a small factor of time complexity. For LowMC instances with 1 S-box per round, our attacks are shown to be much faster than the MITM attacks. For LowMC instances with 10 S-boxes per round, we can reduce the memory complexity from 32GB ($2^{38}$ bits) to only 256KB ($2^{21}$ bits) using our new algebraic attacks rather than the MITM attacks, while the time complexity of our attacks is about $2^{3.2} \sim 2^{5}$ times higher than that of the MITM attacks. A notable feature of our new attacks (apart from the 4-round attack) is their simplicity. Specifically, only some basic linear algebra is required to understand them and they can be easily implemented.
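The common underlying problem in the works collected here is solving systems of quadratic equations over $\mathbb{F}_2$. As a point of reference, the naive exhaustive-search baseline that the faster algorithms are measured against can be written in a few lines — this toy code is our own illustration, not any cited paper's implementation:

```python
from itertools import product

def solve_quadratic_gf2(n, equations):
    """Exhaustive search for solutions of quadratic equations over GF(2).

    Each equation is a function taking a 0/1 tuple x of length n and
    returning an integer; a solution makes every equation 0 modulo 2."""
    return [x for x in product((0, 1), repeat=n)
            if all(eq(x) % 2 == 0 for eq in equations)]

# Toy system in n = 3 variables:
#   x0*x1 + x2 = 0  and  x0 + x1 + 1 = 0  (i.e. x0 != x1)
eqs = [
    lambda x: x[0] * x[1] + x[2],
    lambda x: x[0] + x[1] + 1,
]
solutions = solve_quadratic_gf2(3, eqs)
```

This brute force costs $2^n$ evaluations of every equation; the cited algorithms (Gray-code enumeration on GPUs, XL/relinearization, crossbred, parity counting) all aim to beat this $O^*(2^n)$ baseline.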
Fast Exhaustive Search for Polynomial Systems in $\mathbb{F}_2$
Hsieh-Chung Chen, Chen-Mou Cheng, Bo-Yin Yang

We analyze how fast we can solve general systems of multivariate equations of various low degrees over $\mathbb{F}_2$; this is a well-known hard problem which is important both in itself and as part of many types of algebraic cryptanalysis. Compared to the standard exhaustive search technique, our improved approach is more efficient both asymptotically and practically. We implemented several optimized versions of our techniques on CPUs and GPUs. Our technique runs more than 10 times faster on modern graphics cards than on the most powerful CPU available. Today, we can solve 48+ quadratic equations in 48 binary variables on a 500-dollar NVIDIA GTX 295 graphics card in 21 minutes. With this level of performance, solving systems of equations supposed to ensure a security level of 64 bits turns out to be feasible in practice with a modest budget. This is a clear demonstration of the computational power of GPUs in solving many types of combinatorial and cryptanalytic problems.

Hybrid approach for solving multivariate systems over finite fields
Luk Bettale, Jean-Charles Faugère, Ludovic Perret

In this paper, we present an improved approach to solve multivariate systems over finite fields. Our approach is a tradeoff between exhaustive search and Gröbner bases techniques. We give theoretical evidence that our method brings a significant improvement in a very large context and we clearly define its limitations. The efficiency depends on the choice of the tradeoff. Our analysis gives an explicit way to choose the best tradeoff as well as an approximation. From our analysis, we present a new general algorithm to solve multivariate polynomial systems. Our theoretical results are experimentally supported by successful cryptanalysis of several multivariate schemes (TRMS, UOV, ...).
As a proof of concept, we were able to break the proposed parameters assumed to be secure until now. Parameters that resist our method are also explicitly given. Our work permits refining the parameters to be chosen for multivariate schemes.

Lean Programs, Branch Mispredictions, and Sorting
Amr Elmasry, Jyrki Katajainen

According to a folk theorem, every program can be transformed into a program that produces the same output and only has one loop. We generalize this to a form where the resulting program has one loop and no other branches than the one associated with the loop control. For this branch, branch prediction is easy even for a static branch predictor. If the original program is of length $\kappa$, measured in the number of assembly-language instructions, and runs in $t(n)$ time for an input of size $n$, the transformed program is of length $O(\kappa)$ and runs in $O(\kappa t(n))$ time. Normally sorting programs are short, but still $\kappa$ may be too large for practical purposes. Therefore, we provide more efficient hand-tailored heapsort and mergesort programs. Our programs retain most features of the original programs—e.g. they perform the same number of element comparisons—and they induce $O(1)$ branch mispredictions. On computers where branch mispredictions were expensive, some of our programs were, for integer data and small instances, faster than the counterparts in the GNU implementation of the C++ standard library.

Efficient Algorithms for Solving Overdefined Systems of Multivariate Polynomial Equations
Nicolas Courtois, Alexander Klimov, Jacques Patarin, Adi Shamir

The security of many recently proposed cryptosystems is based on the difficulty of solving large systems of quadratic multivariate polynomial equations. This problem is NP-hard over any field. When the number of equations $m$ is the same as the number of unknowns $n$, the best known algorithms are exhaustive search for small fields, and a Gröbner base algorithm for large fields.
Gröbner basis algorithms have large exponential complexity and cannot solve in practice systems with n ≥ 15. Kipnis and Shamir [9] have recently introduced a new algorithm called "relinearization". The exact complexity of this algorithm is not known, but for sufficiently overdefined systems it was expected to run in polynomial time. In this paper we analyze the theoretical and practical aspects of relinearization. We ran a large number of experiments for various values of n and m, and analysed which systems of equations were actually solvable. We show that many of the equations generated by relinearization are linearly dependent, and thus relinearization is less efficient than one could expect. We then develop an improved algorithm called XL which is both simpler and more powerful than relinearization. For all 0 < ε ≤ 1/2 and m ≥ εn², XL and relinearization are expected to run in polynomial time of approximately $n^{\mathcal{O}(1/\sqrt{\varepsilon})}$. Moreover, we provide strong evidence that relinearization and XL can solve randomly generated systems of polynomial equations in subexponential time when m exceeds n by a number that increases slowly with n.

Solving Underdefined Systems of Multivariate Quadratic Equations
Louis Goubin, Jean-Daniel Tacier
The security of several recent digital signature schemes is based on the difficulty of solving large systems of quadratic multivariate polynomial equations over a finite field F. This problem, sometimes called MQ, is known to be NP-hard. When the number m of equations is equal to the number n of variables, and if n < 15, Gröbner basis algorithms have been applied to solve MQ. In the overdefined case n ≪ m, the techniques of relinearization and XL, due to A. Shamir et al., have been shown to be successful for solving MQ. In signature schemes, we usually have n ≫ m; this is the case, for example, for the signature schemes Flash and Sflash submitted to Nessie, and for the UOV scheme published at Eurocrypt 1999.
Little is known about the security of such underdefined systems. In this paper, three new and different methods are presented for solving underdefined multivariate systems of quadratic equations. As already shown at Eurocrypt 1999, the problem MQ becomes polynomial when n ≥ m(m+1) for fields F of characteristic 2. We show that for any field, for about $n \geq 2^{m/7}(m + 1)$ (exponential, but quite small in practice), the problem becomes polynomial in n. When n → m, the complexity of all our 3 algorithms tends to $q^m$. However, for practical instances of cryptosystems with n ≈ O(m), we show how to achieve complexities significantly lower than exhaustive search. For example, we are able to break Unbalanced Oil and Vinegar signature schemes for some "bad" choices of the parameters (but not for the parameters proposed in [4]).

Solving Systems of Polynomial Equations over GF(2) by a Parity-Counting Self-Reduction
Andreas Björklund, Petteri Kaski, Ryan Williams
We consider the problem of finding solutions to systems of polynomial equations over a finite field. Lokshtanov et al. [SODA '17] recently obtained the first worst-case algorithms that beat exhaustive search for this problem. In particular, for degree-d equations modulo two in n variables, they gave an $O^*(2^{(1-1/(5d))n})$ time algorithm, and for the special case d = 2 they gave an $O^*(2^{0.876n})$ time algorithm. We modify their approach in a way that improves these running times to $O^*(2^{(1-1/(2.7d))n})$ and $O^*(2^{0.804n})$, respectively. In particular, our latter bound, which holds for all systems of quadratic equations modulo 2, comes close to the $O^*(2^{0.792n})$ expected time bound of an algorithm empirically found to hold for random equation systems in Bardet et al. [J. Complexity, 2013]. Our improvement involves three observations: 1. The Valiant-Vazirani lemma can be used to reduce the solution-finding problem to that of counting solutions modulo 2.
2. The monomials in the probabilistic polynomials used in this solution-counting modulo 2 have a special form that we exploit to obtain better bounds on their number than in Lokshtanov et al. [SODA '17]. 3. The problem of solution-counting modulo 2 can be "embedded" in a smaller instance of the original problem, which enables us to apply the algorithm as a subroutine to itself.

Cryptanalytic Applications of the Polynomial Method for Solving Multivariate Equation Systems over GF(2)
Itai Dinur
At SODA 2017, Lokshtanov et al. presented the first worst-case algorithms with exponential speedup over exhaustive search for solving polynomial equation systems of degree d in n variables over finite fields. These algorithms were based on the polynomial method in circuit complexity, which is a technique for proving circuit lower bounds that has recently been applied in algorithm design. Subsequent works further improved the asymptotic complexity of polynomial method-based algorithms for solving equations over the field $\mathbb{F}_2$. However, the asymptotic complexity formulas of these algorithms hide significant low-order terms, and hence they outperform exhaustive search only for very large values of n. In this paper, we devise a concretely efficient polynomial method-based algorithm for solving multivariate equation systems over $\mathbb{F}_2$. We analyze our algorithm's performance for solving random equation systems, and bound its complexity by about $n^2 \cdot 2^{0.815n}$ bit operations for d = 2 and $n^2 \cdot 2^{(1-1/(2.7d))n}$ for any d ≥ 2. We apply our algorithm in cryptanalysis of recently proposed instances of the Picnic signature scheme (an alternate third-round candidate in NIST's post-quantum standardization project) that are based on the security of the LowMC block cipher. Consequently, we show that 2 out of 3 new instances do not achieve their claimed security level. As a secondary application, we also improve the best-known preimage attacks on several round-reduced variants of the Keccak hash function.
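To pin down the quantity these parity-counting algorithms compute, here is a naive reference version (an illustrative sketch in plain Python; the cited algorithms obtain the same bit exponentially faster, and the tuple encoding of polynomials is an assumption of this sketch, not the papers' representation):

```python
from itertools import product

def eval_poly(poly, x):
    """Evaluate a quadratic polynomial over GF(2).

    poly = (constant bit, linear term indices, quadratic (i, j) index pairs).
    """
    c, lin, quad = poly
    v = c
    for i in lin:
        v ^= x[i]
    for i, j in quad:
        v ^= x[i] & x[j]
    return v

def parity_of_solutions(polys, n):
    """Parity (mod 2) of the number of common zeros in GF(2)^n, by brute force."""
    parity = 0
    for x in product((0, 1), repeat=n):
        if all(eval_poly(p, x) == 0 for p in polys):
            parity ^= 1
    return parity

# x0*x1 = 0 has three solutions over GF(2)^2, so the parity is 1.
print(parity_of_solutions([(0, [], [(0, 1)])], 2))  # -> 1
```

The Valiant-Vazirani step mentioned above is what lets a solver recover an actual solution from calls to such a parity oracle.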
Our algorithm combines various techniques used in previous polynomial method-based algorithms with new optimizations, some of which exploit randomness assumptions about the system of equations. In its cryptanalytic application to Picnic, we demonstrate how to further optimize the algorithm for solving structured equation systems that are constructed from specific cryptosystems.

Improved Algorithms for Solving Polynomial Systems over GF(2) by Multiple Parity-Counting

Correction to: Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra
John Little, Donal B. O'Shea
After initial publication of the book, various errors were identified that needed correction. The corrections have been made in the current version, along with fixes for all known typographical errors.

Implementing Joux-Vitse's Crossbred Algorithm for Solving $\mathcal{MQ}$ Systems over $\mathbb{F}_2$ on GPUs
Ruben Niederhagen, Kai-Chun Ning

Beating Brute Force for Systems of Polynomial Equations over Finite Fields
Daniel Lokshtanov, Ramamohan Paturi, Suguru Tamaki, Huacheng Yu
We consider the problem of solving systems of multivariate polynomial equations of degree k over a finite field. For every integer k ≥ 2 and finite field $\mathbb{F}_q$ where $q = p^d$ for a prime p, we give, to the best of our knowledge, the first algorithms that achieve an exponential speedup over the brute-force $O(q^n)$ time algorithm in the worst case. We present two algorithms: a randomized algorithm with running time $q^{n+o(n)} \cdot q^{-n/O(k)}$ if $q < 2^{4ekd}$, where e = 2.718… is Napier's constant, and a deterministic algorithm for counting solutions with running time $q^{n+o(n)} \cdot q^{-n/O(kq^{6/7}d)}$. For the important special case of quadratic equations in $\mathbb{F}_2$, our randomized algorithm has running time $O(2^{0.8765n})$.
For systems over $\mathbb{F}_2$ we also consider the case where the input polynomials do not have bounded degree, but instead can be efficiently represented as a ΣΠΣ circuit, i.e., a sum of products of sums of variables. For this case we present a deterministic algorithm running in time $2^{n-\delta n}$ for $\delta = 1/O(\log(s/n))$ for instances with s product gates in total and n variables. Our algorithms adapt several techniques recently developed via the polynomial method from circuit complexity. The algorithm for systems of ΣΠΣ polynomials also introduces a new degree-reduction method that takes an instance of the problem and outputs a subexponential-sized set of instances, in such a way that feasibility is preserved and every polynomial among the output instances has degree $O(\log(s/n))$.

From 5-Pass $\mathcal{MQ}$-Based Identification to $\mathcal{MQ}$-Based Signatures
Lecture Notes in Computer Science
Ming-Shing Chen, Andreas Hülsing, Joost Rijneveld, Peter Schwabe
This paper presents MQDSS, the first signature scheme with a security reduction based on the problem of solving a multivariate system of quadratic equations ($\mathcal{MQ}$ problem). In order to construct this scheme we give a new security reduction for the Fiat-Shamir transform from a large class of 5-pass identification schemes and show that a previous attempt from the literature to obtain such a proof does not achieve the desired goal. We give concrete parameters for MQDSS and provide a detailed security analysis showing that the resulting instantiation MQDSS-31-64 achieves 128 bits of post-quantum security. Finally, we describe an optimized implementation of MQDSS-31-64 for recent Intel processors with full protection against timing attacks and report benchmarks of this implementation.
On the Complexity of the F5 Gröbner Basis Algorithm
Magali Bardet, Bruno Salvy
We study the complexity of Gröbner bases computation, in particular in the generic situation where the variables are in simultaneous Noether position with respect to the system. We give a bound on the number of polynomials of degree d in a Gröbner basis computed by Faugère's F5 algorithm (Fau02) in this generic case for the grevlex ordering (which is also a bound on the number of polynomials for a reduced Gröbner basis), and use it to bound the exponent of the complexity of the F5 algorithm.

A New Efficient Algorithm for Computing Gröbner Bases (F4)
Journal of Pure and Applied Algebra
This paper introduces a new efficient algorithm for computing Gröbner bases. To avoid as much as possible intermediate computation, the algorithm computes successive truncated Gröbner bases, and it replaces the classical polynomial reduction found in the Buchberger algorithm by the simultaneous reduction of several polynomials. This powerful reduction mechanism is achieved by means of a symbolic precomputation and by extensive use of sparse linear algebra methods. Current techniques in linear algebra used in computer algebra are reviewed, together with other methods coming from the numerical field. Some previously intractable problems (Cyclic 9) are presented, as well as an empirical comparison of a first implementation of this algorithm with other well-known programs. This comparison pays careful attention to methodology issues. All the benchmarks and CPU times used in this paper are frequently updated and available on a web page. Even though the new algorithm does not improve the worst-case complexity, it is several times faster than previous implementations, both for integer and modular computations.

A New Efficient Algorithm for Computing Gröbner Bases Without Reduction to Zero (F5)
This paper introduces a new efficient algorithm for computing Gröbner bases.
We replace the Buchberger criteria by an optimal criterion. We give a proof that the resulting algorithm (called F5) generates no useless critical pairs if the input is a regular sequence. This is a new result by itself, but a first implementation of the algorithm F5 shows that it is also very efficient in practice: for instance, previously intractable problems can be solved (Cyclic 10). In practice, for most examples there is no reduction to zero. We illustrate this algorithm by one detailed example.

Unbalanced Oil and Vinegar Signature Schemes
Aviad Kipnis
In [16], J. Patarin designed a new scheme, called "Oil and Vinegar", for computing asymmetric signatures. It is very simple, can be computed very fast (both in secret and public key) and requires very little RAM in smartcard implementations. The idea consists in hiding quadratic equations in n unknowns called "oil" and v = n unknowns called "vinegar" over a finite field K, with linear secret functions. This original scheme was broken in [10] by A. Kipnis and A. Shamir. In this paper, we study some very simple variations of the original scheme where v > n (instead of v = n). These schemes are called "Unbalanced Oil and Vinegar" (UOV), since we have more "vinegar" unknowns than "oil" unknowns. We show that, when v ≈ n, the attack of [10] can be extended, but when v ≥ 2n, for example, the security of the scheme is still an open problem. Moreover, when $v \simeq \frac{n^2}{2}$, the security of the scheme is exactly equivalent (if we accept a very natural but not proved property) to the problem of solving a random set of n quadratic equations in $\frac{n^2}{2}$ unknowns (with no trapdoor). However, we show that (in characteristic 2) when $v \geq n^2$, finding a solution is generally easy. Then we will see that it is very easy to combine the Oil and Vinegar idea and the HFE schemes of [14].
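The structural trick behind the Oil and Vinegar idea described above can be seen in a few lines: because the central polynomials contain no oil-times-oil monomials, substituting values for the vinegar variables leaves equations that are affine in the oil variables. The sketch below (plain Python over GF(2), with an illustrative polynomial encoding that is this sketch's assumption, not the paper's notation) performs just that substitution step; it is not a secure or complete implementation of the scheme.

```python
def specialize(poly, vinegar, n_oil):
    """Substitute vinegar values into a central Oil-and-Vinegar polynomial over GF(2).

    poly = (c, lin_v, lin_o, quad_vv, quad_vo):
      constant bit, linear vinegar indices, linear oil indices,
      vinegar*vinegar index pairs, and (vinegar, oil) index pairs.
    There are deliberately no oil*oil pairs: that is the scheme's key property.
    Returns (coeffs, const) describing the affine equation
      sum(coeffs[j] * o_j) + const = 0  over GF(2).
    """
    c, lin_v, lin_o, quad_vv, quad_vo = poly
    const = c
    for i in lin_v:
        const ^= vinegar[i]
    for i, j in quad_vv:
        const ^= vinegar[i] & vinegar[j]
    coeffs = [0] * n_oil
    for j in lin_o:
        coeffs[j] ^= 1
    for i, j in quad_vo:
        coeffs[j] ^= vinegar[i]   # v_i * o_j becomes a linear term in o_j
    return coeffs, const

# One central polynomial: v0*v1 + v1*o0 + o1 + 1 = 0
poly = (1, [], [1], [(0, 1)], [(1, 0)])
print(specialize(poly, vinegar=[1, 1], n_oil=2))  # -> ([1, 1], 0), i.e. o0 + o1 = 0
```

The signer then solves the resulting affine system for the oil variables by ordinary linear algebra, retrying with fresh random vinegar values if the system happens to be singular.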
The resulting scheme, called HFEV, looks at present also very interesting, both from a practical and a theoretical point of view. The length of a UOV signature can be as short as 192 bits, and for HFEV it can be as short as 80 bits.

Gröbner Bases, Gaussian Elimination and Resolution of Systems of Algebraic Equations
D. Lazard

Algebraic Cryptanalysis of Hidden Field Equation (HFE) Cryptosystems Using Gröbner Bases
Antoine Joux
In this paper, we review and explain the existing algebraic cryptanalysis of multivariate cryptosystems from the hidden field equation (HFE) family. These cryptanalyses break cryptosystems in the HFE family by solving multivariate systems of equations. We present a new and efficient attack on this cryptosystem based on fast algorithms for computing Gröbner bases. In particular, it was possible to break the first HFE challenge (80 bits) in only two days of CPU time by using the new algorithm F5 implemented in C. From a theoretical point of view, we study the algebraic properties of the equations produced by instances of the HFE cryptosystems and show why they yield systems of equations easier to solve than random systems of quadratic equations of the same sizes. Moreover, we are able to bound the maximal degree occurring in the Gröbner basis computation. As a consequence, we gain a deeper understanding of the algebraic cryptanalysis against these cryptosystems. We use this understanding to devise a specific algorithm based on sparse linear algebra. In general, we conclude that the cryptanalysis of HFE can be performed in polynomial time. We also revisit the security estimates for existing schemes in the HFE family.

Theoretical Analysis of XL over Small Fields
Jiun-Ming Chen
XL was first introduced to solve determined or overdetermined systems of equations over a finite field as an "algebraic attack" against multivariate cryptosystems.
There has been a steady stream of announcements of cryptanalysis of primitives by such attacks, including stream ciphers (e.g. Toyocrypt), PKCs, and, more controversially, block ciphers (AES/Rijndael and Serpent). Prior discussions of XL are usually heavy in simulations, which are of course valuable, but we would like more attention to theory, because theory and simulations must validate each other, and there are some nuances not easily discerned from simulations. More effort was made in this direction recently, but much of it was restricted to a large base field of size q, which is usually equal to $2^k$. By conducting an analysis of XL variants in general, we try to derive rigorous "termination conditions", minimal degree requirements for reliable, successful operation of XL and its relatives, and hence better security estimates. Our work is applicable to small q, in particular the significant q = 2 case. Armed with this analysis, we reexamine previously announced results. We conclude that XL and variants represent a theoretical advance that is especially significant over small fields (in particular over GF(2)). However, its applicability and efficacy are occasionally slightly overestimated. We discuss possible future research directions. Much remains to be done.

The MAGMA Algebra System I: The User Language
Wieb Bosma, John J. Cannon, Catherine Playoust
In the first of two papers on Magma, a new system for computational algebra, we present the Magma language, outline the design principles and theoretical background, and indicate its scope and use. Particular attention is given to the constructors for structures, maps, and sets. Magma is a new software system for computational algebra, the design of which is based on the twin concepts of algebraic structure and morphism.
The design is intended to provide a mathematically rigorous environment for computing with algebraic structures (groups, rings, fields, modules and algebras), geometric structures (varieties, special curves) and combinatorial structures (graphs, designs and codes). The philosophy underlying the design of Magma is based on concepts from Universal Algebra and Category Theory. Key ideas from these two areas provide the basis for a general scheme for the specification and representation of mathematical structures. The user language includes three important groups of constructors that realize the philosophy in syntactic terms: structure constructors, map constructors and set constructors. The utility of Magma as a mathematical tool derives from the combination of its language with an extensive kernel of highly efficient C implementations of the fundamental algorithms for most branches of computational algebra. In this paper we outline the philosophy of the Magma design and show how it may be used to develop an algebraic programming paradigm for language design. In a second paper we will show how our design philosophy allows us to realize natural computational "environments" for different branches of algebra. An early discussion of the design of Magma may be found in Butler and Cannon (1989, 1990). A terse overview of the language, together with a discussion of some of the implementation issues, may be found in Bosma et al. (1994).

On the Complexity of Solving Quadratic Boolean Systems
Pierre-Jean Spaenlehauer
A fundamental problem in computer science is to find all the common zeroes of $m$ quadratic polynomials in $n$ unknowns over $\mathbb{F}_2$. The cryptanalysis of several modern ciphers reduces to this problem. Up to now, the best complexity bound was reached by an exhaustive search in $4\log_2 n\,2^n$ operations. We give an algorithm that reduces the problem to a combination of exhaustive search and sparse linear algebra.
This algorithm has several variants depending on the method used for the linear algebra step. Under precise algebraic assumptions on the input system, we show that the deterministic variant of our algorithm has complexity bounded by $O(2^{0.841n})$ when $m = n$, while a probabilistic variant of the Las Vegas type has expected complexity $O(2^{0.792n})$. Experiments on random systems show that the algebraic assumptions are satisfied with probability very close to 1. We also give a rough estimate for the actual threshold between our method and exhaustive search, which is as low as 200, and thus very relevant for cryptographic applications.

Hidden Fields Equations (HFE) and Isomorphisms of Polynomials (IP): Two New Families of Asymmetric Algorithms
In [6] T. Matsumoto and H. Imai described a new asymmetric algorithm based on multivariate polynomials of degree two over a finite field, which was subsequently broken in [9]. Here we present two new families of asymmetric algorithms that so far have resisted all attacks, if properly used: Hidden Field Equations (HFE) and Isomorphism of Polynomials (IP). These algorithms can be seen as two candidate ways to repair the Matsumoto-Imai algorithm. HFE can be used to do signatures, encryption or authentication in an asymmetric way, with very short signatures and short encryptions of short messages. IP can be used for signatures and for zero-knowledge authentication. An extended version of this paper can be obtained from the author. Another way to repair the Matsumoto-Imai algorithm will be presented in [10].

GeMSS: A Great Multivariate Short Signature
Antoine Casanova, Gilles Macario-Rat, Jocelyn Ryckeghem
Antoine Casanova, Jean-Charles Faugère, Gilles Macario-Rat, Jacques Patarin, Ludovic Perret, and Jocelyn Ryckeghem. GeMSS: A Great Multivariate Short Signature.
Research report, UPMC - Paris 6 Sorbonne Universités; INRIA Paris Research Centre, MAMBA Team, F-75012, Paris, France; LIP6 - Laboratoire d'Informatique de Paris 6, December 2017. URL: https://hal.inria.fr/hal-01662158.

A Crossbred Algorithm for Solving Boolean Polynomial Systems
Vanessa Vitse
Antoine Joux and Vanessa Vitse. A Crossbred Algorithm for Solving Boolean Polynomial Systems. In NuTMiC, volume 10737 of Lecture Notes in Computer Science, pages 3-21. Springer, 2017. https://eprint.iacr.org/2017/372.pdf.
Chris Stevens (mathematician)

Terrie Christine Stevens, also known as T. Christine Stevens, is an American mathematician whose research concerns topological groups, the history of mathematics, and mathematics education.[1] She is also known as the co-founder of Project NExT, a mentorship program for recent doctorates in mathematics, which she directed from 1994 until 2009.[2][3][4]

Education and career
Stevens graduated from Smith College in 1970,[5] and completed her doctorate in 1978 at Harvard University under the supervision of Andrew M. Gleason. Her dissertation was Weakened Topologies for Lie Groups.[6][7] She held teaching positions at the University of Massachusetts Lowell, at Mount Holyoke College and at Arkansas State University before joining Saint Louis University, where for 25 years she was a professor of mathematics and computer science.[8][6] She was also a Congressional Science Fellow assisting congressman Theodore S. Weiss in 1984–1985,[1][5] and was a program officer at the National Science Foundation in 1987–1989.[1] After retiring from SLU, she became Associate Executive Director for Meetings and Professional Services of the American Mathematical Society.[9][6] She also served as an AMS Council member at large from 2011 to 2013.[10]

Recognition
In 2004 Stevens won the Gung and Hu Award for Distinguished Service to Mathematics of the Mathematical Association of America for her work on Project NExT.[6][8] In 2010 Stevens was awarded the Smith College Medal by her alma mater.[4][5] She has been a fellow of the American Association for the Advancement of Science since 2005,[11] and in 2012, she became one of the inaugural fellows of the American Mathematical Society.[12] She was the 2015 winner of the Louise Hay Award of the Association for Women in Mathematics.[9]

References
1. Speaker bio, Society for Industrial and Applied Mathematics, retrieved 2015-01-25.
2. Project NExT, Mathematical Association of America, retrieved 2015-01-25.
3.
Higgins, Aparna (November 2009), "AMS Sponsors NExT Fellows" (PDF), Inside the AMS, Notices of the AMS, 56 (10): 1310.
4. "MAA Member Chris Stevens Awarded Smith College Medal", Math in the News, MAA News, Mathematical Association of America, September 17, 2009.
5. Smith College Rally Day: Honors, Hats and a Secret Revealed, Smith College, September 10, 2009, retrieved 2015-01-25.
6. Jackson, Allyn (January 2015), "Chris Stevens Joins AMS Executive Staff" (PDF), Notices of the AMS, 62 (1): 56–57, doi:10.1090/noti1201.
7. Chris Stevens at the Mathematics Genealogy Project
8. Berry, Clayton (January 28, 2004), Professor Earns Highest Honor from Leading Mathematics Organization, Saint Louis University, archived from the original on March 3, 2016, retrieved 2015-01-25.
9. 2015 AWM Louise Hay Award, Association for Women in Mathematics, retrieved 2015-01-25.
10. "AMS Committees". American Mathematical Society. Retrieved 2023-03-29.
11. Elected Fellows, AAAS, retrieved 2017-10-30.
12. List of Fellows of the American Mathematical Society, retrieved 2015-01-25.

External links
• Home page
Testing and Targets

Test Target Overview
This is Section 12.2 of the Imaging Resource Guide. Test targets help determine the performance of an imaging system. This includes troubleshooting a system; benchmarking, certifying, or evaluating measurements; or establishing a foundation to ensure multiple systems work well with one another. Because image quality can be characterized by different metrics, particularly resolution, contrast, modulation transfer function (MTF), depth of field (DOF), and distortion, different systems may require different targets, and some systems may require more than one. Note that the results of using a test target are subjective if they are judged only visually, since visual assessment depends on who is looking at the target. Someone with 20/20 vision is typically capable of discerning higher resolution or more detail than someone with 20/25 or 20/30 vision. Additionally, individuals who regularly look at these targets may have trained their brains to interpolate details not actually present, because they view the target's repetitive patterns so often. Visual inspection can help compare two different systems, but it does not always validate results; it is important to use software to truly validate measurements.
Targets for Resolution Measurements

Table 1: Applications and the pros and cons of different resolution targets.

USAF 1951
Applications: Testing resolution in vision systems, optical test equipment, microscopes, high-magnification video lenses, fluorescence and confocal microscopy, photolithography, and nanotechnology
Pros: Simultaneously tests the vertical and horizontal resolutions at discrete spatial frequencies
Cons: The target must be repositioned to fully evaluate a system's performance; it can be difficult to determine when the entire FOV is in best focus

Ronchi Ruling
Applications: Testing resolution and contrast; diffraction testing
Pros: Can simultaneously determine the system's best focus across the entire FOV
Cons: A different target is required for each frequency that needs to be evaluated; non-symmetrical resolution reductions cannot be analyzed

Star
Applications: Comparing highly resolved or magnified imaging systems; system alignment; assembly assistance
Pros: Potentially the most powerful target for testing resolution and contrast; evaluates a continuous change in resolution across multiple orientations without repositioning the target; eases the comparison of different imaging systems
Cons: Difficult to determine the exact resolution that the test system achieves at each element; requires advanced image analysis software

The USAF 1951 Target
The USAF 1951 target is one of the most common test targets and comprises sets of horizontal and vertical lines, called elements, of varying sizes (Figure 1).

Figure 1: Example of a USAF 1951 target.

The horizontal and vertical elements are used by a system to simultaneously test the vertical and horizontal resolutions at discrete spatial frequencies (line pairs per millimeter, or lp/mm) in the object plane. Each element has a unique set of widths and spacings and is identified with a number from 1 to 6. Six sequentially numbered elements are considered a group, and each group has an identifying number that can be positive, negative, or zero.
This group number ranges from -2 to 7. The group number and element number are then used together to determine spatial frequency. The resolution is based on one line pair (lp), which is equivalent to one black bar and one white space (Figure 2).

Figure 2: USAF 1951 target specifications.

Vertical bars are used to calculate horizontal resolution, and horizontal bars are used to calculate vertical resolution. Qualitatively, the resolution of an imaging system is defined as the group and element combination located directly before the black and white bars begin to blur together. Quantitatively, resolution (in lp/mm) can be calculated with Equation 1, where $\xi$ is the resolution in lp/mm, $G$ is the group number, and $E$ is the element number.

(1)$$ \xi \left[ \small{\tfrac{\text{lp}}{\text{mm}}} \right] = 2^{\left[ G + \left( \frac{E - 1}{6} \right) \right]} $$

USAF 1951 targets are designed so that higher-resolution elements are closer to the center of the target, while lower-resolution elements are closer to the target edges. This arrangement is beneficial for testing zoom lenses because it avoids the need to reposition the target, by allowing the higher-resolution elements to remain in the FOV as the lens magnification decreases the FOV.

Limitations of USAF 1951 Targets
USAF 1951 targets do have drawbacks that stem from the higher-resolution elements being centered. Lenses produce different levels of resolution at the center and corners of the FOV, and resolution usually decreases away from the center of the field, making it important to check resolution and contrast at many field positions. This requires repositioning the target around the FOV and taking multiple images to evaluate a system's full performance, increasing testing time.
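The group/element-to-frequency conversion of Equation 1 is easy to evaluate numerically; the short helper below is a Python sketch (the function name is an illustrative choice) that converts a group/element pair into lp/mm:

```python
def usaf_resolution(group, element):
    """Spatial frequency in line pairs per mm for a USAF 1951 group/element (Equation 1)."""
    return 2 ** (group + (element - 1) / 6)

# Group 2, element 3: 2^(2 + 2/6) = 5.04 lp/mm.
print(round(usaf_resolution(2, 3), 2))   # -> 5.04
# The lowest-frequency element, group -2 element 1, gives 2^-2 = 0.25 lp/mm.
print(round(usaf_resolution(-2, 1), 2))  # -> 0.25
```

Taking the reciprocal gives the physical size of one line pair: at 5.04 lp/mm, each black bar plus white space spans about 198 µm.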
This can also cause issues depending on whether the system is focused at the center of the FOV or across the entire field; because resolution varies across the field, it can be difficult to determine when the full FOV is in best focus. Some lenses achieve very high resolution at the center of the FOV but very low resolution at the corners when the lens and camera system is focused on the center of the image. Defocusing the lens slightly balances the resolution across the full field, but at the detriment of the center resolution. The loss of center resolution is not necessarily bad, because the lens may still meet the demands of the application while achieving a balanced focus (Figure 3).

Figure 3a: USAF 1951 example: the center and corner of an image that has been repositioned so that the best focus is only in the middle of the target. Figure 3b: The center and corner of an image that features balanced focus across the entire field.

The potential for resolution variability across the FOV reinforces the need to analyze all field positions before drawing conclusions about a system's performance. The lens that performs best with the target at the center may not perform best overall. However, it is critical to perform all of the analysis at a single focus setting. It may seem intuitive to determine the system's best performance through the middle of the lens and then refocus to see the performance in the corner, but this will not show how the system will perform once deployed, because refocusing during operation is often not possible. There are variations of this target that allow for analysis across the entire FOV by repeating the patterns in numerous locations on the target (Figure 4).

Figure 4a: A USAF 1951 pattern wheel target. Figure 4b: A USAF 1951 variable contrast and field target.

Ronchi Rulings

Some issues associated with the USAF 1951 target are overcome using the target known as a Ronchi ruling.
This target has repeating lines at one spatial frequency in one orientation that cover the target's full surface (Figure 5). Because there is detail across the full target, the system's best focus across the full field can be evaluated. For applications that need only one frequency analyzed, this is an easy-to-use, straightforward tool. However, there are two drawbacks to using the Ronchi ruling. First, since a given target provides only one frequency, a new target is required for each frequency that needs to be evaluated. Second, non-symmetrical resolution reductions across the field that result from factors such as astigmatism cannot be analyzed, because the lines run in only one direction. To overcome this, the target must be rotated by 90° and a second image must be used to analyze the resolution. Additionally, while a lens's focus can be balanced for best focus, even in cases of astigmatism, it can be difficult to find this balance when alternating back and forth between two images.

Figure 5: A Ronchi ruling.

The Star Target

The multi-element star target is possibly the most powerful tool for testing the resolution and contrast of a system and combines many of the strengths of both the USAF and Ronchi targets. Each element of the star target consists of a circle formed of alternating positive and negative pie-shaped wedges that taper toward the center at a known angle (Figure 6). The elements' tapered wedges provide a continuous change in resolution that can be evaluated in both vertical and horizontal directions, along with a variety of other orientations, without repositioning the target. Having many stars across the FOV makes comparing different imaging solutions easier by providing the ability to determine the best focus across the FOV while simultaneously analyzing horizontal and vertical information at a variety of resolutions.
Figure 6 shows the complete star target; the highlighted areas located in the center, bottom middle, and corner of the target are compared between two different lenses in the example images of Figure 7. For these examples, a Sony ICX625 monochrome sensor with 3.45 µm pixels and a total resolution of 5 MP is used with a white light backlight illuminator.

Figure 6: A star target.

Figure 7: A star target is imaged with two lenses (A and B) with the same focal length, f/#, FOV, and sensor. The superiority of lens A becomes apparent along the edge and in the corner of the sensor.

Limitations of the Star Target

The star target also has its drawbacks. Because the wedges provide continuous changes in resolution, it is more difficult to determine the exact resolution that the test system is achieving at each element. While this can be done mathematically, it is not easily done visually. Additionally, the combination of the star elements' circular nature with the potential for nonsymmetrical blurring makes using simple software tools, such as line profilers, to extract information from the image more difficult. More advanced image analysis software is required to make full use of the star target.
Figure 9: This image (taken with an 8mm focal length lens) demonstrates keystoning. The lines clearly converge at the bottom of the image; the center location of a line at the bottom of the image does not match its center location at the top of the image.

Figure 10: It is apparent that the blur in this image (taken with a telecentric lens) is symmetric. If you take a horizontal line profile across the image and find the horizontal component of the center of each black line, the location will be equal in the blurred part of the image and the focused part of the image.

This difference in position can be converted to a degree of telecentricity with the following steps:

1. Find the distance between your top line profile and your bottom line profile: $ \Delta Y = \left( Y_1 - Y_2 \right) $
2. Find the horizontal displacement: $ \Delta X = \left| X_1 - X_2 \right| $
3. Calculate the telecentric angle: $ \theta = \tan ^{-1} \left( \tfrac{\Delta X}{\Delta Y} \right) $

Depth of Field Targets

DOF targets enable the visualization and quantification of how well focus is maintained as details move away from the plane on which the lens is focused. DOF targets consist of lines of known frequencies (resolutions) that are tipped at a known angle. As the lines proceed closer to and farther away from the lens, they become blurrier, until they can no longer be distinguished from one another. Contrast measurements can be made at different distances to determine when the desired level of resolution is lost; this determines the DOF limit for a lens at a given setting. Figures 11 and 12 demonstrate how to use a DOF target.

Figure 11: A DOF target should be at 45° from the lens. Figure 12: Sample configurations using a DOF target.

Example: Using a DOF Target with a 50mm DG Series Lens

Figure 13 shows a vertically mounted camera looking down at a DOF target that has been set at a 45° angle to the imaging path.
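The telecentricity steps above translate directly into code. The following is a sketch with made-up coordinates, where (X1, Y1) and (X2, Y2) are the measured center of the same black line in the top and bottom line profiles:

```python
import math

def telecentric_angle(x1, y1, x2, y2):
    """Telecentricity angle (degrees) from one line's center in two profiles."""
    dy = y1 - y2               # distance between the two line profiles
    dx = abs(x1 - x2)          # horizontal displacement of the line center
    return math.degrees(math.atan(dx / abs(dy)))

# A line center that shifts 5 px horizontally over a 500 px vertical span:
print(round(telecentric_angle(100, 0, 105, 500), 3))  # prints 0.573
```

A perfectly telecentric lens gives dx = 0 and hence an angle of 0°; larger keystoning produces a larger angle. The pixel coordinates need no unit conversion, since the angle depends only on the ratio of the two distances.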
Since the lens is focused at the middle of the target vertically, the image goes out of focus at the top and bottom of the target. The images show three different f/# settings and how adjustments to the iris change the attainable DOF. Note: Ronchi rulings can also be used to perform this type of testing, as they have fixed frequencies and can be tilted to create this effect; the greater the tip, the more of the DOF that can be measured.

Figure 13: Images of a DOF target taken with a 50mm lens at f/4, f/8, and f/11.

Go to www.edmundoptics.com/imaging-lab to view EO Imaging Lab Module 1.8 on Depth of Field.

Distortion Targets

Distortion targets are used to calibrate systems to correctly measure the optical misplacement of imaging information. These targets consist of dot, grid, or square patterns; are compatible with the calibration routines of most imaging software; and allow measurements to be remapped or adjusted across the FOV (Figure 14).

Figure 14: A dot grid distortion target.

Figure 15 shows the types of distortion that can be adjusted. Once the pattern is imaged, the known size and spacing of the pattern allow adjustments to be made (Figure 15).

Figure 15: Types of distortion. Figure 16: Positional difference can be measured and corrected with software.
Probability measure

In mathematics, a probability measure is a real-valued function defined on a set of events in a probability space that satisfies measure properties such as countable additivity.[1] The difference between a probability measure and the more general notion of measure (which includes concepts like area or volume) is that a probability measure must assign the value 1 to the entire probability space.

Intuitively, the additivity property says that the probability assigned to the union of two disjoint events by the measure should be the sum of the probabilities of the events; for example, the value assigned to "1 or 2" in a throw of a die should be the sum of the values assigned to "1" and "2". Probability measures have applications in diverse fields, from physics to finance and biology.

Definition

The requirements for a set function $\mu $ to be a probability measure on a probability space are that:

• $\mu $ must return results in the unit interval $[0,1],$ returning $0$ for the empty set and $1$ for the entire space.
• $\mu $ must satisfy the countable additivity property that for all countable collections $E_{1},E_{2},\ldots $ of pairwise disjoint sets: $\mu \left(\bigcup _{i\in \mathbb {N} }E_{i}\right)=\sum _{i\in \mathbb {N} }\mu (E_{i}).$

For example, given three elements 1, 2 and 3 with probabilities $1/4,1/4$ and $1/2,$ the value assigned to $\{1,3\}$ is $1/4+1/2=3/4.$

The conditional probability based on the intersection of events is defined as: $\mu (B\mid A)={\frac {\mu (A\cap B)}{\mu (A)}}.$ It satisfies the probability measure requirements so long as $\mu (A)$ is not zero.[2]

Probability measures are distinct from the more general notion of fuzzy measures, in which there is no requirement that the fuzzy values sum up to $1$ and the additive property is replaced by an order relation based on set inclusion.

Example applications

Market measures, which assign probabilities to financial market spaces based on actual market movements, are examples of probability measures of interest in mathematical finance; for example, in the pricing of financial derivatives.[5] For instance, a risk-neutral measure is a probability measure which assumes that the current value of assets is the expected value of the future payoff taken with respect to that same risk-neutral measure (i.e. calculated using the corresponding risk-neutral density function), and discounted at the risk-free rate. If there is a unique probability measure that must be used to price assets in a market, then the market is called a complete market.[6]

Not all measures that intuitively represent chance or likelihood are probability measures.
For instance, although the fundamental concept of a system in statistical mechanics is a measure space, such measures are not always probability measures.[3] In general, in statistical physics, if we consider sentences of the form "the probability of a system S assuming state A is p", the geometry of the system does not always lead to the definition of a probability measure under congruence, although it may do so in the case of systems with just one degree of freedom.[4]

Probability measures are also used in mathematical biology.[7] For instance, in comparative sequence analysis a probability measure may be defined for the likelihood that a variant may be permissible for an amino acid in a sequence.[8]

Ultrafilters can be understood as $\{0,1\}$-valued probability measures, allowing for many intuitive proofs based upon measures. For instance, Hindman's theorem can be proven from the further investigation of these measures, and their convolution in particular.

See also

• Borel measure – Measure defined on all open sets of a topological space
• Fuzzy measure – Theory of generalized measures in which the additive property is replaced by the weaker property of monotonicity
• Haar measure – Left-invariant (or right-invariant) measure on a locally compact topological group
• Lebesgue measure – Concept of area in any dimension
• Martingale measure – Probability measure
• Set function – Function from sets to numbers

References

1. An introduction to measure-theoretic probability by George G. Roussas 2004 ISBN 0-12-599022-7 page 47
2. Probability, Random Processes, and Ergodic Properties by Robert M. Gray 2009 ISBN 1-4419-1089-1 page 163
3. A course in mathematics for students of physics, Volume 2 by Paul Bamberg, Shlomo Sternberg 1991 ISBN 0-521-40650-1 page 802
4. The concept of probability in statistical physics by Yair M. Guttmann 1999 ISBN 0-521-62128-3 page 149
5.
Quantitative methods in derivatives pricing by Domingo Tavella 2002 ISBN 0-471-39447-5 page 11
6. Irreversible decisions under uncertainty by Svetlana I. Boyarchenko, Serge Levendorskiĭ 2007 ISBN 3-540-73745-6 page 11
7. Mathematical Methods in Biology by J. David Logan, William R. Wolesensky 2009 ISBN 0-470-52587-8 page 195
8. Discovering biomolecular mechanisms with computational biology by Frank Eisenhaber 2006 ISBN 0-387-34527-2 page 127

Further reading

• Billingsley, Patrick (1995). Probability and Measure. John Wiley. ISBN 0-471-00710-2.
• Ash, Robert B.; Doléans-Dade, Catherine A. (1999). Probability & Measure Theory. Academic Press. ISBN 0-12-065202-1.
\begin{document} \title{Flexible Bayesian Dynamic Modeling of Correlation and Covariance Matrices} \begin{abstract} Modeling correlation (and covariance) matrices can be challenging due to the positive-definiteness constraint and potential high-dimensionality. Our approach is to decompose the covariance matrix into the correlation and variance matrices and propose a novel Bayesian framework based on modeling the correlations as products of unit vectors. By specifying a wide range of distributions on a sphere (e.g. the squared-Dirichlet distribution), the proposed approach induces flexible prior distributions for covariance matrices (that go beyond the commonly used inverse-Wishart prior). For modeling real-life spatio-temporal processes with complex dependence structures, we extend our method to dynamic cases and introduce unit-vector Gaussian process priors in order to capture the evolution of correlation among components of a multivariate time series. To handle the intractability of the resulting posterior, we introduce the adaptive $\Delta$-Spherical Hamiltonian Monte Carlo. We demonstrate the validity and flexibility of our proposed framework in a simulation study of periodic processes and an analysis of rat's local field potential activity in a complex sequence memory task. \end{abstract} \keywords{Dynamic covariance modeling; Spatio-temporal models; Geometric methods; Posterior contraction; $\Delta$-Spherical Hamiltonian Monte Carlo} \section{Introduction} \label{sec:intro} Modeling covariance matrices---or more broadly, positive definite (PD) matrices---is one of the most fundamental problems in statistics. In general, the task is difficult because the number of parameters grows quadratically with the dimension of the matrices. The complexity of the challenge increases substantially if we allow dependencies to vary over time (or space) in order to account for the dynamic (non-stationary) nature of the underlying probability model. 
In this paper, we propose a novel solution to the problem by developing a flexible and yet computationally efficient Bayesian inferential framework for both static and dynamic covariance matrices. This work is motivated by modeling the dynamic brain connectivity (i.e., associations between brain activity at different regions). In light of recent technical advances that allow the collection of large, multidimensional neural activity datasets, brain connectivity analyses are emerging as critical tools in neuroscience research. Specifically, the development of such analytical tools will help elucidate fundamental mechanisms underlying cognitive processes such as learning and memory, and identify potential biomarkers for early detection of neurological disorders. There are a number of new methods that have been developed \citep{cribben12,fiecas16,lindquist14,ting15,prado13} but the main limitation of these methods (especially the ones that have a frequentist approach) is a lack of natural framework for inference. Moreover, parametric approaches (e.g. vector auto-regressive models) need to be tested for adequacy for modeling complex brain processes and often have high dimensional parameter spaces (especially with a large number of channels and high lag order). This work provides both a nonparametric Bayesian model and an efficient inferential method for modeling the complex dynamic dependence among multiple stochastic processes that is common in the study of brain connectivity. Within the Bayesian framework, it is common to use an inverse-Wishart prior on the covariance matrix for computational convenience \citep{mardia80,anderson03}. This choice of prior however is very restrictive (e.g. common degrees of freedom for all components of variance) \citep{barnard00,tokuda11}. \cite{daniels1999a, daniels2001} propose uniform shrinkage priors. \cite{daniels1999b} discuss three hierarchical priors to generalize the inverse-Wishart prior. 
Alternatively, one may use decomposition strategies for more flexible modeling choices (see \cite{barnard00} for more details). For instance, \cite{banfield93}, \cite{yang94}, \cite{celeux95}, \cite{leonard92}, \cite{chiu96}, and \cite{bensmail97} propose methods based on the spectral decomposition of the covariance matrix. Another strategy is to use the Cholesky decomposition of the covariance matrix or its inverse, e.g., \cite{pourahmadi99,pourahmadi00,liu93,pinheiro96}. There are other approaches directly related to correlation, including the constrained model based on truncated distributions \citep{liechty2004}, the Cholesky decomposition of correlation matrix using an angular parametrization \citep{POURAHMADI2015}, and methods based on partial autocorrelation and parameterizations using angles \citep{rapisarda2007}. In general, these methods fail to yield full flexibility and generality; and often sacrifice statistical interpretability. While our proposed method in this paper is also based on the separation strategy \citep{barnard00} and the Cholesky decomposition, the main distinction from the existing methods is that it represents each entry of the correlation matrix as a product of unit vectors. This in turn provides a flexible framework for modeling covariance matrices without sacrificing interpretability. Additionally, this framework can be easily extended to dynamic settings in order to model real-life spatio-temporal processes with complex dependence structures that evolve over the course of the experiment. To address the constraint for correlation processes (positive definite matrix at each time having unit diagonals and off-diagonal entries with magnitudes no greater than 1), we introduce unit-vector Gaussian process priors. There are other related works, e.g. generalized Wishart process \citep{wilson10}, and latent factor process \citep{fox15}, that explore the product of vector Gaussian processes. 
In general they do not grant full flexibility in simultaneously modeling the mean, variance and correlation processes. For example, latent factor based models link the mean and covariance processes through a loading matrix, which is restrictive and undesirable if the linear link is not appropriate, and thus are outperformed by our proposed flexible framework (see more details in Section \ref{sec:flexibility}). Other approaches to model non-stationary processes use a representation in terms of a basis such as wavelets \citep{nason00,park14,cho15} and the SLEX \citep{ombao05}, which are actually inspired by the Fourier representations in the Dahlhaus locally stationary processes \citep{dahlhaus00,priestley65}. These approaches are frequentist and do not easily provide a framework for inference (e.g., obtaining confidence intervals). The class of time-domain parametric models allows for the ARMA parameters to evolve over time \citep[see, e.g.][]{rao70} or via parametric latent signals \citep{west99,prado01}. A restriction for this class of parametric models is that some processes might not be adequately modeled by them. The main contributions of this paper are: (a.) a sphere-product representation of the correlation/covariance matrix is introduced to induce flexible priors for correlation/covariance matrices and processes; (b.) a general and flexible framework is proposed for modeling mean, variance, and correlation processes separately; (c.) an efficient algorithm is introduced to infer correlation matrices and processes; (d.) the posterior contraction of modeling covariance (correlation) functions with a Gaussian process prior is studied for the first time. The rest of the paper is organized as follows. In the next section, we present a geometric view of covariance matrices and extend this view to allow covariance matrices to change over time.
In Section \ref{sec:postinfer}, we use this geometrical perspective to develop an effective and computationally efficient inferential method for modeling static and dynamic covariance matrices. Using simulated data, we will evaluate our method in Section \ref{sec:numerics}. In Section \ref{sec:lfp}, we apply our proposed method to local field potential (LFP) activity data recorded from the hippocampus of rats performing a complex sequence memory task \citep{allen14,allen16,ng17}. In the final section, we conclude with discussions on the limitations of the current work and future extensions. \section{Structured Bayesian Modeling of the Covariance (Correlation) Matrices}\label{sec:strct_model} To derive flexible models for covariance and correlation matrices, we start with the Cholesky decomposition, form a sphere-product representation, and finally obtain the separation decomposition in \cite{barnard00} with correlations represented as products of unit vectors. The sphere-product representation is amenable for the inferential algorithm to handle the resulting intractability, and hence lays the foundation for full flexibility in choosing priors. Any covariance matrix $\vect\varSigma=[\sigma_{ij}]>0$ is symmetric positive definite, and hence has a unique Cholesky decomposition $\vect\varSigma = {\bf L} \tp{\bf L}$ where the Cholesky factor ${\bf L}=[l_{ij}]$ is a lower triangular matrix such that $\sigma_{ij} = \sum_{k=1}^{\min\{i,j\}} l_{ik} l_{jk}$. 
We denote the variance vector as $\vect\sigma^2:=\tp{[\sigma^2_1,\cdots,\sigma^2_D]}$, then each variance component, $\sigma_i^2:=\sigma_{ii}$, can be written in terms of the corresponding row ${\bf l}_i$ of ${\bf L}$ as follows: \begin{equation}\label{eq:cov_constr} \sigma_i^2 = \sum_{k=1}^{i} l_{ik}^2 = \Vert {\bf l}_i\Vert^2, \quad {\bf l}_i := [l_{i1},l_{i2},\cdots, l_{ii}] \end{equation} For $\vect\varSigma$ to be positive definite, it is equivalent to require all the leading principal minors $\{M_i\}$ to be positive, \begin{equation}\label{eq:minor_constr} M_i = \prod_{k=1}^i l_{kk}^2 >0,\; i=1,\cdots, D\; \Longleftrightarrow \; l_{ii} \neq 0,\; i=1,\cdots, D \end{equation} Based on \eqref{eq:cov_constr} and \eqref{eq:minor_constr}, for $i\in\{1,\cdots, D\}$, ${\bf l}_i$ can be viewed as a point on a sphere with radius $\sigma_i$ excluding the equator, denoted as $\mathcal S_0^{i-1}(\sigma_i):=\{{\bf l}\in \mathbb R^i | \Vert {\bf l}\Vert_2=\sigma_i, l_{ii}\neq 0\}$. Therefore the space of the Cholesky factor in terms of its rows can be written as a product of spheres and we require \begin{equation}\label{eq:cholcov_manifold} ({\bf l}_1,{\bf l}_2,\cdots,{\bf l}_D) \in \mathcal S_0^0(\sigma_1)\times\mathcal S_0^1(\sigma_2)\cdots\times\mathcal S_0^{D-1}(\sigma_D) \end{equation} Note that \eqref{eq:cholcov_manifold} is the sufficient and necessary condition for the matrix $\vect\varSigma={\bf L} \tp{\bf L}$ to be a covariance matrix. 
We present probabilistic models involving covariance matrices in the following generic form: \begin{equation}\label{eq:strt_cov} \begin{aligned} {\bf y}|\vect\varSigma(\vect\sigma, {\bf L}) &\sim \ell ({\bf y} ; \vect\varSigma(\vect\sigma, {\bf L}) ), \quad \vect\varSigma(\vect\sigma, {\bf L}) = {\bf L} \tp{\bf L} \\ \vect\sigma &\sim p(\vect\sigma) \\ {\bf L} | \vect\sigma &\sim p({\bf L}; \vect\sigma),\quad \tp\mathrm{vech}({\bf L})\in \prod_{i=1}^D \mathcal S_0^{i-1}(\sigma_i) \end{aligned} \end{equation} where $\vect\sigma:=\tp{[\sigma_1,\cdots,\sigma_D]}$, and the half-vectorization in row order, $\tp\mathrm{vech}$, transforms the lower triangular matrix ${\bf L}$ into a vector $({\bf l}_1,{\bf l}_2,\cdots,{\bf l}_D)$. The total dimension of $(\vect\sigma, {\bf L})$ is $\frac{D(D+1)}{2}$.\footnote{For each $i\in \{1,\cdots, D\}$, given $\sigma_i$, there are only $(i-1)$ free parameters on $\mathcal S_0^{i-1}(\sigma_i)$, so there are totally $\frac{D(D-1)}{2}+D$ free parameters.} Alternatively, if we separate variances from covariance, then we have a unique Cholesky decomposition for the correlation matrix $\vect\mathrm{P}=[\rho_{ij}] = {\bf L}^* \tp{({\bf L}^*)}$, where the Cholesky factor ${\bf L}^*=\diag(\vect\sigma^{-1}){\bf L}$ can be obtained by normalizing each row of ${\bf L}$. The magnitude requirements for correlations are immediately satisfied by the Cauchy-Schwarz inequality: $|\rho_{ij}| = \frac{|\sigma_{ij}|}{\sigma_i \sigma_j}= \frac{|\langle {\bf l}_i, {\bf l}_j \rangle|}{\Vert {\bf l}_i\Vert_2 \Vert {\bf l}_j\Vert_2} \leq 1$. Thus we require \begin{equation}\label{eq:cholcorr_manifold} ({\bf l}^*_1,{\bf l}^*_2,\cdots,{\bf l}^*_D) \in \mathcal S_0^0\times\mathcal S_0^1\cdots\times\mathcal S_0^{D-1} \end{equation} where $\mathcal S_0^{i-1}:=\mathcal S_0^{i-1}(1)$. Similarly, \eqref{eq:cholcorr_manifold} is the sufficient and necessary condition for $\vect\mathrm{P}={\bf L}^* \tp{({\bf L}^*)}$ to be a correlation matrix. 
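As a numerical sanity check (a NumPy sketch with our own variable names, not part of the paper), the sphere-product conditions \eqref{eq:cholcov_manifold} and \eqref{eq:cholcorr_manifold} can be verified directly from any positive definite matrix:

```python
import numpy as np

# Sketch: verify that the rows l_i of the Cholesky factor L of a
# covariance matrix Sigma satisfy ||l_i|| = sigma_i, and that
# normalizing the rows yields the Cholesky factor L* of the
# correlation matrix P (the separation Sigma = D_sigma P D_sigma).
# The example matrix is arbitrary but positive definite.

Sigma = np.array([[4.0, 1.2, 0.8],
                  [1.2, 9.0, 2.1],
                  [0.8, 2.1, 1.0]])

L = np.linalg.cholesky(Sigma)            # Sigma = L L^T, L lower triangular
sigma = np.sqrt(np.diag(Sigma))          # standard deviations

assert np.allclose(np.linalg.norm(L, axis=1), sigma)   # l_i on S^{i-1}(sigma_i)

L_star = L / sigma[:, None]              # row-normalized Cholesky factor
P = L_star @ L_star.T                    # correlation matrix
D_sigma = np.diag(sigma)

assert np.allclose(np.diag(P), 1.0)
assert np.all(np.abs(P) <= 1.0 + 1e-12)  # Cauchy-Schwarz bound on rho_ij
assert np.allclose(Sigma, D_sigma @ P @ D_sigma)
```

The row norms of ${\bf L}$ recover $\vect\sigma$ exactly, and the row-normalized factor reproduces the separation $\vect\varSigma = \diag(\vect\sigma)\vect\mathrm{P}\diag(\vect\sigma)$ to machine precision.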
Then we have the following alternatively structured model for covariance $\vect\varSigma$ that involves correlation $\vect\mathrm{P}$ explicitly \begin{equation}\label{eq:strt_corr} \begin{aligned} {\bf y}|\vect\varSigma(\vect\sigma, {\bf L}^*) &\sim \ell ({\bf y} ; \vect\varSigma(\vect\sigma, {\bf L}^*) ), \quad \vect\varSigma(\vect\sigma, {\bf L}^*) = \diag(\vect\sigma) \vect\mathrm{P} \diag(\vect\sigma), \quad \vect\mathrm{P} = {\bf L}^* \tp{({\bf L}^*)} \\ \vect\sigma &\sim p(\vect\sigma) \\ {\bf L}^* &\sim p({\bf L}^*),\quad \tp\mathrm{vech}({\bf L}^*)\in \prod_{i=1}^D \mathcal S_0^{i-1} \end{aligned} \end{equation} Note, this direct decomposition $\vect\varSigma = \diag(\vect\sigma) \vect\mathrm{P} \diag(\vect\sigma)$ as a \emph{separation strategy} is motivated by statistical thinking in terms of standard deviations and correlations \citep{barnard00}. This setting is especially relevant if the statistical quantity of interest is correlation matrix $\vect\mathrm{P}$ itself, and we can then skip inference of the standard deviation $\vect\sigma$ by fixing it to a data-derived point estimate. In what follows, we will show that the above framework includes the inverse-Wishart prior as a special case, but it can be easily generalized to a broader range of priors for additional flexibility. Such flexibility enables us to better express prior knowledge, control the model complexity and speed up computation in modeling real-life phenomena. This is crucial in modeling spatio-temporal processes with complex structures. \subsection{Connection to the Inverse-Wishart Prior} There are some interesting connections between the spherical product representations \eqref{eq:cholcov_manifold} \eqref{eq:cholcorr_manifold} and the early development of the Wishart distribution \citep{wishart28}. 
The original Wishart distribution was derived by orthogonalizing multivariate Gaussian random variables leading to a lower triangular matrix whose elements $\{t_{ij}^*| i\geq j\}$ (analogous to $l_{ij}$ or $l^*_{ij}$) were called \emph{rectangular coordinates}. This way, the probability density has a geometric interpretation as a product of volumes and approximate densities on a series of spherical shells with radius $\{t_{ii}^*\}$ \citep[See more details in][]{sverdrup47, anderson03}. Now we demonstrate that the proposed schemes \eqref{eq:strt_cov} \eqref{eq:strt_corr} include the commonly used inverse-Wishart prior as a special case in modeling covariances. Suppose $\vect\varSigma$ is a random sample from the inverse-Wishart distribution $\mathcal W^{-1}_D(\vect\Psi,\nu)$ with the scale matrix $\vect\Psi>0$ and the degree of freedom $\nu\ge D$. Therefore, $\vect\varSigma^{-1} \sim \mathcal W_D(\vect\Psi^{-1},\nu)$. Denote ${\bf C}$ as the Cholesky factor of $\vect\Psi^{-1}$, i.e. $\vect\Psi^{-1} = {\bf C} \tp{\bf C}$. Then $\vect\varSigma^{-1}$ has the following Bartlett decomposition \citep{anderson03,smith72} \begin{equation}\label{eq:bartlett} \vect\varSigma^{-1} = {\bf T}\, \tp{\bf T}, \quad {\bf T}:= {\bf C}{\bf T}^*, \quad t^*_{ij} \sim \begin{cases} \chi_{\nu-i+1}, & i=j\\ \mathcal N(0,1), & i>j\\ \delta_0, & i<j \end{cases} \end{equation} where the lower triangular matrix ${\bf T}$, named \emph{Bartlett factor}, has the following density \citep[Theorem 7.2.1 of][]{anderson03} \begin{equation*} p({\bf T}) = \frac{|\vect\Psi|^{\nu/2}}{2^{D(\nu-2)/2} \Gamma_D(\nu/2)} \prod_{i=1}^D |t_{ii}|^{\nu-i} \exp\left(-\frac12\mathrm{tr} (\vect\Psi {\bf T}\tp{\bf T}) \right) \end{equation*} with multivariate gamma function defined as $\Gamma_D(x):= \pi^{D(D-1)/4} \prod_{i=1}^D\Gamma[x+(1-i)/2]$.
Now taking the inverse of the first equation in \eqref{eq:bartlett} yields the following \emph{reversed Cholesky decomposition}\footnote{This can be achieved through the \emph{exchange matrix} (a.k.a. reversal matrix, backward identity, or standard involutory permutation) ${\bf E}$ with 1's on the anti-diagonal and 0's elsewhere. Note that ${\bf E}$ is both involutory and orthogonal, i.e. ${\bf E}={\bf E}^{-1}=\tp{\bf E}$. Let ${\bf E}\vect\varSigma{\bf E}={\bf L}\tp{\bf L}$ be the usual Cholesky decomposition. Then $\vect\varSigma = ({\bf E}{\bf L}{\bf E}) \tp{({\bf E}{\bf L}{\bf E})}={\bf U} \tp{\bf U}$ and define ${\bf U}:={\bf E}{\bf L}\tp{\bf E}$.} \begin{equation*}\label{eq:rev_cholcov} \vect\varSigma = {\bf U} \tp{\bf U}, \qquad \sigma_{ij} = \sum_{k=\max\{i,j\}}^D u_{ik} u_{jk}, \quad \mathrm{vech}(\tp{\bf U}) \in \prod_{i=1}^D \mathcal S_0^{D-i}(\sigma_i) \end{equation*} where ${\bf U} := {\bf T}^{-\mathsf T}$ is an upper triangular matrix. The following proposition describes the density of the reversed Cholesky factor ${\bf U}$ of $\vect\varSigma$, which enables us to treat the inverse-Wishart distribution as a special instance of strategy \eqref{eq:strt_cov} or \eqref{eq:strt_corr}. \begin{prop} \label{prop:conn_iWishart} Assume $\vect\varSigma\sim \mathcal W^{-1}_D(\vect\Psi,\nu)$. Then its reversed Cholesky factor ${\bf U}$ has the following density \begin{equation*}\label{eq:pdf_bartlett} p({\bf U}) = \frac{|\vect\Psi|^{\nu/2}}{2^{D(\nu-2)/2} \Gamma_D(\nu/2)} |{\bf U}|^{-(\nu+D+1)} \prod_{i=1}^D u_{ii}^i \exp\left(-\frac12\mathrm{tr} (\vect\Psi {\bf U}^{-\mathsf T} {{\bf U}^{-1}}) \right) \end{equation*} \end{prop} \begin{proof} See Section A in the supplementary file. 
\end{proof} If we normalize each row of ${\bf U}$ and write \begin{equation*} {\bf U} = \diag(\vect\sigma) {\bf U}^*, \quad \sigma_i = \sqrt{\sigma_{ii}}=\Vert {\bf u}_i\Vert, \quad u^*_{ij} = u_{ij}/\sigma_i \, , \end{equation*} then the following joint prior of $(\vect\sigma,{\bf U}^*)$ is inseparable in general: \begin{equation}\label{eq:jtpri4iwishart} p(\vect\sigma,{\bf U}^*) \propto \prod_{i=1}^D |\sigma_i u^*_{ii}|^{i-(\nu+D+1)} \exp\left\{-\frac12\mathrm{tr} (\vect\Psi \diag(\vect\sigma^{-1})({\bf U}^*)^{-\mathsf T} {({\bf U}^*)^{-1}}\diag(\vect\sigma^{-1})) \right\} \end{equation} With this result, we can conditionally model the variance and correlation factors as $p(\vect\sigma|{\bf U}^*)$ and $p({\bf U}^*|\vect\sigma)$ respectively, in a manner similar to our proposed scheme \eqref{eq:strt_cov} or \eqref{eq:strt_corr}. This result is also used to verify the validity of our proposed method \eqref{eq:strt_corr} (see more details in Section \ref{sec:validity}). A similar result exists for the Wishart prior distribution regarding the Cholesky factor. This representation facilitates the construction of a broader class of more flexible prior distributions for covariance matrices, as detailed below. \subsection{More Flexible Priors}\label{sec:priors} Within the above framework, the only constraint on ${\bf U}$ or ${\bf L}$ is that it resides on the product of spheres with increasing dimensions. Using this fact, we can develop a broader class of priors on covariance matrices and thus be able to model processes with more complicated dependence in covariance structures. Since $\vect\sigma$ and ${\bf L}^*$ have independent priors in \eqref{eq:strt_corr}, in what follows we focus on the scheme \eqref{eq:strt_corr}, and for simplicity, we denote the normalized Cholesky factor as ${\bf L}$.
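The product-of-spheres constraint is easy to demonstrate numerically: any lower-triangular matrix whose $i$-th row lies on $\mathcal S^{i-1}$ yields a valid correlation matrix. A minimal numpy sketch (our own illustration; uniform rows are used only for concreteness):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 5
# draw each row l_i uniformly on S^{i-1} (normalized Gaussian), embedded in R^D
L = np.zeros((D, D))
for i in range(D):
    x = rng.standard_normal(i + 1)
    L[i, : i + 1] = x / np.linalg.norm(x)

P = L @ L.T   # any such L yields a unit-diagonal, positive semi-definite matrix
```

The unit-norm rows guarantee $\mathrm{diag}(\vect\mathrm{P})={\bm 1}$, and $|\rho_{ij}|\le 1$ follows from Cauchy-Schwarz.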
Also, following \cite{barnard00}, we assume a log-Normal prior on $\vect\sigma$: \begin{equation*}\label{eq:pri_var} \log(\vect\sigma) \sim \mathcal N(\vect\xi,\vect\Lambda) \end{equation*} We now discuss priors on ${\bf L}$ that properly reflect the prior knowledge regarding the covariance structure among variables. If two variables $y_i$ and $y_j$ (assuming $i<j$) are known to be uncorrelated a priori, i.e. $0=\rho_{ij}=\langle {\bf l}_i, {\bf l}_j\rangle$, then we can choose a prior that encourages ${\bf l}_i \perp {\bf l}_j$, e.g. $l_{jk} \approx 0$ for $k\leq i$. In contrast, if we believe a priori that there is a strong correlation between the two variables, we can specify that ${\bf l}_i$ and ${\bf l}_j$ be linearly dependent, e.g., by setting $[l_{jk}]_{k\leq i} \approx \pm {\bf l}_i\,$. When there is no prior information, we might assume that components are uncorrelated and specify priors for ${\bf l}_i$ that concentrate on the (two) poles of $\mathcal S_0^{i-1}$, \begin{equation}\label{eq:pri_cor} p({\bf l}_i) \propto |l_{ii}|, \quad i=2,\cdots, D \end{equation} Putting more prior probability on the diagonal elements of ${\bf L}$ renders fewer non-zero off-diagonal elements, which in turn leads to a larger number of orthogonal rows and hence uncorrelated variables; that is, such a prior favors zeros in the correlation matrix $\vect\mathrm{P}$. More generally, one can map a probability distribution defined on the simplex onto the sphere and consider the following \emph{squared-Dirichlet} distribution. \begin{dfn}[Squared-Dirichlet distribution] A random vector ${\bf l}_i\in\mathcal S^{i-1}$ is said to have a \emph{squared-Dirichlet} distribution with parameter $\vect\alpha_i:=(\alpha_{i1},\alpha_{i2},\cdots,\alpha_{ii})$ if \begin{equation*}\label{eq:sqDir_def} {\bf l}_i^2 := (l_{i1}^2,l_{i2}^2,\cdots, l_{ii}^2) \sim \mathrm{Dir} (\vect\alpha_i) \end{equation*} Denote ${\bf l}_i \sim \mathrm{Dir}^2 (\vect\alpha_i)$.
Then ${\bf l}_i$ has the following density \begin{equation}\label{eq:sqDir_density} p({\bf l}_i) = p({\bf l}_i^2) | 2{\bf l}_i | \propto ({\bf l}_i^2)^{\vect\alpha_i-1} |{\bf l}_i| = |{\bf l}_i|^{2\vect\alpha_i-1} := \prod_{k=1}^i |l_{ik}|^{2\alpha_{ik}-1} \end{equation} \end{dfn} \begin{rk} This definition includes a large class of flexible prior distributions on the unit sphere that specify different concentrations of probability density through the parameter $\vect\alpha_i$. For example, the above prior \eqref{eq:pri_cor} corresponds to $\vect\alpha_i=(\frac12,\cdots,\frac12,1)$. \end{rk} \begin{figure} \caption{Symmetric squared-Dirichlet distributions $\mathrm{Dir}^2 (\vect\alpha)$ defined on the 2-sphere with different settings for concentration parameter $\vect\alpha=\alpha{\bm 1}$. The uniform distribution on the simplex, $\mathrm{Dir}({\bm 1})$, becomes non-uniform on the sphere due to the stretch of geometry (left); the symmetric Dirichlet distribution $\mathrm{Dir}(\frac12 {\bm 1})$ becomes uniform on the sphere (middle); with $\alpha$ closer to 0, the induced distribution becomes more concentrated on the polar points (right).} \label{fig:sqdirichlet} \end{figure} To induce a prior distribution for the correlation matrix $\vect\mathrm{P} = {\bf L} \tp{\bf L}$, one can specify priors on row vectors of ${\bf L}$, ${\bf l}_i \sim \mathrm{Dir}^2 (\vect\alpha_i)$ for $i=2,\cdots, D$. To encourage small correlation, we choose the concentration parameter $\vect\alpha_i$ so that the probability density concentrates around the (two) poles of $\mathcal S_0^{i-1}$, e.g. $0<\alpha_{ik}\ll \alpha_{ii}$ for $k < i$. Figure \ref{fig:sqdirichlet} illustrates the density heat maps of some symmetric squared-Dirichlet distributions $\mathrm{Dir}^2 (\alpha {\bm 1})$ on $\mathcal S^2$. 
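Sampling from a squared-Dirichlet distribution reduces to drawing the simplex coordinates and attaching random signs (the density $|{\bf l}_i|^{2\vect\alpha_i-1}$ is sign-symmetric). A small sketch with the concentration parameter of the pole-favoring prior $\vect\alpha=(\frac12,\frac12,1)$ chosen for illustration:

```python
import numpy as np

def sample_sq_dirichlet(alpha, rng):
    """Draw l on the unit sphere with l**2 ~ Dir(alpha): signed square root of a Dirichlet draw."""
    g = rng.dirichlet(alpha)                        # point on the simplex
    s = rng.choice([-1.0, 1.0], size=len(alpha))    # each orthant equally likely by symmetry
    return s * np.sqrt(g)

rng = np.random.default_rng(2)
l = sample_sq_dirichlet(np.array([0.5, 0.5, 1.0]), rng)
```

Since the Dirichlet draw sums to one, the signed square root lands exactly on the unit sphere.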
It is interesting that the squared-Dirichlet distribution induces the two important uniform prior distributions over correlation matrices proposed by \cite{barnard00} as flexible priors for covariance matrices, as stated in the following theorem. \begin{thm}[Uniform distributions] \label{thm:conn_unif} Let $\vect\mathrm{P}={\bf L}\tp{\bf L}$. Suppose ${\bf l}_i \sim \mathrm{Dir}^2 (\vect\alpha_i)$, for $i=2,\cdots,D$, are independent, where ${\bf l}_i$ is the $i$-th row of ${\bf L}$. We have the following \begin{enumerate} \item If $\vect\alpha_i = ( \frac12 \tp{\bm 1}_{i-1}, \alpha_{ii} ), \; \alpha_{ii} = \frac{(i-2)D-1}{2}$, then $\vect\mathrm{P}$ follows a \emph{marginally} uniform distribution, that is, $\rho_{ij} \sim \mathrm{Unif}(-1,1),\quad i\neq j$. \item If $\vect\alpha_i = ( \frac12 \tp{\bm 1}_{i-1}, \alpha_{ii} ), \; \alpha_{ii} = \frac{D-i}{2}+1$, then $\vect\mathrm{P}$ follows a \emph{jointly} uniform distribution, that is, $p(\vect\mathrm{P}) \propto 1$. \end{enumerate} \end{thm} \begin{proof} See Section A in the supplementary file. \end{proof} Another natural spherical prior can be obtained by constraining a multivariate Gaussian random vector to have unit norm. This is later generalized to a vector Gaussian process constrained to a sphere that serves as a suitable prior for modeling correlation processes.
Now we consider the following \emph{unit-vector Gaussian} distribution: \begin{dfn}[Unit-vector Gaussian distribution] A random vector ${\bf l}_i\in\mathcal S^{i-1}$ is said to have a \emph{unit-vector Gaussian} distribution with mean $\vect\mu$ and covariance $\vect\varSigma$ if \begin{equation*}\label{eq:uvG_def} {\bf l}_i \sim \mathcal N_i(\vect\mu, \vect\varSigma), \quad \text{with} \; \Vert {\bf l}_i\Vert_2 =1 \end{equation*} Then we denote ${\bf l}_i \sim \mathcal N_i^{\mathcal S}(\vect\mu, \vect\varSigma)$ and ${\bf l}_i$ has the following (conditional) density \begin{equation*}\label{eq:uvG_density} p({\bf l}_i|\,\Vert {\bf l}_i\Vert_2=1) = \frac{1}{(2\pi)^{\frac{i}{2}}|\vect\varSigma|^{\frac12}} \exp\left\{-\frac12 \tp{({\bf l}_i-\vect\mu)} \vect\varSigma^{-1} ({\bf l}_i-\vect\mu) \right\}, \quad \Vert {\bf l}_i\Vert_2 =1 \end{equation*} \end{dfn} \begin{rk} This conditional density essentially defines the following \emph{Fisher-Bingham} distribution \citep[a.k.a. \emph{generalized Kent} distribution,][]{kent82,mardia09}. If $\vect\varSigma={\bf I}$, then the above distribution reduces to the \emph{von Mises-Fisher} distribution \citep{fisher53,mardia09} as a special case. If in addition $\vect\mu={\bf 0}$, then the above density becomes a constant; that is, the corresponding distribution is uniform on the sphere $\mathcal S_0^{i-1}$. See more details in Section E.1 of the supplementary file. \end{rk} \subsection{Dynamically Modeling the Covariance Matrices}\label{sec:dynamic} We can generalize the proposed framework for modeling covariance/correlation matrices to the dynamic setting by adding subscript $t$ to variables in the model \eqref{eq:strt_cov} and the model \eqref{eq:strt_corr}, called the \emph{dynamic covariance} and \emph{dynamic correlation} models, respectively. We focus on the latter in this section. One can model the components of $\vect\sigma_t$ as independent dynamic processes using, e.g. ARMA, GARCH, or log-Gaussian processes.
For ${\bf L}_t$, we use vector processes. Since each row of ${\bf L}_t$ has to be on a sphere of certain dimension, we require the unit norm constraint for the dynamic process over time. We refer to any multivariate process ${\bf l}_i(x)$ satisfying $\Vert {\bf l}_i(x) \Vert \equiv 1, \; \forall x \in \mathcal X$ as \emph{unit-vector process (uvP)}. A unit-vector process can be obtained by constraining an existing multivariate process, e.g. the \emph{vector Gaussian process (vGP)}, as defined below. \begin{dfn}[Vector Gaussian process] A $D$-dimensional \emph{vector Gaussian process} ${\bf Z}(x):=(Z_1(x),\cdots, Z_D(x))$, with vector mean function $\vect\mu(x)=(\mu_1(x),\cdots,\mu_D(x))$, covariance function $\mathcal C$ and ($D$-dimensional) cross covariance ${\bf V}_{D\times D}$, \begin{equation*} {\bf Z}(x) \sim \mathcal{GP}_D(\vect\mu,\mathcal C, {\bf V}_{D\times D}) \end{equation*} is a collection of $D$-dimensional random vectors, indexed by $x\in \mathcal X$, such that for any finite set of indices $\{x_1,\cdots, x_N\}$, the random matrix $\widetilde{\bf Z}_{N\times D}:=\tp{({\bf Z}(x_1),\cdots,{\bf Z}(x_N))}$ has the following matrix normal distribution \begin{equation*} \widetilde{\bf Z}_{N\times D} \sim \mathcal{MN}_{N\times D} ({\bf M}_{N\times D}, {\bf K}_{N\times N}, {\bf V}_{D\times D}) \end{equation*} where ${\bf M}_{N\times D}:=({\bf m}_1,\cdots,{\bf m}_D)$, and ${\bf m}_k=\tp{(\mu_k(x_1),\cdots,\mu_k(x_N))}$, and ${\bf K}$ is the kernel matrix with elements $K_{ij}=\mathcal C(x_i,x_j)$. \end{dfn} \begin{rk} Note for each $k=1,\cdots D$, we have the following marginal GP \begin{equation*} Z_k(x) \sim \mathcal{GP}(\vect\mu_k, \mathcal C) \end{equation*} In the above definition, we require a common kernel $\mathcal C$ for all the marginal GPs, whose dependence is characterized by the cross covariance ${\bf V}_{D\times D}$. 
On the other hand, for any fixed $x^*\in \mathcal X$, we have \begin{equation*} {\bf Z}(x^*) \sim \mathcal N_D(\vect\mu(x^*), {\bf V}_{D\times D}) \end{equation*} For simplicity, we often consider $\vect\mu\equiv 0$ and ${\bf V}_{D\times D}={\bf I}_D$. That is, $Z_k(x)\overset{iid}{\sim} \mathcal{GP}(0,\mathcal C)$ for $k=1,\cdots,D$. \end{rk} Restricting the vGP ${\bf Z}(\cdot)$ to the sphere yields a \emph{unit-vector Gaussian process (uvGP)} ${\bf Z}^*(\cdot):={\bf Z}(\cdot) |\, \{\Vert {\bf Z}(\cdot)\Vert_2\equiv 1\}$, denoted as ${\bf Z}^*(\cdot) \sim \mathcal{GP}_D^{\mathcal S}(\vect\mu,\mathcal C, {\bf V})$. Note that for any fixed $x^*\in\mathcal X$, ${\bf Z}^*(x^*) \sim \mathcal N_D^{\mathcal S}(\vect\mu, {\bf V})$. Setting $\vect\mu\equiv 0$ and ${\bf V}={\bf I}$, and conditioning on the length $\ell_n$ of each row of $\widetilde{\bf Z}$, we have \begin{equation*} p(\widetilde{\bf Z}^*|\,\{\Vert z_{n\cdot}\Vert=\ell_n\}) = \frac{\prod_{n=1}^N \ell_n^D}{(2\pi)^{\frac{ND}{2}}|{\bf K}|^{\frac{D}{2}}} \exp\left\{-\frac12\mathrm{tr} \left[ \tp{(\widetilde{\bf Z}^*)} \diag(\{\ell_n\}){\bf K}^{-1}\diag(\{\ell_n\}) \widetilde{\bf Z}^* \right]\right\} \end{equation*} This conditional density is preserved by the inference algorithm in Section \ref{sec:postinfer} and used for defining priors for correlations with all $\ell_n=1$. For each marginal GP, we select the following exponential function as the common kernel \begin{equation*} \mathcal C(x,x') = \gamma \exp(-0.5\Vert x-x'\Vert^s/\rho^s) \end{equation*} where $s$ controls the smoothness, the scale parameter $\gamma$ is given an inverse-Gamma prior, and the correlation length parameter $\rho$ is given a log-normal prior. Figure \ref{fig:sph_proc} shows a realization of the vector GP ${\bf Z}_t$, the unit-vector GP (forming rows of) ${\bf L}_t$, and the induced correlation process $\vect\mathrm{P}_t$.
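A vGP realization of this kind is straightforward to simulate; projecting each point of the path onto the sphere is only a crude surrogate for the exact norm-conditioning above, but it suffices to visualize a unit-vector path (function name and hyper-parameters are ours):

```python
import numpy as np

def vgp_draw(t, D, gamma=1.0, rho=0.3, s=2.0, rng=None, jitter=1e-6):
    """N x D draw from GP_D(0, C, I): i.i.d. GP columns sharing one kernel C."""
    rng = rng or np.random.default_rng()
    diff = np.abs(t[:, None] - t[None, :])
    K = gamma * np.exp(-0.5 * diff**s / rho**s) + jitter * np.eye(len(t))
    return np.linalg.cholesky(K) @ rng.standard_normal((len(t), D))

t = np.linspace(0.0, 1.0, 40)
Z = vgp_draw(t, D=3, rng=np.random.default_rng(4))
Z_unit = Z / np.linalg.norm(Z, axis=1, keepdims=True)   # row-wise projection onto S^2
```

Each row of `Z_unit` then lies on the unit sphere, mimicking a uvGP path.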
\begin{figure} \caption{A realization of vector GP ${\bf Z}_t$ (left), unit-vector GP (forming rows of) ${\bf L}_t$ (middle) and the induced correlation process $\vect\mathrm{P}_t$ (right).} \label{fig:sph_proc} \end{figure} In what follows, we focus on multivariate time series; therefore, we use the one dimensional time index $t\in \mathcal X=\mathbb R^+$. The overall dynamic correlation model can be summarized as follows: \begin{equation}\label{eq:strt_dyn_model} \begin{aligned} {\bf y}_t \sim \mathcal N(\vect\mu_t,\vect\varSigma_t), &\quad \vect\varSigma_t = \diag(\vect\sigma_t) {\bf L}_t \tp{{\bf L}_t} \diag(\vect\sigma_t) \\ \vect\mu_t \sim \mathcal{GP}_D(0,\mathcal C_{\mu},{\bf I}), &\quad \mathcal C_\mu(t,t') = \gamma_{\mu} \exp(-0.5\Vert t-t'\Vert^s/\rho_{\mu}^s) \\ \log \vect\sigma_t \sim \mathcal{GP}_D(0,\mathcal C_{\sigma},{\bf I}), &\quad \mathcal C_\sigma(t,t') = \gamma_{\sigma} \exp(-0.5\Vert t-t'\Vert^s/\rho_{\sigma}^s) \\ {\bf l}_i(t) \sim \mathcal{GP}_i^{\mathcal S}({\bf n}_i,\mathcal C_L,{\bf I}), &\quad \mathcal C_L(t,t') = \gamma_L \exp(-0.5\Vert t-t'\Vert^s/\rho_L^s) \\ \gamma_* \sim \Gamma^{-1}(a_*,b_*), &\quad \log\rho_* \sim \mathcal N(m_*,V_*), \; * = \mu, \sigma, \,\textrm{or}\, L \end{aligned} \end{equation} where a constant mean function ${\bf n}_i=(0,\cdots,0,1)$ is used in the uvGP prior for ${\bf l}_i(t)$, with mean matrix ${\bf M}={\bm 1}_N\otimes \tp{\bf n}_i$ for the realization $\tilde{\bf l}_i$. This model \eqref{eq:strt_dyn_model} captures the spatial dependence in the matrix $\vect\varSigma_t$, which evolves over time, while the temporal correlation is characterized by various GPs. The induced covariance process $\vect\varSigma_t$ is not a generalized Wishart process \citep{wilson10}, which models only the Cholesky factor of the covariance with GPs.
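At a fixed time point, assembling $\vect\varSigma_t=\diag(\vect\sigma_t){\bf L}_t\tp{{\bf L}_t}\diag(\vect\sigma_t)$ from its pieces is mechanical; the values below are arbitrary stand-ins rather than draws from the stated GP priors:

```python
import numpy as np

rng = np.random.default_rng(5)
D = 4
# stand-in normalized Cholesky factor: lower-triangular rows normalized onto spheres
L = np.tril(rng.standard_normal((D, D)))
L /= np.linalg.norm(L, axis=1, keepdims=True)     # each row l_i in S^{i-1}
sigma = np.exp(rng.standard_normal(D))            # stand-in positive scales

P = L @ L.T                                       # correlation matrix
Sigma = np.diag(sigma) @ P @ np.diag(sigma)       # covariance matrix
```

The decomposition separates scale (`sigma`) from dependence (`P`), which is exactly what lets the dynamic model place independent processes on each.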
Although a GP-based dynamic covariance model may behave similarly to the dynamic correlation model \eqref{eq:strt_dyn_model}, the latter provides extra flexibility in modeling the evolution of variances and correlations separately. In general, such flexibility could be useful in handling constraints for processes, e.g. modeling the dynamic probability for binary time series. If the focus is on modeling the correlation among multiple time series, then one can substitute $\vect\sigma_t$ with a point estimate $\widehat{\vect\sigma}$ from one trial and assume a steady variance vector. Alternatively, if sufficient trials are present, one can obtain an empirical estimate, $\widehat{\vect\sigma}_t$, from multiple trials at each time point. In the following, we study the posterior contraction of GP modeling in this setting. \subsection{Posterior Contraction Theorem} We now provide a theorem on the posterior contraction of the dynamic covariance model before we conclude this section. Because the posterior contraction for mean regression using Gaussian processes has been extensively investigated in the literature \citep{vanderVaart08a,vandervaart09,vanderVaart11,yang16}, we only investigate the posterior contraction for the covariance regression and set $\vect\mu_t\equiv 0$. We leave the posterior contraction of the dynamic correlation model \eqref{eq:strt_dyn_model} for future work. Note that the Cholesky decomposition of a covariance matrix $\vect\varSigma={\bf L}\tp{\bf L}$ is unique if all the diagonal entries of ${\bf L}$ are positive. Therefore, in the remainder of this section, we identify Cholesky factors up to a column-wise sign, i.e. ${\bf L} \sim {\bf L} \diag({\bm 1}_D - 2\sum_{j\in J} e_j)$ for $J\subset\{1,\cdots,D\}$ where $e_j$ is the $j$-th column of the identity matrix ${\bf I}_D$.
In most cases, the Gaussian process ${\bf L}_t$ can be viewed as a tight Borel measurable map in a Banach space, e.g. a space of continuous functions or an $L_p$ space. It is well known that the support of a centered GP is equal to the closure of the \emph{reproducing kernel Hilbert space (RKHS)} $\mathbb H$ associated to this process \citep[Lemma 5.1 of][]{vanderVaart08b}. Because the posterior distribution necessarily puts all its mass on the support of the prior, posterior consistency requires the true parameter ${\bf L}_0$ governing the distribution of the data to fall in this support \citep{vanderVaart08a}. Following \cite{vanderVaart08a,vanderVaart08b,vanderVaart11}, we express the rate of the posterior contraction in terms of the \emph{concentration} function \begin{equation}\label{eq:concent_1} \phi_{{\bf L}_0}(\varepsilon) = \inf_{h\in\mathbb H : \,\Vert h-{\bf L}_0\Vert < \varepsilon} \Vert h\Vert_{\mathbb H}^2 - \log \Pi({\bf L} :\, \Vert {\bf L}\Vert < \varepsilon) \end{equation} where $\Vert\cdot\Vert$ is the norm of the Banach space where the GP ${\bf L}$ takes value, $\Pi$ is the GP prior and $\mathbb H$ is the associated RKHS with norm $\Vert\cdot\Vert_{\mathbb H}$. Under certain regularity conditions, the posterior contracts, as the amount of data indexed by $n$ increases, at the rate $\varepsilon_n\rightarrow 0$ satisfying \begin{equation}\label{eq:concent_2} \phi_{{\bf L}_0}(\varepsilon_n) \leq n \varepsilon_n^2 \end{equation} Define $\Vert{\bf L}(t)\Vert_\infty := \max_{1\leq i,j\leq D} \sup_{t\in\mathcal X} |l_{ij}(t)|$. Consider the separable Banach space $L^\infty(\mathcal X)^{D(D+1)/2}:=\{{\bf L}(t): \Vert{\bf L}(t)\Vert_\infty<+\infty\}$. Let $p$ be a (centered) Gaussian model, which is uniquely determined by the covariance matrix $\vect\varSigma={\bf L}\tp{\bf L}$. Therefore the model density is parametrized by ${\bf L}$, hence denoted as $p_{\bf L}$.
Denote $P_{\bf L}^{(n)}:=\otimes_{i=1}^n P_{{\bf L},i}$ as the product measure on $\otimes_{i=1}^n(\mathcal X_i,\mathcal B_i,\mu_i)$. Each $P_{{\bf L},i}$ has a density $p_{{\bf L},i}$ with respect to the $\sigma$-finite measure $\mu_i$. Define the average Hellinger distance as $d_n^2({\bf L},{\bf L}')=\frac1n\sum_{i=1}^n\int (\sqrt{p_{{\bf L},i}}-\sqrt{p_{{\bf L}',i}})^2 d\mu_i$. Denote the observations $Y^{(n)}=\{Y_i\}_{i=1}^n$ with $Y_i={\bf y}(t_i)$. Note that they are independent but not identically distributed (inid). Now we state the main theorem of posterior contraction. \begin{thm}[Posterior contraction] \label{thm:post_contr} Let ${\bf L}-{\bf I}$ be a Borel measurable, zero-mean tight Gaussian random element in $L^\infty(\mathcal X)^{D(D+1)/2}$ and $P_{\bf L}^{(n)}=\otimes_{i=1}^n P_{{\bf L},i}$ be the product measure of $Y^{(n)}$ parametrized by ${\bf L}$. Let $\phi_{{\bf L}_0}$ be the function in \eqref{eq:concent_1} with the uniform norm $\Vert\cdot\Vert_\infty$. If ${\bf L}_0$ is contained in the support of ${\bf L}$ and $\phi_{{\bf L}_0}$ satisfies \eqref{eq:concent_2} with $\varepsilon_n\geq n^{-\frac12}$, then $\Pi_n({\bf L} : d_n({\bf L}, {\bf L}_0)> M_n\varepsilon_n | Y^{(n)}) \rightarrow 0$ in $P_{{\bf L}_0}^{(n)}$-probability for every $M_n\rightarrow \infty$. \end{thm} \begin{proof} See Section B in the supplementary file. \end{proof} \begin{rk} In principle, the smoothness of the GP should match the regularity of the true parameter to achieve the optimal rate of contraction \citep{vanderVaart08a,vanderVaart11}. One can scale the GP, e.g. using an inverse-Gamma bandwidth, to get the optimal contraction rate for every regularity level so that the resulting estimator is rate adaptive \citep{vandervaart09,vanderVaart11}. One can refer to Section 3.2 of \citep{vanderVaart11} for posterior contraction rates using the squared exponential kernel for the GP. We leave further investigation on contraction rates in the setting of covariance regression to future work.
\end{rk} \begin{rk} Here the GP prior ${\bf L}$ defines a (mostly finite) probability measure on the space of bounded functions. The true parameter function ${\bf L}_0$ is required to be contained in the support of the prior, i.e. the closure of the RKHS of ${\bf L}$. The contraction rate depends on the position of ${\bf L}_0$ relative to the RKHS and the small-ball probability $\Pi(\Vert {\bf L}\Vert<\varepsilon)$. \end{rk} \section{Posterior Inference}\label{sec:postinfer} Now we obtain the posterior probability of the mean $\vect\mu_t$, variance $\vect\sigma_t$, Cholesky factor of correlation ${\bf L}_t$, and hyper-parameters $\vect\gamma:=(\gamma_\mu,\gamma_\sigma,\gamma_L)$ and $\vect\rho:=(\rho_\mu,\rho_\sigma,\rho_L)$ in the model \eqref{eq:strt_dyn_model}. Denote the realization of the processes $\vect\mu_t, \vect\sigma_t, {\bf L}_t$ at discrete time points $\{t_n\}_{n=1}^N$ as $\widetilde{\vect\mu}_{N\times D}, \widetilde{\vect\sigma}_{N\times D}, \widetilde{\bf L}_{N\times D\times D}$ respectively. Transform the parameters $\widetilde{\vect\tau}:=\log(\widetilde{\vect\sigma})$, $\vect\eta:=\log(\vect\rho)$ for the convenience of calculation. Denote $\widetilde{\bf Y}_{M\times N\times D}:=\{{\bf Y}_1,\cdots, {\bf Y}_M\}$ for $M$ trials, $({\bf Y}_m)_{N\times D}:=\tp{[{\bf y}_{m1},\cdots, {\bf y}_{mN}]}$ and ${\bf y}_{mn}^*:=({\bf y}_{mn}-\vect\mu_n) \circ e^{-\vect\tau_n}$ where $\circ$ is the Hadamard product (a.k.a. Schur product), i.e. the entry-wise product. Let ${\bf K}_*(\gamma_*,\eta_*) = \gamma_* {\bf K}_{0*}(\eta_*)$ and $\tilde{\bf l}_i^*:=\tilde{\bf l}_i-{\bm 1}_N\otimes \tp{\bf n}_i$. \subsection{Metropolis-within-Gibbs}\label{sec:sample_procedure} We use a Metropolis-within-Gibbs algorithm and alternate updating the model parameters $\widetilde{\vect\mu}, \widetilde{\vect\tau}, \widetilde{\bf L}, \vect\gamma, \vect\eta$. We now list the parameters and their respective updates one by one. \noindent $(\vect\gamma)$.
\quad Note the prior for $\vect\gamma$ is conditionally conjugate given $* = \mu, \tau, \,\textrm{or}\, L$, \begin{equation*} \gamma_* | \tilde*, \eta_* \sim \Gamma^{-1}(a'_*,b'_*), \quad a'_* = a_* + \frac{ND}{2} (\frac{D+1}{2}-\frac1D)^{[*=L]},\quad b'_* = b_* + \frac12 \mathrm{tr} (\tp{\tilde*} {\bf K}_{0*}(\eta_*)^{-1} \tilde* ) \end{equation*} where $[{\rm condition}]$ is 1 with the condition satisfied and 0 otherwise. \noindent $(\vect\eta)$. \quad Given $* = \mu, \tau, \,\textrm{or}\, L$, we could sample $\eta_*$ using the slice sampler \citep{neal03}, which only requires log-posterior density and works well for scalar parameters, \begin{equation*} \log p(\eta_*| \tilde*, \gamma_*) = -\frac{D(\frac{D+1}{2}-\frac1D)^{[*=L]}}{2} \log|{\bf K}_{0*}(\eta_*)| - \frac{\mathrm{tr} (\tp{\tilde*} {\bf K}_{0*}(\eta_*)^{-1} \tilde* )}{2\gamma_*} - \frac{(\eta_*-m_*)^2}{2V_*} \end{equation*} \noindent $(\widetilde{\vect\mu})$. \quad By the definition of vGP, we have $\widetilde{\vect\mu}| \gamma_\mu,\eta_\mu \sim \mathcal{MN}_{N\times D}({\bf 0},{\bf K}_\mu,{\bf I}_D)$; therefore, $\mathrm{vec}(\widetilde{\vect\mu}) |\gamma_\mu,\eta_\mu \sim \mathcal N_{ND}({\bf 0},{\bf I}_D\otimes {\bf K}_\mu)$. 
On the other hand, one can write \begin{equation*} \begin{aligned} \sum_{m=1}^M \sum_{n=1}^N \tp{{\bf y}_{mn}^*} \vect\mathrm{P}_n^{-1} {\bf y}_{mn}^* &= \sum_{m=1}^M \tp{\mathrm{vec}(\tp{({\bf Y}_m-\widetilde{\vect\mu})})} \diag(\{\widetilde{\vect\varSigma}_n^{-1}\}) \mathrm{vec}(\tp{({\bf Y}_m-\widetilde{\vect\mu})}) \\ &= \sum_{m=1}^M \tp{(\mathrm{vec}({\bf Y}_m)-\mathrm{vec}(\widetilde{\vect\mu}))} \widetilde{\vect\varSigma}_K^{-1} (\mathrm{vec}({\bf Y}_m)-\mathrm{vec}(\widetilde{\vect\mu})) \end{aligned} \end{equation*} where $\widetilde{\vect\varSigma}_K^{-1}:=K_{(D,N)} \diag(\{\widetilde{\vect\varSigma}_n\})^{-1} K_{(N,D)}$, and $K_{(N,D)}$ is the \emph{commutation matrix} of size $ND\times ND$ such that for any $N\times D$ matrix ${\bf A}$, $K_{(N,D)} \mathrm{vec}({\bf A})=\mathrm{vec}(\tp{\bf A})$ \citep{tracy69,magnus79}. Therefore, the prior on $\mathrm{vec}(\widetilde{\vect\mu})$ is conditionally conjugate, and we have \begin{align*} \mathrm{vec}(\widetilde{\vect\mu}) | \widetilde{\bf Y}, \widetilde{\vect\varSigma}, \gamma_\mu,\eta_\mu &\sim \mathcal N_{ND}(\vect\mu', \vect\varSigma')\\ \vect\mu' &= \vect\varSigma' \widetilde{\vect\varSigma}_K^{-1} \sum_{m=1}^M \mathrm{vec}({\bf Y}_m), \quad \vect\varSigma' = \left({\bf I}_D\otimes{\bf K}_\mu^{-1}+M \widetilde{\vect\varSigma}_K^{-1}\right)^{-1} \end{align*} \noindent $(\widetilde{\vect\tau})$. \quad Using a similar argument by matrix Normal prior for $\widetilde{\vect\tau}$, we have $\mathrm{vec}(\widetilde{\vect\tau})| \gamma_\tau,\eta_\tau \sim \mathcal N_{ND}({\bf 0},{\bf I}_D\otimes {\bf K}_\tau)$. 
Therefore, we could use the elliptic slice sampler \citep[ESS,][]{murray10}, which only requires the log-likelihood \begin{equation*} \log p(\widetilde{\vect\tau}; \widetilde{\bf Y},\widetilde{\vect\mu}) = - M \tp{\bm 1}_{ND} \mathrm{vec}(\widetilde{\vect\tau}) - \sum_{m=1}^M \frac12 \tp{\mathrm{vec}({\bf Y}_m^*)} \widetilde{\vect\mathrm{P}}_K^{-1} \mathrm{vec}({\bf Y}_m^*) \end{equation*} where $\widetilde{\vect\mathrm{P}}_K^{-1}:=K_{(D,N)} \diag(\{\widetilde{\vect\mathrm{P}}_n\})^{-1} K_{(N,D)}$ and ${\bf Y}_m^*:=({\bf Y}_m-\widetilde{\vect\mu}) \circ \exp(-\widetilde{\vect\tau})$. \noindent $(\widetilde{\bf L})$. \quad For each $n\in\{1,\cdots N\}$, we have $\tp\mathrm{vech}({\bf L}_n)\in \prod_{i=1}^D \mathcal S_0^{i-1}$. We could sample from its posterior distribution using the \emph{$\Delta$-Spherical Hamiltonian Monte Carlo ($\Delta$-SphHMC)} described below. The log-posterior density of $\widetilde{\bf L}$ is \begin{equation*} \log p(\widetilde{\bf L}| \widetilde{\bf Y}, \widetilde{\vect\mu},\widetilde{\vect\tau},\gamma_L,\eta_L) = - \sum_{n=1}^N \left[ M \log |{\bf L}_n| + \sum_{m=1}^M \frac12 \tp{{\bf y}_{mn}^*} \vect\mathrm{P}_n^{-1} {\bf y}_{mn}^*\right] - \frac12 \sum_{i=2}^D \mathrm{tr} (\tilde{\bf l}_i^{*\mathsf T} {\bf K}_L^{-1} \tilde{\bf l}_i^*) \end{equation*} The derivative of log-likelihood with respect to ${\bf L}_n$ and the derivative of log-prior with respect to $\tilde{\bf l}_i$ can be calculated as \begin{equation*} \frac{\partial \log p(\widetilde{\bf L}; \widetilde{\bf Y},\widetilde{\vect\mu},\widetilde{\vect\tau}) }{\partial {\bf L}_n} = - M \frac{{\bf I}_D}{{\bf L}_n} + \sum_{m=1}^M \mathrm{tril} ( \vect\mathrm{P}_n^{-1} {\bf y}_{mn}^* \tp{{\bf y}_{mn}^*} {\bf L}_n^{-\mathsf T} ), \; \frac{\partial \log p(\widetilde{\bf L}| \gamma_L,\eta_L) }{\partial \tilde{\bf l}_i} = - {\bf K}_L^{-1} \tilde{\bf l}_i^* \end{equation*} \subsection{Spherical HMC}\label{sec:SphHMC} We need an efficient algorithm to handle the intractability in the 
posterior distribution of $\widetilde{\bf L}$ introduced by various flexible priors. \emph{Spherical Hamiltonian Monte Carlo} \citep[SphHMC,][]{lan14b,lan16} is a Hamiltonian Monte Carlo \citep[HMC,][]{duane87,neal11} algorithm on spheres that can be viewed as a special case of geodesic Monte Carlo \citep{byrne13}, or manifold Monte Carlo methods \citep{girolami11,lan14a}. The algorithm was originally proposed to handle norm constraints in sampling so it is natural to use it to sample each row of the Cholesky factor of a correlation matrix with unit 2-norm constraint. The general notation ${\bf q}$ is instantiated as ${\bf l}_i$ in this section. Assume a probability distribution with density function $f({\bf q})$ is defined on a $(D-1)$ dimensional sphere with radius $r$, $\mathcal S^{D-1}(r)$. Due to the norm constraint, there are $(D-1)$ free parameters ${\bf q}_{-D}:=(q_1,\cdots,q_{D-1})$, which can be viewed as the Cartesian coordinates for the manifold $\mathcal S_+^{D-1}(r)$. To induce Hamiltonian dynamics on the sphere, we define the potential energy for position ${\bf q}$ as $U({\bf q}):= -\log f({\bf q})$. Endowing the canonical spherical metric ${\bf G}({\bf q}_{-D}) = {\bf I}_{D-1}+ \frac{{\bf q}_{-D}\tp {\bf q}_{-D}}{q_D^2}$ on the Riemannian manifold $\mathcal S^{D-1}(r)$, we introduce the auxiliary velocity vector ${\bf v}|{\bf q} \sim \mathcal N({\bf 0}, {\bf G}({\bf q})^{-1})$ and define the associated kinetic energy as $K({\bf v};{\bf q}):= -\log f_{\mathcal N}({\bf v}|{\bf q})=-\frac12 \log|{\bf G}({\bf q}_{-D})|+\frac12 \tp {\bf v}_{-D} {\bf G}({\bf q}_{-D}){\bf v}_{-D}$ \citep{girolami11}. 
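The identity $K_0({\bf v};{\bf q})=\frac12\tp{\bf v}{\bf v}$ for tangent velocities can be checked numerically: with $v_D=-\tp{{\bf q}_{-D}}{\bf v}_{-D}/q_D$ (so that $\langle{\bf q},{\bf v}\rangle=0$), one has $\tp{{\bf v}_{-D}}{\bf G}({\bf q}_{-D}){\bf v}_{-D}=\tp{\bf v}{\bf v}$. A quick sketch with unit radius (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(6)
D = 5
q = rng.standard_normal(D); q /= np.linalg.norm(q)        # point on S^{D-1}
v_free = rng.standard_normal(D - 1)                       # free coordinates v_{-D}
v_last = -q[:-1] @ v_free / q[-1]                         # tangency: <q, v> = 0
v = np.append(v_free, v_last)

G = np.eye(D - 1) + np.outer(q[:-1], q[:-1]) / q[-1]**2   # canonical spherical metric
K0_metric = 0.5 * v_free @ G @ v_free                     # (1/2) v_{-D}' G v_{-D}
K0_euclid = 0.5 * v @ v                                   # (1/2) v' v
```

The two kinetic-energy expressions agree up to floating-point error, which is what makes the Euclidean form $\frac12\tp{\bf v}{\bf v}$ usable in the sampler.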
Therefore the total energy is defined as \begin{equation}\label{eq:energy} E({\bf q},{\bf v}) := U({\bf q}) + K({\bf v};{\bf q}) = \tilde U({\bf q}) + K_0({\bf v};{\bf q}) \end{equation} where we denote $\tilde U({\bf q}):=U({\bf q}) -\frac12 \log|{\bf G}({\bf q}_{-D})|=-\log f({\bf q}) +\log|q_D|$, and $K_0({\bf v};{\bf q}):=\frac12 \tp {\bf v}_{-D} {\bf G}({\bf q}_{-D}){\bf v}_{-D}=\frac12 \tp {\bf v} {\bf v}$ \citep{lan16}. Therefore the Lagrangian dynamics with above total energy \eqref{eq:energy} is \citep{lan14a} \begin{equation}\label{eq:LD} \begin{aligned} \dot {\bf q}_{-D} & = {\bf v}_{-D}\\ \dot {\bf v}_{-D} & = -\tp {\bf v}_{-D} \vect\Gamma({\bf q}_{-D}) {\bf v}_{-D} - {\bf G}({\bf q}_{-D})^{-1} \nabla_{{\bf q}_{-D}} \tilde U({\bf q}) \end{aligned} \end{equation} where $\vect\Gamma({\bf q}_{-D}) = r^{-2}{\bf G}({\bf q}_{-D})\otimes {\bf q}_{-D}$ is the Christoffel symbols of second kind \citep[see details in][for $r=1$]{lan16}. A splitting technique is applied to yield the following geometric integrator \citep{lan14b,lan16}, which also includes the last coordinates $q_D, v_D$: \begin{equation}\label{eq:sphHMC_prop} \begin{aligned} {\bf v}^- &= {\bf v} - \frac{h}{2} \mathcal P({\bf q}) {\bf g}({\bf q})\\ \begin{bmatrix} {\bf q}' \\ {\bf v}^+ \end{bmatrix} &= \begin{bmatrix} r & 0\\ 0 & \Vert {\bf v}^-\Vert_2 \end{bmatrix} \begin{bmatrix} \cos(\Vert {\bf v}^-\Vert_2r^{-1} h) & \sin(\Vert {\bf v}^-\Vert_2r^{-1} h)\\ - \sin(\Vert {\bf v}^-\Vert_2r^{-1} h) & \cos(\Vert {\bf v}^-\Vert_2r^{-1} h) \end{bmatrix} \begin{bmatrix} r^{-1} & 0\\ 0 & \Vert {\bf v}^-\Vert_2^{-1} \end{bmatrix} \begin{bmatrix} {\bf q} \\ {\bf v}^- \end{bmatrix}\\ {\bf v}' &= {\bf v}^+ - \frac{h}{2} \mathcal P({\bf q}') {\bf g}({\bf q}') \end{aligned} \end{equation} where ${\bf g}({\bf q}):=\nabla_{{\bf q}}\tilde U({\bf q})$, $\mathcal P({\bf q}):={\bf I}_D - r^{-2}{\bf q} \tp{\bf q}$. \eqref{eq:sphHMC_prop} defines a mapping $\mathcal T_h: ({\bf q}, {\bf v})\mapsto ({\bf q}',{\bf v}')$.
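The geometric integrator can be coded in a few lines. In the sketch below the potential gradient is an arbitrary placeholder, and we check the two invariants that matter for correctness: the proposal stays on the sphere and the velocity stays in the tangent space.

```python
import numpy as np

def sph_step(q, v, grad, h, r=1.0):
    """One geometric-integrator step T_h: (q, v) -> (q', v') on S^{D-1}(r)."""
    P = lambda x: np.eye(len(x)) - np.outer(x, x) / r**2    # tangent-space projector
    v_minus = v - 0.5 * h * P(q) @ grad(q)                  # half-step on velocity
    nv = np.linalg.norm(v_minus)
    a = nv * h / r                                          # rotation angle along the great circle
    q_new = np.cos(a) * q + np.sin(a) * (r / nv) * v_minus
    v_plus = -np.sin(a) * (nv / r) * q + np.cos(a) * v_minus
    return q_new, v_plus - 0.5 * h * P(q_new) @ grad(q_new)

rng = np.random.default_rng(7)
D = 6
q = rng.standard_normal(D); q /= np.linalg.norm(q)
v = (np.eye(D) - np.outer(q, q)) @ rng.standard_normal(D)   # tangent initial velocity
grad = lambda x: x + 1.0                                    # placeholder gradient of U-tilde
q1, v1 = sph_step(q, v, grad, h=0.1)
```

Because the rotation acts in the plane spanned by ${\bf q}$ and ${\bf v}^-$, the norm constraint is preserved exactly rather than approximately.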
Denote $\Vert {\bf u}\Vert_{\mathcal P({\bf q})}^2:= \tp {\bf u} \mathcal P({\bf q}) {\bf u}$. After applying such an integrator $T$ times, a proposal $({\bf q}_T, {\bf v}_T)=\mathcal T_h^T({\bf q}_0, {\bf v}_0)$ is accepted with the following probability \begin{equation}\label{eq:sphHMC_acpt} \begin{aligned} a_{sphHMC} =& 1 \wedge \exp(-\Delta E)\\ \Delta E =& \tilde U({\bf q}_T) - \tilde U({\bf q}_0) - \frac{h^2}{8} \left[ \Vert {\bf g}({\bf q}_T) \Vert_{\mathcal P({\bf q})}^2 - \Vert {\bf g}({\bf q}_0) \Vert_{\mathcal P({\bf q})}^2 \right] \\ & - \frac{h}{2} \left[ \langle {\bf v}_0, {\bf g}({\bf q}_0)\rangle + \langle {\bf v}_T, {\bf g}({\bf q}_T)\rangle \right] - h \sum_{\tau=1}^{T-1} \langle {\bf v}_{\tau}, {\bf g}({\bf q}_{\tau})\rangle \end{aligned} \end{equation} We can prove the following limiting result \citep{beskos11}. \begin{thm} \label{thm:energy_conserv} As $h\to 0$, we have the following energy conservation \begin{equation*} E({\bf q}(T), {\bf v}(T)) - E({\bf q}(0), {\bf v}(0)) = \tilde U({\bf q}(T)) - \tilde U({\bf q}(0)) - \int_0^{T} \langle {\bf v}(t), {\bf g}({\bf q}(t))\rangle dt = 0 \end{equation*} \end{thm} \begin{proof} See Section C in the supplementary file. \end{proof} \subsection{Adaptive Spherical HMC} There are two tuning parameters in HMC and its variants: the step size $h$ and the number of integration (leapfrog) steps $T$. Hand tuning heavily relies on domain expertise and could be inefficient. Here, we adopt the `No-U-Turn' idea from \cite{hoffman14} and introduce a novel adaptive algorithm that obviates manual tuning of these parameters. First, for any given step size $h$, we adopt a rule for setting the number of leapfrog steps based on the same philosophy as `No-U-Turn' \citep{hoffman14}. The idea is to avoid wasted computation (e.g. when the sampler backtracks on its trajectory) without breaking the detailed balance condition for the MCMC transition kernel.
$\mathcal S^{D-1}(r)$ is a compact manifold on which any two points ${\bf q}(0), {\bf q}(t)\in \mathcal S^{D-1}(r)$ have geodesic distance bounded by $\pi r$. We stop the leapfrog updates when the sampler exits the orthant of the initial state, that is, when the trajectory measured in geodesic distance is at least $\frac{\pi}{2} r$, which is equivalent to $\langle {\bf q}(0), {\bf q}(t) \rangle < 0$. On the other hand, this condition may not be satisfied within a reasonable number of iterations because the geometric integrator \eqref{eq:sphHMC_prop} does not follow a geodesic (great circle) in general (only the middle part does); therefore we set a threshold $T_{\max}$ for the number of tests, and adopt the following `Two-Orthants' (as the starting and end points occupy two orthants) rule for the number of leapfrog steps: \begin{equation}\label{eq:sphHMC_2orth} T_{2orth} = \min_{\tau\in\{0,\cdots,T_{\max}\}} \{\tau: \langle {\bf q}_0, {\bf q}_{\tau} \rangle < 0\} \end{equation} Alternatively, one can stop the leapfrog steps in a \emph{stochastic} way based on the geodesic distance travelled: \begin{equation}\label{eq:sphHMC_stoch} T_{stoch} = \min_{\tau} \{\tau : Z_{\tau}=0\}, \quad Z_{\tau} \sim \mathrm{Bern}(p_{\tau}), \quad p_{\tau} = \frac{r^{-2} \langle {\bf q}_0, {\bf q}_{\tau} \rangle +1}{2} \end{equation} These stopping criteria are already time reversible, so the recursive binary tree of the `No-U-Turn' algorithm \citep{hoffman14} is no longer needed. Lastly, we adopt the \emph{dual averaging} scheme \citep{nesterov09} for the adaptation of the step size $h$. See \cite{hoffman14} for more details. We summarize our \emph{Adaptive Spherical Hamiltonian Monte Carlo (adp-SphHMC)} in the supplementary file. 
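Both stopping rules are straightforward to implement; a minimal sketch (here `qs` stands for the sequence ${\bf q}_0,{\bf q}_1,\ldots$ produced by the integrator, a hypothetical name):

```python
import numpy as np

def two_orthants_T(qs, T_max):
    """Deterministic rule (eq:sphHMC_2orth): stop once <q_0, q_tau> < 0."""
    q0 = qs[0]
    for tau in range(min(T_max, len(qs) - 1) + 1):
        if q0 @ qs[tau] < 0:
            return tau
    return T_max

def stochastic_T(qs, rng, r=1.0):
    """Stochastic rule (eq:sphHMC_stoch): continue with probability p_tau."""
    q0 = qs[0]
    for tau, q in enumerate(qs):
        p = (q0 @ q / r**2 + 1.0) / 2.0
        if rng.random() >= p:      # Z_tau = 0  ->  stop
            return tau
    return len(qs) - 1
```

In the stochastic rule, $p_\tau$ decreases from $1$ to $0$ as the geodesic distance from ${\bf q}_0$ grows, so the sampler stops almost surely once the antipode is approached.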
To sample ${\bf L}$ (or ${\bf L}_t$), we can update each row vector ${\bf l}_i\in \mathcal S_0^{i-1}$ according to \eqref{eq:sphHMC_prop} (in parallel), and accept/reject $\tp\mathrm{vech}({\bf L})$ (or $\tp\mathrm{vech}({\bf L}_t)$) simultaneously based on \eqref{eq:sphHMC_acpt}, using the sum of the total energies of all components. We refer to the resulting algorithm as \emph{$\Delta$-Spherical HMC ($\Delta$-SphHMC)}. The computational complexity involving the GP prior is $\mathcal O(N^3)$, and that of the likelihood evaluation is $\mathcal O(MD^2)$. MCMC updates of $\widetilde{\vect\mu}_{N\times D}, \widetilde{\vect\sigma}_{N\times D}, \widetilde{\bf L}_{N\times D\times D}$ have complexity $\mathcal O(ND)$, $\mathcal O(ND)$ and $\mathcal O(ND^2)$ respectively. To scale up applications to larger dimension $D$, one could first classify the data into groups, and arrange the corresponding blocks of their covariance/correlation matrix in a `band' along the main diagonal, assuming no correlation among groups. More specifically, we can assume ${\bf L}_t$ is a $w$-band lower triangular matrix for each time $t$, i.e. $l_{ij}=0$ for $i<j$ or $i-j\geq w$; then the resulting covariance/correlation matrix will be $(2w-1)$-banded. In this way the complexity of likelihood evaluation and of updating $\widetilde{\bf L}$ will be reduced to $\mathcal O(MwD)$ and $\mathcal O(NwD)$ respectively. Therefore the total computational cost would scale linearly with the dimension $D$. This technique will be investigated in Section \ref{sec:scalability}. \section{Simulation Studies}\label{sec:numerics} In this section, we use simulated examples to illustrate the advantages of our structured models for covariance. First, we consider the normal-inverse-Wishart problem. Since there is conjugacy and we know the true posterior, we use this problem to verify our method and to investigate flexible priors in Section \ref{sec:priors}. 
Then we test our dynamical modeling method in Section \ref{sec:dynamic} on a periodic process model. Our model manifests full flexibility compared to a state-of-the-art nonparametric covariance regression model based on a latent factor process \citep{fox15}. \subsection{Normal-inverse-Wishart Problem} Consider the following example involving an inverse-Wishart prior \begin{equation}\label{eq:conj-iwishart} \begin{aligned} {\bf y}_n|\vect\varSigma & \sim \mathcal N(\vect\mu_0,\vect\varSigma), \quad n=1,\cdots,N\\ \vect\varSigma & \sim \mathcal W^{-1}_D(\vect\Psi, \nu) \end{aligned} \end{equation} It is known that the posterior of $\vect\varSigma|{\bf Y}$ is still an inverse-Wishart distribution: \begin{equation}\label{eq:post-iwishart} \vect\varSigma|{\bf Y} \sim \mathcal W^{-1}_D (\vect\Psi + ({\bf Y}-\vect\mu_0)\tp{({\bf Y}-\vect\mu_0)}, \nu+N), \qquad {\bf Y}=\tp{[{\bf y}_1,\cdots, {\bf y}_N]} \end{equation} We consider dimension $D=3$ and generate data ${\bf Y}$ with $\vect\mu_0={\bf 0}, \quad \vect\varSigma=\vect\varSigma_0=\frac{1}{11} ({\bf I} + {\bm 1}\tp{\bm 1})$ for $N=20$ data points so that the prior is not overwhelmed by the data. \begin{figure} \caption{Marginal posterior densities of $\sigma_{ij}$ in the normal-inverse-Wishart problem. Solid blue lines are estimates by $\Delta$-SphHMC and dashed red lines are estimates by direct sampling. All densities are estimated with $10^6$ samples.} \label{fig:post_bartlett} \end{figure} \subsubsection{Verification of Validity}\label{sec:validity} Specifying conditional priors based on \eqref{eq:jtpri4iwishart} in the structured model \eqref{eq:strt_corr}, we want to check the validity of our proposed method by comparing the posterior estimates obtained by $\Delta$-SphHMC against the truth \eqref{eq:post-iwishart}. We sample $\vect\tau:=\log(\vect\sigma)$ using standard HMC and ${\bf U}^*$ using $\Delta$-SphHMC. They are updated in a Metropolis-within-Gibbs scheme. 
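The exact posterior \eqref{eq:post-iwishart} is what makes this a good test case, since it can be sampled directly. A minimal sketch of such a direct sampler in SciPy (with the same $D$, $N$, $\vect\varSigma_0$ as above; the hyperparameters $\vect\Psi={\bf I}$, $\nu=D+2$ are illustrative assumptions):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)
D, N = 3, 20
nu, Psi = D + 2, np.eye(D)                      # illustrative prior hyperparameters
mu0 = np.zeros(D)
Sigma0 = (np.eye(D) + np.ones((D, D))) / 11.0   # true covariance, as in the text
Y = rng.multivariate_normal(mu0, Sigma0, size=N)

# conjugate update: Sigma | Y ~ W^{-1}(Psi + (Y - mu0)'(Y - mu0), nu + N)
scale = Psi + (Y - mu0).T @ (Y - mu0)
post = invwishart(df=nu + N, scale=scale)
draws = post.rvs(size=10000, random_state=rng)  # direct posterior samples

# sanity check against the analytic posterior mean scale / (df - D - 1)
analytic_mean = scale / (nu + N - D - 1)
```

Kernel density estimates from `draws` play the role of the dashed red reference curves against which the $\Delta$-SphHMC estimates are compared.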
We collect $10^6$ samples after discarding the first $10\%$ as burn-in and keeping $1$ of every $10$. For each sample of $\vect\tau$ and $\mathrm{vech}({\bf U}^*)$, we calculate $\vect\varSigma=\diag(e^{\vect\tau}){\bf U}^*\tp{({\bf U}^*)}\diag(e^{\vect\tau})$. Marginal densities of the entries of $\vect\varSigma$ are estimated with these samples and plotted against the results of direct sampling in Figure \ref{fig:post_bartlett}. Despite sampling variance, these estimates closely match the results of direct sampling, indicating the validity of our proposed method. \begin{figure} \caption{Marginal posterior, prior (induced from the squared-Dirichlet distribution) densities of correlations and MLEs with different settings of the concentration parameter $\vect\alpha$, estimated with $10^6$ samples.} \label{fig:post_corr_asqdirpri} \end{figure} \subsubsection{Examining Flexibility of Priors} \label{sec:prieff} We have studied several spherical priors for the Cholesky factor of the correlation matrix proposed in Section \ref{sec:priors}. Now we examine the flexibility of these priors in providing prior information for correlation under various parameter settings. With the same data generated according to \eqref{eq:conj-iwishart}, we now consider the squared-Dirichlet prior \eqref{eq:sqDir_density} for ${\bf L}$ in the structured model \eqref{eq:strt_corr} with the following setting \begin{equation}\label{eq:nonconj-iwishart} \begin{aligned} \tau_i=\log(\sigma_i) &\sim \mathcal N(0,0.1^2), \quad i=1,\cdots, D\\ {\bf l}_i &\sim \mathrm{Dir}^2 (\vect\alpha_i), \quad \vect\alpha_i = (\alpha {\bm 1}_{i-1},\alpha_0), \quad i=2,\cdots, D \end{aligned} \end{equation} where we consider three cases: i) $\alpha =1,\,\alpha_0=1$; ii) $\alpha =0.1,\,\alpha_0=1$; iii) $\alpha =0.1,\,\alpha_0=10$. We generate $10^6$ prior samples (according to \eqref{eq:nonconj-iwishart}) and posterior samples (by $\Delta$-SphHMC) for ${\bf L}$ respectively and convert them to $\vect\mathrm{P}={\bf L}\tp{\bf L}$. 
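Sampling the squared-Dirichlet prior and converting ${\bf L}$ to $\vect\mathrm{P}$ can be sketched as follows (an illustrative implementation: we assume the squared entries of each row are jointly Dirichlet, attach random signs to all but the last coordinate, and keep the diagonal entry positive, as for a Cholesky factor):

```python
import numpy as np

def sample_row(alpha, rng):
    """One row l_i with squared entries ~ Dir(alpha), lying on a unit sphere."""
    z = rng.dirichlet(alpha)
    signs = rng.choice([-1.0, 1.0], size=len(alpha))
    signs[-1] = 1.0                      # positive diagonal entry (assumption)
    return signs * np.sqrt(z)

def sample_correlation(D, a, a0, rng):
    """Draw L row by row and return P = L L^T, a correlation matrix."""
    L = np.eye(D)
    for i in range(1, D):
        L[i, :i + 1] = sample_row(np.r_[a * np.ones(i), a0], rng)
    return L @ L.T

rng = np.random.default_rng(3)
P = sample_correlation(4, a=0.1, a0=10.0, rng=rng)
```

With $\alpha_0 \gg \alpha$ (case iii above), the prior mass piles up on the last coordinate of each row, so $\vect\mathrm{P}$ concentrates near the identity, matching the near-zero prior correlations seen in Figure \ref{fig:post_corr_asqdirpri}.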
For each entry $\rho_{ij}$, we estimate the marginal posterior (prior) density based on these posterior (prior) samples. The posteriors, priors and maximum likelihood estimates (MLEs) of the correlations $\rho_{ij}$ are plotted in Figure \ref{fig:post_corr_asqdirpri} for the different $\vect\alpha$'s respectively. In general, the posteriors are a compromise between the priors and the likelihoods (MLEs). With more and more weight (through $\vect\alpha$) put around the poles (last component) of each factor sphere, the priors become so dominant that the posteriors (red dashed lines) almost fall on the priors (blue solid lines) when $\vect\alpha=(0.1,0.1,10)$. In this extreme case, the squared-Dirichlet distributions induce priors in favor of trivial (zero) correlations. We reach similar conclusions for the Bingham prior and the von Mises-Fisher prior; those results are reported in Section E.1 of the supplementary file. \subsection{Simulated Periodic Processes} In this section, we investigate the performance of our dynamic model \eqref{eq:strt_dyn_model} on the following periodic process example \begin{equation}\label{eq:periodic} \begin{aligned} y(t) &\sim \mathcal N_D(\mu(t),\varSigma(t)), \quad \varSigma(t)=L(t)\tp{L(t)} \circ S, \quad t \in [0,2] \\ \mu_i(t) &= \sin(it\pi/D), \quad L_{ij}(t) = (-1)^i\sin(it\pi/D) (-1)^j\cos(jt\pi/D), \quad j\leq i=1,\cdots, D,\\ &\phantom{= \sin(it\pi/D), \;}\quad S_{ij} = (|i-j|+1)^{-1}, \quad i,j=1,\cdots, D \end{aligned} \end{equation} Based on the model \eqref{eq:periodic}, we generate $M$ trials (process realizations) of data $y$ at $N$ evenly spaced points for $t$ in $[0,2]$; the whole data set $\{y(t)\}$ is therefore an $M\times N\times D$ array. We first consider $D=2$ to investigate the posterior contraction phenomena and the model flexibility; then we consider $D=100$ over a shorter period $[0,1]$ to show the scalability using the `$w$-band' structure. 
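Generating data from \eqref{eq:periodic} is direct; a sketch (the grid over $(0,2]$ and the small jitter added before sampling are implementation assumptions, the latter because $\varSigma(t)$ degenerates at $t=0$):

```python
import numpy as np

def periodic_model(t, D):
    """Mean and covariance of the periodic model (eq:periodic) at time t."""
    i = np.arange(1, D + 1)
    mu = np.sin(i * t * np.pi / D)
    f = (-1.0) ** i * np.sin(i * t * np.pi / D)
    g = (-1.0) ** i * np.cos(i * t * np.pi / D)
    L = np.tril(np.outer(f, g))                       # L_ij = f_i g_j for j <= i
    S = 1.0 / (np.abs(np.subtract.outer(i, i)) + 1.0)
    return mu, (L @ L.T) * S                          # Hadamard product with S

def simulate(M, N, D, rng):
    """M trials at N evenly spaced time points: an M x N x D data array."""
    ts = np.linspace(2.0 / N, 2.0, N)                 # N points over (0, 2]
    Y = np.empty((M, N, D))
    for n, t in enumerate(ts):
        mu, Sigma = periodic_model(t, D)
        Y[:, n, :] = rng.multivariate_normal(mu, Sigma + 1e-8 * np.eye(D), size=M)
    return Y
```

Note that $\varSigma(t)$ is positive semi-definite by construction: $L\tp{L}$ is PSD and so is $S$, hence so is their Schur (Hadamard) product.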
\begin{figure} \caption{Estimation of the underlying mean functions $\mu_t$ (left in each of 4 subpanels) and covariance functions $\varSigma_t$ (right in each of 4 subpanels) of 2-dimensional periodic processes. $M$ is the number of trials, and $N$ is the number of discretization points. Dashed lines are true values, solid lines are estimates and shaded regions are $95\%$ credible bands.} \label{fig:periodic_postcntrt} \end{figure} \subsubsection{Posterior Contraction}\label{sec:pc_sim} Posterior contraction describes the phenomenon that the posterior concentrates on smaller and smaller neighborhoods of the true parameter (function) given more and more data \citep{vanderVaart08a}. We investigate this phenomenon for both the mean functions and the covariance functions in our model \eqref{eq:strt_dyn_model} using the following settings: $i) M=10, N=20; \; ii) M=100, N=20; \; iii) M=10, N=200; \; iv) M=100, N=200$. To fit the data using the model \eqref{eq:strt_dyn_model}, we set $s=2$, $a=(1,1,1)$, $b=(0.1,10^{-3},0.2)$, $m=(0,0,0)$ for all settings, $V=(1,0.5,1)$ for $N=20$ and $V=(1,1,0.3)$ for $N=200$. We also add an additional nugget of $10^{-5}I_n$ to all the covariance kernels of the GPs to ensure non-degeneracy. Following the procedure in Section \ref{sec:sample_procedure}, we run MCMC for $1.5\times 10^5$ iterations, discard the first $5\times 10^4$ as burn-in and subsample 1 of every 10. Based on the resulting $10^4$ posterior samples, we estimate the underlying mean functions and covariance functions and plot the estimates in Figure \ref{fig:periodic_postcntrt}. Note in Figure \ref{fig:periodic_postcntrt} that both $M$ and $N$ affect the amount of data information, and hence the posterior contraction, but the contraction rate may depend on them differently. Both mean and covariance functions have narrower credible bands for more discretization points $N$ (compare $N=20$ in the first row with $N=200$ in the second row). 
On the other hand, both posteriors contract further with more trials $M$ (compare $M=10$ in the first column against $M=100$ in the second column). In general the posterior of the mean function contracts to the truth faster than the posterior of the covariance function. With $M=100$ trials and $N=200$ discretization points, both mean and covariance functions are almost recovered by the model \eqref{eq:strt_dyn_model}. \begin{figure} \caption{Estimation of the underlying mean functions $\mu_t$ (left column), variance functions $\sigma_t$ (middle column) and correlation function $\rho_t$ (right column) of 2-dimensional periodic processes, using the latent factor process model (upper row) and our flexible model (lower row), based on $M=10$ trials of data over $N=200$ evenly spaced points. Dashed lines are true values, solid lines are estimates and shaded regions are $95\%$ credible bands.} \label{fig:periodic_constrast} \end{figure} \subsubsection{Full Flexibility}\label{sec:flexibility} Our method \eqref{eq:strt_dyn_model} grants full flexibility because it models the mean, variance and correlation processes separately. This is particularly useful if they behave differently. It contrasts with latent-factor-based models that tie the mean and covariance processes together. One of the state-of-the-art models of this type is Bayesian nonparametric covariance regression \citep{fox15}: \begin{equation}\label{eq:lpf_bncr} y(x) \sim \mathcal N_D(\mu(x),\varSigma(x)), \qquad \mu(x) = \Lambda(x) \psi(x), \quad \varSigma(x) = \Lambda(x) \tp{\Lambda(x)} + \varSigma_0 \end{equation} We tweak the simulated example \eqref{eq:periodic} for $D=2$ to let the mean and correlation processes have higher frequency than the variance processes, as shown by the dashed lines in Figure \ref{fig:periodic_constrast}. We generate $M=10$ trials of data over $N=200$ evenly spaced points. 
In this case, the true mean processes $\mu(x)$ and true covariance processes $\varSigma(x)$ behave differently but are modeled with a common loading matrix $\Lambda(x)$ in model \eqref{eq:lpf_bncr}. This makes it difficult for \eqref{eq:lpf_bncr} to have a latent factor process $\psi(x)$ that can properly accommodate the heterogeneity of the mean and covariance processes. Figure \ref{fig:periodic_constrast} shows that, for this reason, the latent-factor-based model \eqref{eq:lpf_bncr} (upper row) fails to generate a satisfactory fit for any of the mean, variance and correlation processes. Our fully flexible model \eqref{eq:strt_dyn_model} (bottom row), on the contrary, successfully produces a more accurate characterization of all of them. Note that this artificial example is used to demonstrate the flexibility of our dynamic model \eqref{eq:strt_dyn_model}. For cases that are not as extreme, \eqref{eq:lpf_bncr} may perform equally well. See more discussion in Section \ref{sec:conclusion} and more details in Section E.2 of the supplementary file. \begin{figure} \caption{Posterior estimation of the underlying correlation functions $P_t$ (left) and its 2-norm distance to the truth (right) of 100-dimensional periodic processes with 2-band structure, based on $M=100$ trials of data over $N=100$ discretization points. Dashed lines are true values, solid lines are estimates and shaded regions are $95\%$ credible bands.} \label{fig:periodic_wbd_post} \end{figure} \subsubsection{Scalability}\label{sec:scalability} Now we use the same simulation model \eqref{eq:periodic} with $D=100$ dimensions to test the scalability of our dynamic model \eqref{eq:strt_dyn_model}. However, instead of the full covariance, we consider only a diagonal covariance matrix plus 4 non-zero off-diagonal entries $\sigma_{1,2}$ ($\sigma_{2,1}$) and $\sigma_{99,100}$ ($\sigma_{100,99}$). We focus on the correlation process in this example and thus set $\mu_t\equiv 0$ and $\sigma_t\equiv 1$ for $t\in[0,1]$. 
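The banding fact exploited in this setting — a $w$-band lower triangular Cholesky factor yields a $(2w-1)$-banded covariance/correlation matrix — can be checked numerically in a few lines (with a random banded factor, purely illustrative):

```python
import numpy as np

def w_band_lower(D, w, rng):
    """Random lower triangular L with l_ij = 0 for i < j or i - j >= w."""
    L = np.tril(rng.normal(size=(D, D)))
    for i in range(D):
        L[i, :max(0, i - w + 1)] = 0.0
    return L

rng = np.random.default_rng(5)
D, w = 8, 2
L = w_band_lower(D, w, rng)
Sigma = L @ L.T
i, j = np.indices(Sigma.shape)
# entries vanish whenever |i - j| >= w, i.e. Sigma is (2w-1)-banded
assert np.allclose(Sigma[np.abs(i - j) >= w], 0.0)
```

The reason is that $\varSigma_{ij}=\sum_k l_{ik}l_{jk}$ can only be non-zero if some $k$ lies in both $(i-w,i]$ and $(j-w,j]$, which is impossible once $|i-j|\geq w$.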
More specifically, when generating data $\{y_t\}$ with \eqref{eq:periodic}, for $i\notin\{2,100\}$ we set the $i$-th rows $L_i=S_i=e_i$, with $e_i$ being the $i$-th row of the identity matrix. To apply our dynamical model \eqref{eq:strt_dyn_model} in this setting, we let ${\bf L}_t$ have the `$w$-band' structure with $w=2$ at each time $t$. Setting $s=2$, $a=1$, $b=0.1$, $m=0$ and $V=10^{-3}$, $N=100$ and $M=100$, we repeat the MCMC runs for $7.5\times 10^4$ iterations, discard the first $2.5\times 10^4$ as burn-in and subsample 1 of every 10 to obtain $5\times10^3$ posterior samples in the end. Based on those samples, we estimate the underlying correlation functions and plot only $\rho_{1,2}$, $\rho_{49,50}$ and $\rho_{99,100}$ in Figure \ref{fig:periodic_wbd_post}. With the `$w$-band' structure, we have fewer entries in the covariance matrix and focus on the `in-group' correlation. Our dynamical model \eqref{eq:strt_dyn_model} is sensitive enough to discern the informative non-zero components from the non-informative ones in these correlation functions. Unit-vector GP priors provide the flexibility for the model to capture the changing pattern of the informative correlations. The left panel of Figure \ref{fig:periodic_wbd_post} shows that the model \eqref{eq:strt_dyn_model} correctly identifies the non-zero components $\rho_{1,2}$ and $\rho_{99,100}$ and characterizes their evolution. The right panel shows that the 2-norm distance between the estimated and true correlation matrices, $\Vert \widehat P(t)-P(t)\Vert_2$, is small, indicating that our dynamic model \eqref{eq:strt_dyn_model} performs well in higher dimensions in estimating complex dependence structure among multiple stochastic processes. \section{Analysis of Local Field Potential Activity}\label{sec:lfp} Now we use the proposed model \eqref{eq:strt_dyn_model} to analyze a local field potential (LFP) activity dataset. 
The goal of this analysis is to elucidate how memory encoding, retrieval and decision-making arise from functional interactions among brain regions, by modeling how their dynamic connectivity varies during the performance of complex memory tasks. Here we focus on LFP activity data recorded from 24 electrodes spanning the dorsal CA1 subregion of the hippocampus as rats performed a sequence memory task \citep{allen14,allen16,ng17,holbrook17}. The task involves repeated presentations of a sequence of odors (e.g., ABCDE) at a single port and requires rats to correctly determine whether each odor is presented `in sequence' (InSeq; e.g., ABCDE; by holding their nosepoke response until the signal at 1.2s) or `out of sequence' (OutSeq; e.g., AB\underline{D}DE; by withdrawing their nose before the signal). In previous work using the same dataset, \cite{holbrook16} used a direct MCMC algorithm to study the spectral density matrix of LFP from 4 selected channels. However, they did not examine how the correlations varied across time and recording site. These limitations are addressed in this paper. \begin{figure} \caption{LFP signals on ``in sequence" and ``out of sequence" trials. It is difficult to identify differences between the two conditions based on a mere visual inspection of the LFPs.} \label{fig:LFP_raw_data} \end{figure} We focus our analyses on the time window from 0ms to 750ms (with 0 corresponding to when the rat's nose enters the odor port). Critically, this includes a time period during which the behavior of the animal is held constant (0--500ms), so differences in LFP reflect the cognitive processes associated with task performance, and, to serve as a comparison, a time period near 750ms during which the behavioral state of the animal is known to be different (i.e., by 750ms the animal has already withdrawn from the port on the majority of OutSeq trials, but is still in the port on InSeq trials). 
We also focus our analyses on two sets of adjacent electrodes (electrodes 20 and 22, and electrodes 8 and 9), which allows for comparisons between probes that are near each other ($<$1mm; i.e., 20:22 and 8:9) or more distant from each other ($>$2mm; i.e., 20:8, 20:9, 22:8, and 22:9). Figure \ref{fig:LFP_raw_data} shows $M = 20$ trials of these LFP signals from $D = 4$ channels under both InSeq and OutSeq conditions. Our main objective is to quantify how the correlations among these LFP channels varied across trial types (InSeq vs OutSeq) and over time (within the first 750ms of trials). To do so, we discretize the time window of $0.75$ seconds into $N = 300$ equally spaced small intervals. Under each experimental condition (InSeq or OutSeq), we treat all the signals as a 4-dimensional time series and fit them using our proposed dynamic correlation model \eqref{eq:strt_dyn_model} in order to discover the evolution of their relationship. Note that we model the mean, variance, and correlation processes separately but only report findings about the evolution of the correlation among those brain signals. \begin{figure} \caption{Estimated correlation processes of LFPs (beta) under the in-sequence condition (top), the out-of-sequence condition (middle) and the (Frobenius) distance between the two correlation matrices (bottom).} \label{fig:LFP_est_beta} \end{figure} We set $s=2$, $a=(1,1,1)$, $b=(1,0.1,0.2)$, $m=(0,0,0)$, $V=(1,1.2,2)$; the general results are not very sensitive to the choice of these fine-tuning parameters. We also rescale the discretized time points into $(0,1]$ and add an additional nugget of $10^{-5}I_n$ to the covariance kernels of the GPs. We follow the same procedure as in Section \ref{sec:sample_procedure} to collect $7.5\times 10^4$ samples, discard the first $2.5\times 10^4$ as burn-in and subsample 1 of every 10. 
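The dissimilarity reported in the bottom panel of Figure \ref{fig:LFP_est_beta} is simply the Frobenius distance between the two estimated correlation matrices at each time point; a minimal sketch (`P_in` and `P_out` stand for the $N\times D\times D$ arrays of estimated InSeq/OutSeq correlation matrices, hypothetical names):

```python
import numpy as np

def dissimilarity_curve(P_in, P_out):
    """Frobenius distance ||P_in(t) - P_out(t)||_F at each time point.

    P_in, P_out: arrays of shape (N, D, D), correlation matrices over time."""
    return np.linalg.norm(P_in - P_out, axis=(1, 2))
```

Passing a tuple of axes to `np.linalg.norm` computes the matrix Frobenius norm over those axes, so the result is a length-$N$ curve.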
The resulting $10^4$ samples yield estimates of the correlation processes as shown in Figure \ref{fig:LFP_est_beta} for beta-filtered traces (20--40Hz); similar patterns were also observed for theta-filtered traces (4--12Hz; see the supplement). The bottom panel of Figure \ref{fig:LFP_est_beta} shows the dissimilarity between the correlation processes under the two conditions, measured by the Frobenius norm of their difference. Our approach revealed many important patterns in the data. First, it showed that electrodes near each other (20:22 and 8:9) displayed remarkably high correlations in their LFP activity on InSeq and OutSeq trials, whereas correlations were considerably lower among more distant electrodes (20:8, 20:9, 22:8, and 22:9). Second, it revealed that the difference between the InSeq and OutSeq correlation matrices evolved during the presentation of individual trials. These results are consistent with other analyses on learning \citep[see, e.g.,][]{fiecas16}. As expected, InSeq and OutSeq activity was very similar at the beginning of the time window (e.g., before 350ms), which is before the animal has any information about the InSeq or OutSeq status of the presented odor, but maximally different at the end of the time window, which is after it has made its response on OutSeq trials. Most important, however, is the discovery of InSeq vs OutSeq differences before 500ms, which reveal changes in neural activity associated with the complex cognitive process of identifying whether events occurred in their expected order. These findings highlight the sensitivity of our novel approach, as such differences have not been detected with traditional analyses. Interested readers can find more results about all 12 channels in Section E.3 of the supplementary file. \section{Conclusion}\label{sec:conclusion} In this paper, we propose a novel Bayesian framework that grants full flexibility in modeling covariance and correlation matrices. 
It extends the separation strategy proposed by \cite{barnard00} and uses the Cholesky decomposition to maintain the positive definiteness of the correlation matrix. By defining distributions on spheres, a large class of flexible priors can be induced for the covariance matrix that go beyond the commonly used but restrictive inverse-Wishart distribution. Furthermore, the structured models we propose maintain the interpretability of covariance in terms of variance and correlation. Adaptive $\Delta$-Spherical HMC is introduced to handle the intractability of the resulting posterior. We further extend this structured scheme to dynamical models to capture complex dependence among multiple stochastic processes, and demonstrate the effectiveness and efficiency of Bayesian modeling of covariance and correlation matrices using a normal-inverse-Wishart problem, a simulated periodic process, and an analysis of LFP data. In addition, we provide both a theoretical characterization and an empirical investigation of posterior contraction for dynamic covariance modeling, which, to the best of our knowledge, is a first attempt. In this work, we consider the \emph{marginal (pairwise)} dependence among multiple stochastic processes. The priors for the correlation matrix specified through the sphere-product representation are in general dependent among the component variables. For example, the method we use to induce an uncorrelated prior between $y_i$ and $y_j$ ($i<j$) by setting $l_{jk} \approx 0$ for $k\leq i$ has the direct consequence that $\mathrm{Cor}(y_{i'}, y_j)\approx 0$ for $i'\leq i$. In other words, more informative priors (in which some of the components are correlated) may require careful ordering of $\{y_i\}$. To avoid this issue, one might consider the inverse of the covariance (precision) matrix instead. This leads to modeling the \emph{conditional} dependence, or \emph{Markov network} \citep{dempster72,friedman08}. 
Our proposed methodology applies directly to (dynamic) precision matrices/processes, which will be our future direction. To further scale our method to problems of greater dimensionality in the future, one could explore the low-rank structure of covariance and correlation matrices, e.g. by adopting a factorization similar to that of \citep{fox15} and assuming $\tp\mathrm{vech}({\bf L}_t)\in (\mathcal S^k)^D$ for some $k\ll D$, or impose some sparse structure on the precision matrices. We have proved that the posterior of the covariance function contracts at a rate given by the general form of the concentration function \citep{vanderVaart08a}. Empirical evidence (Section \ref{sec:pc_sim}) shows that the posterior of the covariance contracts more slowly than that of the mean. More theoretical work is needed to compare their contraction rates. Also, future research could involve investigating posterior contraction in covariance regression with respect to the optimal rates under different GP priors. While our research has generated interesting new findings regarding brain signals during memory tasks, one limitation of our current analysis of the LFP data is that it is conducted on a single rat. The proposed model can be generalized to account for variation among rats. In the future, we will apply this sensitive approach to other datasets, including simultaneous LFP recordings from multiple brain regions in rats as well as BOLD fMRI data collected from human subjects performing the same task. 
\begin{center} {\large\bf SUPPLEMENTARY MATERIAL} \end{center} \appendix \numberwithin{equation}{section} \numberwithin{figure}{section} \numberwithin{lem}{section} \numberwithin{thm}{section} \numberwithin{dfn}{section} \numberwithin{rk}{section} \section{Connection to Known Priors}\label{apx:conn2knownpriors} The following lemma is essential in proving that our proposed methods \eqref{eq:strt_cov} \eqref{eq:strt_corr} generalize existing methods of specifying priors, including the inverse-Wishart distribution as well as two uniform distributions \citep{barnard00}. \begin{lem}\label{lem:chol_jacobians} Let $\vect\varSigma = {\bf U} \tp{\bf U}$ be the reversed Cholesky decomposition of $\vect\varSigma$. The Jacobian of the transformation ${\bf U}\mapsto \vect\varSigma$ is \begin{equation*} \left| \frac{d_h \vect\varSigma}{d_h \tp{\bf U}} \right| := \left| \frac{\partial \mathrm{vech} \vect\varSigma}{\partial \mathrm{vech} \tp{\bf U}} \right| = 2^D \prod_{i=1}^D |u_{ii}|^i \end{equation*} Let $\vect\mathrm{P}={\bf L}\tp{\bf L}$ be the Cholesky decomposition of $\vect\mathrm{P}$. 
The Jacobian of the transformation $\vect\mathrm{P}\mapsto {\bf L}$ is \begin{equation*} \left| \frac{d_h {\bf L}}{d_h \vect\mathrm{P}} \right| := \left| \frac{\partial \mathrm{vech} {\bf L}}{\partial \mathrm{vech} \vect\mathrm{P}} \right| = 2^{-D} \prod_{i=1}^D |l_{ii}|^{i-(D+1)} \end{equation*} \end{lem} \begin{proof} Note that we have \begin{equation*} d \vect\varSigma = d {\bf U}\tp{\bf U} + {\bf U} d\tp{\bf U} \end{equation*} Taking $\mathrm{vec}$ on both sides and applying its properties, \begin{equation*} d \mathrm{vec}\vect\varSigma = ({\bf U} \otimes {\bf I}) d \mathrm{vec}{\bf U} + ({\bf I} \otimes {\bf U}) d\mathrm{vec}\tp{\bf U} \end{equation*} Applying the elimination matrix ${\bf L}_D$ on both sides, \begin{equation*} \begin{aligned} d \mathrm{vech}\vect\varSigma &= {\bf L}_D [ ({\bf U} \otimes {\bf I}) {\bf K}_D d \mathrm{vec}\tp{\bf U} + ({\bf I} \otimes {\bf U}) d\mathrm{vec}\tp{\bf U} ] = {\bf L}_D ({\bf K}_D + {\bf I}) ({\bf I} \otimes {\bf U}) d\mathrm{vec} \tp{\bf U}\\ &= 2 {\bf L}_D {\bf N}_D ({\bf I} \otimes {\bf U}) \tp{\bf L}_D d\mathrm{vech} \tp{\bf U} = 2 {\bf L}_D {\bf N}_D \tp{\bf L}_D \tp{\bf D}_D ({\bf I} \otimes {\bf U}) \tp{\bf L}_D d\mathrm{vech} \tp{\bf U} \end{aligned} \end{equation*} where ${\bf K}_D$ is the \emph{commutation matrix} such that ${\bf K}_D \mathrm{vec}{\bf A}=\mathrm{vec}\tp{\bf A}$ for a matrix ${\bf A}_{D\times D}$, ${\bf N}_D:=({\bf K}_D+{\bf I})/2$, and ${\bf D}_D$ is the \emph{duplication matrix}, which serves as the inverse of the elimination matrix ${\bf L}_D$. The last equation follows from ${\bf D}_D {\bf L}_D {\bf N}_D = {\bf N}_D = \tp{\bf N}_D$ \citep[Lemmas 2.1 and 3.5 in][]{magnus80}. 
Thus according to \cite[Lemmas 3.4 and 4.1 in][]{magnus80} we have \begin{equation*} \begin{aligned} \left| \frac{d_h \vect\varSigma}{d_h \tp{\bf U}} \right| &= \left| \frac{\partial \mathrm{vech} \vect\varSigma}{\partial \mathrm{vech} \tp{\bf U}} \right| = |2 {\bf L}_D {\bf N}_D \tp{\bf L}_D \tp{\bf D}_D ({\bf I} \otimes {\bf U}) \tp{\bf L}_D|\\ &= 2^{D(D+1)/2} |{\bf L}_D {\bf N}_D \tp{\bf L}_D| |{\bf L}_D ({\bf I} \otimes \tp{\bf U}) {\bf D}_D| = 2^D \prod_{i=1}^D |u_{ii}|^i \end{aligned} \end{equation*} By a similar argument, we have \begin{equation*} \begin{aligned} \left| \frac{d_h \vect\mathrm{P}}{d_h {\bf L}} \right| &= \left| \frac{\partial \mathrm{vech} \vect\mathrm{P}}{\partial \mathrm{vech} {\bf L}} \right| = |2 {\bf L}_D {\bf N}_D \tp{\bf L}_D \tp{\bf D}_D ({\bf L} \otimes {\bf I}) \tp{\bf L}_D|\\ &= 2^{D(D+1)/2} |{\bf L}_D {\bf N}_D \tp{\bf L}_D| |{\bf L}_D (\tp{\bf L} \otimes {\bf I}) {\bf D}_D| = 2^D \prod_{i=1}^D |l_{ii}|^{D+1-i} \,. \end{aligned} \end{equation*} This completes the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:conn_iWishart}] We know that the density of $\vect\varSigma\sim \mathcal W^{-1}_D(\vect\Psi,\nu)$ is \begin{equation*} p_{\mathcal W^{-1}}(\vect\varSigma\, ; \vect\Psi,\nu) = \frac{|\vect\Psi|^{\nu/2}}{2^{D\nu/2} \Gamma_D(\nu/2)} |\vect\varSigma|^{-(\nu+D+1)/2} \exp\left(-\frac12\mathrm{tr} (\vect\Psi {\vect\varSigma}^{-1}) \right) \end{equation*} By Lemma \ref{lem:chol_jacobians} we have \begin{equation*} \begin{aligned} p({\bf U}) &= p(\vect\varSigma) \left| \frac{d_h \vect\varSigma}{d_h \tp{\bf U}} \right| = 2^D p_{\mathcal W^{-1}}({\bf U} \tp{\bf U}\, ; \vect\Psi,\nu) \prod_{i=1}^D |u_{ii}|^i \\ &= \frac{|\vect\Psi|^{\nu/2}}{2^{D(\nu-2)/2} \Gamma_D(\nu/2)} |{\bf U}|^{-(\nu+D+1)} \prod_{i=1}^D |u_{ii}|^i \exp\left(-\frac12\mathrm{tr} (\vect\Psi {\bf U}^{-\mathsf T} {{\bf U}^{-1}}) \right) \,. \end{aligned} \end{equation*} This completes the proof. 
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:conn_unif}] To prove the first result, we use Lemma \ref{lem:chol_jacobians}: \begin{equation*} p(\vect\mathrm{P}) = p({\bf L}) \left| \frac{d_h {\bf L}}{d_h \vect\mathrm{P}} \right| \propto \prod_{i=2}^D |{\bf l}_i|^{2\vect\alpha_i-1} \prod_{i=1}^D |l_{ii}|^{i-(D+1)} = \prod_{i=1}^D |l_{ii}|^{(i-3)(D+1)} \end{equation*} On the other hand, from Equation (8) in \cite{barnard00}, we have the density of the marginally uniform distribution: \begin{equation*} p(\vect\mathrm{P}) \propto |\vect\mathrm{P}|^{\frac{D(D-1)}{2}-1} (\prod_{i} \vect\mathrm{P}_{ii})^{-\frac{D+1}{2}} = (\prod_{j=1}^D l_{jj}^2)^{\frac{D(D-1)}{2}-1} (\prod_{j=1}^D \prod_{i=1}^j l_{ii}^2)^{-\frac{D+1}{2}} = \prod_{j=1}^D |l_{jj}|^{(j-3)(D+1)} \end{equation*} where $\vect\mathrm{P}_{ii}$ is the $i$-th principal minor of $\vect\mathrm{P}$. Similarly, by Lemma \ref{lem:chol_jacobians} we can prove the second result \begin{equation*} p(\vect\mathrm{P}) = p({\bf L}) \left| \frac{d_h {\bf L}}{d_h \vect\mathrm{P}} \right| \propto \prod_{i=2}^D |{\bf l}_i|^{2\vect\alpha_i-1} \prod_{i=1}^D |l_{ii}|^{i-(D+1)} \propto 1 \,. \end{equation*} This completes the proof. \end{proof} \section{Posterior Contraction}\label{apx:post_contr} For the Gaussian likelihood models $p_i \sim \mathcal N(\vect\mu_i(t), \vect\varSigma_i(t))$, $i=0,1$, we first bound the Hellinger distance, the Kullback-Leibler divergence and the variance distance $V(p_0,p_1)= \mathrm E_0 (\log (p_0/p_1))^2$ in terms of the uniform norm in the following lemma. The notation $\lesssim$ means ``smaller than or equal to a universal constant times". \begin{lem} \label{lem:hKVbd} For any bounded measurable functions $\vect\varSigma_i : \mathcal X\rightarrow \mathbb R^{D^2}$ with Cholesky decompositions $\vect\varSigma_i={\bf L}_i\tp{{\bf L}_i}$, assume $\min_{1\leq j\leq D} \inf_{t\in \mathcal X} | l_{i,jj}(t)| \geq c_0>0$, $i=0,1$. 
Then we have \begin{itemize} \item $h(p_0, p_1) \lesssim \Vert {\bf L}_0 - {\bf L}_1\Vert_\infty^\frac12$ \item $K(p_0, p_1) \lesssim \Vert {\bf L}_0 - {\bf L}_1\Vert_\infty$ \item $V(p_0, p_1) \lesssim \Vert {\bf L}_0 - {\bf L}_1\Vert_\infty^2$ \end{itemize} \end{lem} \begin{proof} First we calculate \begin{equation}\label{eq:logpdf_diff} \begin{aligned} \log p_0 -\log p_1 &= \frac12 \left\{ \log \frac{|\vect\varSigma_1|}{|\vect\varSigma_0|} + \tp{({\bf y}-\vect\mu_*)} \vect\varSigma_*^{-1} ({\bf y}-\vect\mu_*) + ** \right\} \\ \vect\varSigma_*^{-1} &= \vect\varSigma_1^{-1} - \vect\varSigma_0^{-1}, \quad \vect\mu_* = \vect\varSigma_* (\vect\varSigma_1^{-1}\vect\mu_1 - \vect\varSigma_0^{-1}\vect\mu_0) \\ ** &= - \tp{(\vect\mu_1-\vect\mu_0)} \vect\varSigma_1^{-1} \vect\varSigma_* \vect\varSigma_0^{-1} (\vect\mu_1-\vect\mu_0) \end{aligned} \end{equation} Taking expectation of \eqref{eq:logpdf_diff} with respect to $p_0$ yields the following Kullback-Leibler divergence \begin{equation*} K(p_0, p_1) = \frac12 \left\{ \mathrm{tr} (\vect\varSigma_1^{-1}\vect\varSigma_0) + \tp{(\vect\mu_1-\vect\mu_0)} \vect\varSigma_1^{-1} (\vect\mu_1-\vect\mu_0) - D + \log \frac{|\vect\varSigma_1|}{|\vect\varSigma_0|} \right\} \end{equation*} Consider $\vect\mu_i\equiv 0$. 
By the non-negativity of the K-L divergence, we have for general $\vect\varSigma_i>0$, \begin{equation}\label{eq:logdetbd} \log \frac{|\vect\varSigma_0|}{|\vect\varSigma_1|} \leq \mathrm{tr} (\vect\varSigma_1^{-1}\vect\varSigma_0 - {\bf I}) \end{equation} Therefore we can bound the K-L divergence \begin{equation*} K(p_0, p_1) \leq \frac12 \{ \mathrm{tr} (\vect\varSigma_1^{-1}\vect\varSigma_0 - {\bf I}) + \mathrm{tr} (\vect\varSigma_0^{-1}\vect\varSigma_1 - {\bf I}) \} \leq C(D, c_0) \Vert \vect\varSigma_0 - \vect\varSigma_1\Vert_\infty \lesssim \Vert {\bf L}_0 - {\bf L}_1\Vert_\infty \end{equation*} where we bound each term involving trace \begin{equation}\label{eq:tracebd} \begin{aligned} & \mathrm{tr} (\vect\varSigma_1^{-1}\vect\varSigma_0 - {\bf I}) = \mathrm{tr} (\vect\varSigma_1^{-1}(\vect\varSigma_0 - \vect\varSigma_1)) \leq \Vert \vect\varSigma_1^{-1} \Vert_F \Vert \vect\varSigma_0 - \vect\varSigma_1\Vert_F \\ & \leq D^{3/2} \Vert \vect\varSigma_1^{-1} \Vert_2 \Vert \vect\varSigma_0 - \vect\varSigma_1\Vert_\infty \lesssim c_0^{-2D} (\Vert {\bf L}_0 \Vert_\infty+\Vert {\bf L}_1 \Vert_\infty) \Vert {\bf L}_0 - {\bf L}_1\Vert_\infty \end{aligned} \end{equation} Note that the last inequality holds because \begin{align*} c_0^D &\leq \prod_{j=1}^D | l_{jj} | = \left[ \prod_{j=1}^D \lambda_j(\vect\varSigma)\right]^\frac12 \leq \lambda_{\min}^\frac12 \left[\frac{(\mathrm{tr}\, \vect\varSigma)^{D-1}}{(D-1)^{D-1}}\right]^\frac12 \leq \lambda_{\min}^\frac12 \left[\frac{\Vert {\bf L} \Vert_F^{2(D-1)}}{(D-1)^{D-1}}\right]^\frac12 \\ &\leq \lambda_{\min}^\frac12 \left[\frac{D^{2(D-1)}}{(D-1)^{D-1}}\right]^\frac12 \Vert {\bf L} \Vert_\infty^{D-1} = \Vert \vect\varSigma^{-1} \Vert_2^{-\frac12} \left[\frac{D^{2(D-1)}}{(D-1)^{D-1}}\right]^\frac12 \Vert {\bf L} \Vert_\infty^{D-1} \end{align*} and we can write \begin{equation*} \vect\varSigma_0 - \vect\varSigma_1 = \frac12 \left[ ({\bf L}_0 - {\bf L}_1) \tp{({\bf L}_0 + {\bf L}_1)} + ({\bf L}_0 + {\bf L}_1)\tp{({\bf L}_0 - {\bf L}_1)}\right] \end{equation*} Now take
expectation of the squared \eqref{eq:logpdf_diff} with respect to $p_0$ to get the following variance distance \begin{equation*} V(p_0,p_1) = \frac12 \mathrm{tr} ((\vect\varSigma_1^{-1}\vect\varSigma_0 - {\bf I})^2) + \tp{(\vect\mu_1-\vect\mu_0)} \vect\varSigma_1^{-1} \vect\varSigma_0 \vect\varSigma_1^{-1} (\vect\mu_1-\vect\mu_0) + K^2(p_0, p_1) \end{equation*} Consider $\vect\mu_i\equiv 0$, and we can bound the variance distance by a similar argument as in \eqref{eq:tracebd} \begin{equation*} V(p_0,p_1) \leq \frac12 \Vert \vect\varSigma_1^{-1}(\vect\varSigma_0 - \vect\varSigma_1) \Vert_F^2 + K^2(p_0, p_1) \leq C \Vert \vect\varSigma_1^{-1} \Vert_F^2 \Vert \vect\varSigma_0 - \vect\varSigma_1\Vert_F^2 + K^2(p_0, p_1) \lesssim \Vert {\bf L}_0 - {\bf L}_1\Vert_\infty^2 \end{equation*} where we use the fact $\Vert {\bf A}{\bf B}\Vert_F \leq \Vert {\bf A}\Vert_F \Vert {\bf B}\Vert_F$. Lastly, the squared Hellinger distance for multivariate Gaussians can be calculated as \begin{equation*} h^2(p_0, p_1) = 1 - \frac{|\vect\varSigma_0 \vect\varSigma_1|^{1/4}}{\left|\frac{\vect\varSigma_0+\vect\varSigma_1}{2}\right|^{1/2}} \exp\left\{-\frac18 \tp{(\vect\mu_0-\vect\mu_1)}\left(\frac{\vect\varSigma_0+\vect\varSigma_1}{2}\right)^{-1}(\vect\mu_0-\vect\mu_1)\right\} \end{equation*} Consider $\vect\mu_i\equiv 0$.
Notice that $1-x\leq -\log x$, and by \eqref{eq:logdetbd} we can bound the squared Hellinger distance using a similar argument as in \eqref{eq:tracebd} \begin{equation*} \begin{aligned} h^2(p_0, p_1) &\leq \log \frac{\left|\frac{\vect\varSigma_0+\vect\varSigma_1}{2}\right|^{1/2}}{|\vect\varSigma_0 \vect\varSigma_1|^{1/4}} \leq \frac12 \mathrm{tr} ({\bf L}_0^{-\mathsf T} {\bf L}_1^{-1} (\vect\varSigma_0+\vect\varSigma_1)/2 - {\bf I} ) \\ & = \frac14 \{ \mathrm{tr} ({\bf L}_1^{-1}{\bf L}_0 - {\bf I}) + \mathrm{tr}({\bf L}_0^{-\mathsf T} \tp{\bf L}_1 - {\bf I}) \} \lesssim c_0^{-D} \Vert {\bf L}_0 - {\bf L}_1\Vert_\infty \end{aligned} \end{equation*} where we use $\Vert{\bf L}^{-1}\Vert_2=\sigma_{\min}^{-1}({\bf L}) \leq c_0^{-D}\left[\frac{D\Vert{\bf L}\Vert_\infty}{D-1}\right]^{D-1}$. \end{proof} Define the following \emph{coordinate concentration} function as in \eqref{eq:concent_1} \begin{equation}\label{eq:concent_coord} \phi_{l_{0,ij}}(\varepsilon) = \inf_{h_{ij}\in\mathbb H_{ij} : \,\Vert h_{ij}-l_{0,ij}\Vert_{ij} < \varepsilon} \Vert h_{ij}\Vert_{\mathbb H_{ij}}^2 - \log \Pi(l_{ij} :\, \Vert l_{ij} \Vert_{ij} < \varepsilon) \end{equation} It is easy to see that $\phi_{l_{0,ij}}(\varepsilon)\leq \phi_{{\bf L}_0}(\varepsilon)$ for all $1\leq j \leq i\leq D$. Let $\Vert \cdot\Vert:=\max_{i,j}\Vert\cdot\Vert_{ij}$. For $\varepsilon>0$, let $N(\varepsilon, B, d)$ denote the minimum number of balls of radius $\varepsilon$ needed to cover $B$ in a metric space with metric $d$, which is called the \emph{$\varepsilon$-covering number} of $B$. Now we are ready to prove the posterior contraction theorem.
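Before turning to the contraction proof, the closed-form Gaussian divergences used in Lemma \ref{lem:hKVbd} can be cross-checked numerically against direct integration; a univariate numpy sketch (all parameter values are arbitrary test values):

```python
import numpy as np

# Univariate cross-check of the closed-form Gaussian divergences
mu0, s0, mu1, s1 = 0.3, 1.2, -0.5, 0.8
S0, S1 = s0**2, s1**2

x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]
p0 = np.exp(-(x - mu0)**2 / (2 * S0)) / np.sqrt(2 * np.pi * S0)
p1 = np.exp(-(x - mu1)**2 / (2 * S1)) / np.sqrt(2 * np.pi * S1)

# Kullback-Leibler: closed form for D = 1 vs. integral of p0*log(p0/p1)
kl_closed = 0.5 * (S0 / S1 + (mu1 - mu0)**2 / S1 - 1.0 + np.log(S1 / S0))
kl_numeric = np.sum(p0 * (np.log(p0) - np.log(p1))) * dx

# squared Hellinger: closed form vs. 1 - integral of sqrt(p0*p1)
h2_closed = 1.0 - (S0 * S1)**0.25 / np.sqrt((S0 + S1) / 2) \
    * np.exp(-0.125 * (mu0 - mu1)**2 / ((S0 + S1) / 2))
h2_numeric = 1.0 - np.sum(np.sqrt(p0 * p1)) * dx
```

Both integrands decay rapidly, so a wide uniform grid already agrees with the closed forms to high precision.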
\begin{proof}[Proof of Theorem \ref{thm:post_contr}] We use Theorem 4 of \cite{ghosal07}, so it suffices to verify three conditions (the entropy condition 3.2, the complementary assertion 3.3 and the prior mass condition 3.4) as follows: \begin{align} \sup_{\varepsilon>\varepsilon_n} \log N(\varepsilon/36, \{{\bf L}\in \Theta_n: d_n({\bf L},{\bf L}_0)<\varepsilon\},d_n) &\leq n\varepsilon_n^2 \label{eqa:entropy}\\ \frac{\Pi_n(\Theta\backslash \Theta_n)}{\Pi_n(\overline B_n({\bf L}_0, \varepsilon_n))} &= o(e^{-2n\varepsilon_n^2}) \label{eqa:complement}\\ \frac{\Pi_n({\bf L}\in\Theta_n: \kappa\varepsilon_n<d_{n,H}({\bf L},{\bf L}_0)<2\kappa\varepsilon_n)}{\Pi_n(\overline B_n({\bf L}_0, \varepsilon_n))} & \leq e^{n\varepsilon_n^2\kappa^2/4}, \; \text{for\, large}\; \kappa \label{eqa:priormass} \end{align} where $\overline B_n({\bf L}_0, \varepsilon):=\{{\bf L}\in\Theta: \frac1n\sum_{i=1}^n K_i({\bf L}_0,{\bf L})\leq \varepsilon^2, \frac1n\sum_{i=1}^n V_i({\bf L}_0,{\bf L})\leq \varepsilon^2\}$, with $\Theta=L^\infty(\mathcal X)^{D(D+1)/2}$, $K_i({\bf L}_0,{\bf L})=K(P_{{\bf L}_0,i}, P_{{\bf L},i})$ and $V_i({\bf L}_0,{\bf L})=V(P_{{\bf L}_0,i}, P_{{\bf L},i})$. Applying Theorem 2.1 of \cite{vanderVaart08a} to each Gaussian random element $l_{ij}$ in $\mathbb B_{ij}=L^\infty(\mathcal X)$ for $1\leq j\leq i\leq D$, with $l_{0,ij}\in\overline{\mathbb H}_{ij}$, there exist a constant $C>2$ and measurable sets $B_{n,ij}\subset \mathbb B_{ij}$ such that \begin{align} \log N(3\varepsilon_n, B_{n,ij}, \Vert \cdot \Vert_\infty) &\leq 6 Cn\varepsilon_n^2 \label{eqa:gp_entropy}\\ \Pi(l_{ij}\notin B_{n,ij}) &\leq e^{-Cn\varepsilon_n^2} \label{eqa:gp_complement}\\ \Pi(\Vert l_{ij}-l_{0,ij}\Vert_\infty <2\varepsilon_n) &\geq e^{-n\varepsilon_n^2} \label{eqa:gp_priormass} \end{align} Now set $\Theta_n=\{{\bf L}: l_{ij} \in B_{n,ij}\}$, and $N(\varepsilon_n, \Theta_n, d_n) = \max_{1\leq j\leq i\leq D} N(3\varepsilon_{n,ij}, B_{n,ij}, \Vert \cdot \Vert_\infty)$.
By Lemma \ref{lem:hKVbd} and \eqref{eqa:gp_entropy}, we have the following global entropy bound because $d_n^2({\bf L},{\bf L}')\leq \Vert {\bf L}-{\bf L}'\Vert_\infty\leq \varepsilon_n^2$ for all ${\bf L}, {\bf L}'\in\Theta_n$: \begin{equation*} \log N(\varepsilon_n, \Theta_n, d_n) \leq 6Cn(\varepsilon_n^2)^2 \leq C'n\varepsilon_n^4 \leq n\varepsilon_n^2 \end{equation*} which is stronger than the local entropy condition \eqref{eqa:entropy}. Now by Lemma \ref{lem:hKVbd} and \eqref{eqa:gp_priormass} we have \begin{align*} \Pi_n(\overline B_n({\bf L}_0, \varepsilon_n)) & \geq \Pi_n(\Vert {\bf L}_0-{\bf L}\Vert_\infty\leq \varepsilon_n^2, \Vert {\bf L}_0-{\bf L}\Vert_\infty^2\leq \varepsilon_n^2) = \Pi_n(\Vert {\bf L}_0-{\bf L}\Vert_\infty\leq \varepsilon_n^2) \\ &= \Pi(\Vert l_\mathrm{argmax}-l_{0,\mathrm{argmax}}\Vert_\infty <\varepsilon_n^2) \geq e^{-n(\varepsilon_n^2/2)^2} = e^{-n\varepsilon_n^4/4} \geq e^{-n\varepsilon_n^2\kappa^2/4} \end{align*} Then \eqref{eqa:priormass} is immediately satisfied because the numerator is bounded by 1. Finally, by \eqref{eqa:gp_complement} we have \begin{align*} \Pi_n(\Theta\backslash \Theta_n) \leq \sum_{i=1}^D\sum_{j=1}^i \Pi(l_{ij}\notin B_{n,ij}) \leq \frac{D(D+1)}2 e^{-Cn\varepsilon_n^2} = o(e^{-2n\varepsilon_n^2}) \end{align*} Then \eqref{eqa:complement} holds because the denominator is bounded below by $e^{-n\varepsilon_n^4/4}$, which decays at a slower rate. This completes the proof.
\end{proof} \section{Spherical Hamiltonian Monte Carlo}\label{apx:sphHMC} \subsection{Derivation of the geometric integrator for SphHMC} The Lagrangian dynamics \eqref{eq:LD} on the sphere $\mathcal S^{D-1}(r)$ with the first $(D-1)$ coordinates can be split into the following two smaller dynamics: \noindent \begin{subequations} \begin{minipage}{.5\textwidth} \begin{equation}\label{eq:geodesic} \begin{dcases} \begin{aligned} \dot {\bf q}_{-D} & = {\bf v}_{-D}\\ \dot {\bf v}_{-D} & = - \tp {\bf v}_{-D} \vect\Gamma({\bf q}_{-D}) {\bf v}_{-D} \end{aligned} \end{dcases} \end{equation} \end{minipage} \begin{minipage}{.5\textwidth} \begin{equation}\label{eq:residual} \begin{dcases} \begin{aligned} \dot {\bf q}_{-D} & = {\bf 0}\\ \dot {\bf v}_{-D} & = - \nabla^{-1}_{{\bf q}_{-D}} \tilde U({\bf q}_{-D}) \end{aligned} \end{dcases} \end{equation} \end{minipage}\par \noindent \end{subequations} where \eqref{eq:geodesic} is the geodesic equation on the sphere $\mathcal S^{D-1}(r)$, which has an analytical solution; \eqref{eq:residual} also admits an analytical solution. Both define volume-preserving maps. The mapping $\mathcal I:\, {\bf q}_{-D} \mapsto {\bf q}=({\bf q}_{-D}, q_D)$ can be viewed as an embedding of $\mathcal S^{D-1}_+$ into $\mathbb R^{D}$. Denote its Jacobian as $d \mathcal I({\bf q}):=\begin{bmatrix} {\bf I}_{D-1}\\ -\frac{\tp{{\bf q}}_{-D}}{q_D}\end{bmatrix}$.
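The geodesic component \eqref{eq:geodesic} is solved in full coordinates by the standard great-circle rotation; a small numpy sketch (radius, dimension, and the random initial state are arbitrary test values) verifying that this flow stays on the sphere, keeps the velocity tangent, and preserves the speed:

```python
import numpy as np

rng = np.random.default_rng(1)
r, D = 2.0, 5
q0 = rng.standard_normal(D)
q0 *= r / np.linalg.norm(q0)          # point on the sphere of radius r
v0 = rng.standard_normal(D)
v0 -= (q0 @ v0) / r**2 * q0           # make the velocity tangent: q0^T v0 = 0

def geodesic(q, v, t, r):
    """Great-circle flow on the sphere of radius r (exact solution)."""
    s = np.linalg.norm(v)
    qt = q * np.cos(s * t / r) + (r / s) * v * np.sin(s * t / r)
    vt = -(s / r) * q * np.sin(s * t / r) + v * np.cos(s * t / r)
    return qt, vt

qt, vt = geodesic(q0, v0, 0.7, r)
```

Because this rotation is exact, the only error in the overall integrator comes from the half-step velocity updates of the residual dynamics.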
Then we have \begin{equation*} \begin{aligned} \tp{d \mathcal I({\bf q})} d \mathcal I({\bf q}) &= {\bf G}({\bf q}_{-D}), & d \mathcal I({\bf q}) {\bf G}({\bf q}_{-D})^{-1} \tp{d \mathcal I({\bf q})} &=\mathcal P({\bf q}) = {\bf I} - r^{-2} {\bf q}\tp {\bf q} \\ \nabla_{{\bf q}_{-D}} \tilde U({\bf q}) &= \tp{d \mathcal I({\bf q})} \nabla_{{\bf q}} \tilde U({\bf q}), & {\bf v} = d \mathcal I({\bf q}) {\bf v}_{-D}, \qquad & \tp {\bf v} {\bf v} = \tp {\bf v}_{-D} {\bf G}({\bf q}_{-D}) {\bf v}_{-D} \end{aligned} \end{equation*} Then Equation \eqref{eq:geodesic} has the following solution in full coordinates \begin{equation}\label{eq:geod}\small \begin{aligned} \begin{bmatrix} {\bf q}(t) \\ {\bf v}(t) \end{bmatrix} &= \begin{bmatrix} {\bf I} & {\bf 0}\\ \tp{\bf 0} & r^{-1}\Vert {\bf v}(0)\Vert_2 \end{bmatrix} \begin{bmatrix} \cos(r^{-1}\Vert {\bf v}(0)\Vert_2 t) & \sin(r^{-1}\Vert {\bf v}(0)\Vert_2 t)\\ -\sin(r^{-1}\Vert {\bf v}(0)\Vert_2 t) & \cos(r^{-1}\Vert {\bf v}(0)\Vert_2 t) \end{bmatrix} \begin{bmatrix} {\bf I} & {\bf 0}\\ \tp{\bf 0} & r\Vert {\bf v}(0)\Vert_2^{-1} \end{bmatrix} \begin{bmatrix} {\bf q}(0) \\ {\bf v}(0) \end{bmatrix}\\ &= \begin{bmatrix} {\bf q}(0) \cos(r^{-1}\Vert {\bf v}(0)\Vert_2 t) + r {\bf v}(0) \Vert {\bf v}(0)\Vert_2^{-1} \sin(r^{-1}\Vert {\bf v}(0)\Vert_2 t)\\ - r^{-1} {\bf q}(0) \Vert {\bf v}(0)\Vert_2 \sin(r^{-1}\Vert {\bf v}(0)\Vert_2 t) + {\bf v}(0) \cos(r^{-1}\Vert {\bf v}(0)\Vert_2 t) \end{bmatrix} \end{aligned} \end{equation} and Equation \eqref{eq:residual} has the following solution in full coordinates \begin{equation}\label{eq:resid} \begin{aligned} {\bf q}(t) &= {\bf q}(0)\\ {\bf v}(t) &= {\bf v}(0) - t\, d \mathcal I({\bf q}(0)) \nabla^{-1}_{{\bf q}_{-D}} \tilde U({\bf q}(0)) = {\bf v}(0) - t\, \mathcal P({\bf q}(0)) \nabla_{{\bf q}} \tilde U({\bf q}(0)) \end{aligned} \end{equation} So numerically updating \eqref{eq:resid} for $h/2$, updating \eqref{eq:geod} for $h$ and updating \eqref{eq:resid} for another $h/2$
yield the integrator \eqref{eq:sphHMC_prop}. \subsection{Reformulating Acceptance} At the end of the numerical simulation, a proposal $({\bf q}_T, {\bf v}_T)$ is accepted according to the following probability \begin{equation}\label{eq:sphHMC_acpt_classic} a_{sphHMC} = 1 \wedge \exp(-\Delta E),\qquad \Delta E = E({\bf q}_T, {\bf v}_T) - E({\bf q}_0, {\bf v}_0) \end{equation} This classical definition of the acceptance probability can be reformulated by replacing $\Delta E$ in \eqref{eq:sphHMC_acpt_classic} with \begin{equation*}\label{eq:sphHMC_acpt_reform} \Delta E=\sum_{\tau=1}^T \Delta E_{\tau}, \qquad \Delta E_{\tau} = E({\bf q}_{\tau}, {\bf v}_{\tau}) - E({\bf q}_{\tau-1}, {\bf v}_{\tau-1}) \end{equation*} With \eqref{eq:sphHMC_prop} we can write \[\small \begin{split} \Delta E' =& E({\bf q}',{\bf v}') - E({\bf q},{\bf v})\\ =& \tilde U({\bf q}') - \tilde U({\bf q}) + \frac12 \tp{{\bf v}'}_{-D} {\bf G}({\bf q}'_{-D}) {\bf v}'_{-D} - \frac12 \tp {\bf v}_{-D} {\bf G}({\bf q}_{-D}) {\bf v}_{-D} \\ =& \Delta \tilde U - \frac12 \Vert {\bf v} \Vert_2^2 + \frac12 \left\Vert {\bf v}^+ - \frac{h}{2} \mathcal P({\bf q}') \nabla_{{\bf q}}\tilde U({\bf q}') \right\Vert_2^2\\ =& \Delta \tilde U - \frac12 \Vert {\bf v} \Vert_2^2 + \frac12 \tp{{\bf v}^+} {\bf v}^+ - \frac{h}{2} \tp{{\bf v}^+} \mathcal P({\bf q}') \nabla_{{\bf q}}\tilde U({\bf q}') + \frac{h^2}{8} \tp{\nabla_{{\bf q}} \tilde U({\bf q}')} \mathcal P({\bf q}') \nabla_{{\bf q}} \tilde U({\bf q}') \\ =& \Delta \tilde U - \frac12 \Vert {\bf v} \Vert_2^2 + \frac12 \Vert {\bf v}^-\Vert_2^2 - \frac{h}{2} \tp{{\bf v}^+} \nabla_{{\bf q}}\tilde U({\bf q}') + \frac{h^2}{8} \Vert \nabla_{{\bf q}}\tilde U({\bf q}')\Vert_{\mathcal P({\bf q}')}^2\\ =& \Delta \tilde U - \frac12 \Vert {\bf v} \Vert_2^2 - \frac{h}{2} \tp{{\bf v}^+} \nabla_{{\bf q}}\tilde U({\bf q}') + \frac{h^2}{8} \Vert \nabla_{{\bf q}}\tilde U({\bf q}')\Vert_{\mathcal P({\bf q}')}^2 + \frac12 \Vert {\bf v} \Vert_2^2 - \frac{h}{2} \tp {\bf v} \nabla_{{\bf q}}\tilde
U({\bf q}) + \frac{h^2}{8} \Vert \nabla_{{\bf q}}\tilde U({\bf q})\Vert_{\mathcal P({\bf q})}^2 \\ =& \Delta \tilde U - \frac{h}{2} \left[ \tp{{\bf v}'} \nabla_{{\bf q}}\tilde U({\bf q}') + \tp {\bf v} \nabla_{{\bf q}}\tilde U({\bf q}) \right] - \frac{h^2}{8} \left[ \Vert \nabla_{{\bf q}}\tilde U({\bf q}')\Vert_{\mathcal P({\bf q}')}^2 - \Vert \nabla_{{\bf q}}\tilde U({\bf q})\Vert_{\mathcal P({\bf q})}^2 \right] \end{split} \] where $\mathcal P({\bf q}') {\bf v}^+ = {\bf v}^+$, $\mathcal P({\bf q}) {\bf v}^-={\bf v}^-$, and $\Vert {\bf v}^+\Vert_2^2 = \Vert {\bf v}^-\Vert_2^2$. Accumulating the above terms over $\tau=1,\cdots,T$ yields the reformulated acceptance probability \eqref{eq:sphHMC_acpt}. We now prove the energy conservation theorem \ref{thm:energy_conserv} \citep{beskos11}. \begin{proof}[Proof of Theorem \ref{thm:energy_conserv}] With the second equation of Lagrangian dynamics \eqref{eq:LD} we have \begin{equation*} \begin{aligned} - \langle {\bf v}(t), {\bf g}({\bf q}(t))\rangle &= \tp{{\bf v}(t)} \nabla_{{\bf q}}\tilde U({\bf q}(t)) = \tp{{\bf v}_{-D}(t)} \tp{d \mathcal I({\bf q})} \nabla_{{\bf q}}\tilde U({\bf q}(t)) = \tp{{\bf v}_{-D}(t)} \nabla_{{\bf q}_{-D}}\tilde U({\bf q}(t)) \\ &= \tp{{\bf v}_{-D}(t)} {\bf G}({\bf q}_{-D}(t)) \left[ \dot {\bf v}_{-D}(t) + \tp {\bf v}_{-D}(t)\vect\Gamma({\bf q}_{-D}(t)) {\bf v}_{-D}(t) \right] \\ &= \tp{{\bf v}_{-D}(t)} {\bf G}({\bf q}_{-D}(t)) \dot {\bf v}_{-D}(t) + \frac12 \tp{{\bf v}_{-D}(t)} \dot{\bf G}({\bf q}_{-D}(t)) {\bf v}_{-D}(t) \\ &= \frac{d}{dt} \frac12 \tp{{\bf v}_{-D}(t)} {\bf G}({\bf q}_{-D}(t)) {\bf v}_{-D}(t) = \frac{d}{dt} \frac12 \Vert {\bf v}(t) \Vert_2^2 \end{aligned} \end{equation*} Then the first equality holds because \begin{equation*} - \int_0^{T} \langle {\bf v}(t), {\bf g}({\bf q}(t))\rangle dt = \frac12 \Vert {\bf v}(T) \Vert_2^2 - \frac12 \Vert {\bf v}(0) \Vert_2^2 \end{equation*} Lastly, from the first equation of Lagrangian dynamics \eqref{eq:LD}, \begin{equation*} \tilde
U({\bf q}(T)) - \tilde U({\bf q}(0)) = \int_0^T \dot{\tilde U}({\bf q}(t))\, dt = \int_0^T \langle \dot {\bf q}(t), \nabla_{{\bf q}} \tilde U({\bf q}(t)) \rangle dt = \int_0^{T} \langle {\bf v}(t), {\bf g}({\bf q}(t))\rangle dt \end{equation*} Therefore the second equality is proved. \end{proof} \begin{algorithm}[t] \caption{Adaptive Spherical HMC (adp-SphHMC)} \label{alg:adp-sphHMC} \begin{algorithmic}\small \STATE Given ${\bf q}_0, a_0, N, N^{\mathrm{adapt}}$. \STATE Set $h_0=1$ or use Algorithm 4 of \cite{hoffman14}, $\mu=\log(10h_0), \overline h_0=1, \overline A_0=0, \gamma=0.05, n_0=10, \kappa=0.75$. \FOR{$n=1$ to $N$} \STATE Sample a new velocity ${\bf v}_{n-1} \sim \mathcal N({\bf 0},{\bf I}_D)$, and set ${\bf v}_{n-1} = \mathcal P({\bf q}_{n-1}) {\bf v}_{n-1}$. \STATE Set ${\bf q}^{(0)} = {\bf q}_{n-1}$, ${\bf v}^{(0)} = {\bf v}_{n-1}$. \FOR{$\tau=0$ to $T-1$ ($T =T_{2orth}\; \textrm{or}\; T_{stoch}$)} \STATE Run leapfrog step \eqref{eq:sphHMC_prop} to update $({\bf q}^{(\tau+1)}, {\bf v}^{(\tau+1)}) \leftarrow \mathcal T_{h_{n-1}}({\bf q}^{(\tau)}, {\bf v}^{(\tau)})$. \IF{Stopping criterion \eqref{eq:sphHMC_2orth} (or \eqref{eq:sphHMC_stoch}) is satisfied} \STATE Break \ENDIF \ENDFOR \STATE Accept the proposal $({\bf q}^{(T)},{\bf v}^{(T)})$ with probability $a^{sphHMC}_n$ in \eqref{eq:sphHMC_acpt} and set ${\bf q}_n = {\bf q}^{(T)}$; otherwise set ${\bf q}_n = {\bf q}_{n-1}$. \IF{$n\leq N^{\mathrm{adapt}}$} \STATE Set $\overline A_n = \left(1-\frac{1}{n+n_0}\right) \overline A_{n-1} + \frac{1}{n+n_0} (a_0-a_n)$. \STATE Set $\log h_n = \mu - \frac{\sqrt n}{\gamma} \overline A_n$, and $\log \overline h_n = n^{-\kappa} \log h_n + (1- n^{-\kappa}) \log \overline h_{n-1}$. \ELSE \STATE Set $h_n=\overline h_{N^{\mathrm{adapt}}}$.
\ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \section{Gradient Calculation in Normal-inverse-Wishart Problem}\label{apx:calculations} We use the representation \eqref{eq:strt_corr} and derive log-posterior (log-likelihood and log-prior) and the corresponding gradients for \eqref{eq:conj-iwishart} using matrix calculus. \subsection{Gradients of log-likelihood} Denote ${\bf y}_n^*:=({\bf y}_n-\vect\mu_0)/\vect\sigma$. Then the log-likelihood becomes \begin{equation*} \ell ({\bf y}^*; \vect\sigma, \vect\mathrm{P}) = - N \tp{\bm 1}_D \log\vect\sigma -\frac{N}{2} \log|\vect\mathrm{P}| -\frac12 \sum_{n=1}^N \tp{{\bf y}_n^*} \vect\mathrm{P}^{-1} {\bf y}_n^* \end{equation*} \noindent $\left[\frac{\partial \ell}{\partial \vect\tau}\right]$. \quad We calculate the gradient of log-likelihood with respect to $\vect\sigma$ \begin{equation*} \frac{\partial \ell}{\partial \sigma_k} = -N\sigma_k^{-1} + \sum_{n=1}^N \sum_i \frac{y^*_{ni}}{\sigma_i} \delta_{ik} (\vect\mathrm{P}^{-1} {\bf y}_n^*)_i, \quad i.\,e.\; \frac{\partial \ell}{\partial \vect\sigma} = -N \vect\sigma^{-1} + \sum_{n=1}^N \diag({\bf y}^*_n/\vect\sigma) (\vect\mathrm{P}^{-1} {\bf y}_n^*) \end{equation*} And with the transformation $\vect\tau=\log(\vect\sigma)$ it becomes \begin{equation*} \frac{\partial \ell}{\partial \vect\tau} = \frac{\tp{d\vect\sigma}}{d\vect\tau} \frac{\partial \ell}{\partial \vect\sigma} = \diag(\vect\sigma) \left[ -\frac{N}{\vect\sigma} + \sum_{n=1}^N \diag({\bf y}^*_n/\vect\sigma) (\vect\mathrm{P}^{-1} {\bf y}_n^*) \right] = -N {\bm 1}_D + \sum_{n=1}^N \diag({\bf y}^*_n) (\vect\mathrm{P}^{-1} {\bf y}_n^*) \end{equation*} \noindent $\left[\frac{\partial \ell}{\partial {\bf U}^*} \; \left( \frac{\partial \ell}{\partial {\bf L}} \right)\right]$. 
\quad When $\vect\mathrm{P} = {\bf U}^* \tp{({\bf U}^*)}$, $\frac12 \log|\vect\mathrm{P}| = \log|{\bf U}^*| = \tp{\bm 1}_D \log|\diag({\bf U}^*)|$ and thus we have \begin{equation*} \frac{\partial \ell}{\partial {\bf U}^*} = - \frac{N{\bf I}_D}{{\bf U}^*} + \sum_{n=1}^N \frac{d g_n(\widetilde{\bf U})}{d{\bf U}^*} \end{equation*} where $\frac{{\bf I}_D}{{\bf U}^*}=\diag(\{(u^*_{ii})^{-1}\})$ is a diagonal matrix formed by element-wise division, $\widetilde{\bf U}:=({\bf U}^*)^{-1}$ and $g_n(\widetilde{\bf U}):= -\frac12 \tp{{\bf y}_n^*} \tp{\widetilde{\bf U}} \widetilde{\bf U} {\bf y}_n^*$. Taking differential directly on $g_n({\bf U}^*):= -\frac12 \tp{{\bf y}_n^*} ({\bf U}^*)^{-\mathsf T} ({\bf U}^*)^{-1} {\bf y}_n^*$, and noting that differential and trace operators are exchangeable, we have \begin{equation*} \begin{aligned} d g_n({\bf U}^*) &= -\frac12 \mathrm{tr} ( \tp{{\bf y}_n^*} d ({\bf U}^*)^{-\mathsf T} ({\bf U}^*)^{-1} {\bf y}_n^* + \tp{{\bf y}_n^*} ({\bf U}^*)^{-\mathsf T} d ({\bf U}^*)^{-1} {\bf y}_n^* ) \\ &= \frac12 \left[ \mathrm{tr} (\tp{{\bf y}_n^*} ({\bf U}^*)^{-\mathsf T} d \tp{({\bf U}^*)} \vect\mathrm{P}^{-1} {\bf y}_n^* ) + \mathrm{tr} ( \tp{{\bf y}_n^*} \vect\mathrm{P}^{-1} d {\bf U}^* ({\bf U}^*)^{-1} {\bf y}_n^* ) \right] \\ &= \mathrm{tr} ( \tp{{\bf y}_n^*} \vect\mathrm{P}^{-1} d {\bf U}^* ({\bf U}^*)^{-1} {\bf y}_n^* ) = \mathrm{tr} ( ({\bf U}^*)^{-1} {\bf y}_n^* \tp{{\bf y}_n^*} \vect\mathrm{P}^{-1} d {\bf U}^* ) \end{aligned} \end{equation*} Conversion from differential to normal derivative form in the numerator layout \citep{minka00} yields \begin{equation*} \frac{\partial g_n({\bf U}^*)}{\partial \tp{({\bf U}^*)}} = \mathrm{tril} ( ({\bf U}^*)^{-1} {\bf y}_n^* \tp{{\bf y}_n^*} \vect\mathrm{P}^{-1} ) , \quad i.e.,\; \frac{\partial g_n({\bf U}^*)}{\partial {\bf U}^*} = \mathrm{triu} ( \vect\mathrm{P}^{-1} {\bf y}_n^* \tp{{\bf y}_n^*} ({\bf U}^*)^{-\mathsf T} ) \end{equation*} Finally, we have \begin{equation*} \frac{\partial 
\ell}{\partial {\bf U}^*} = - \frac{N{\bf I}_D}{{\bf U}^*} + \mathrm{triu} ( \vect\mathrm{P}^{-1} \sum_{n=1}^N {\bf y}_n^* \tp{{\bf y}_n^*} ({\bf U}^*)^{-\mathsf T} ) \end{equation*} When $\vect\mathrm{P} = {\bf L} \tp{\bf L}$, by similar argument as above, we have \begin{equation*} \frac{\partial \ell}{\partial {\bf L}} = - \frac{N{\bf I}_D}{{\bf L}} + \mathrm{tril} ( \vect\mathrm{P}^{-1} \sum_{n=1}^N {\bf y}_n^* \tp{{\bf y}_n^*} {\bf L}^{-\mathsf T} ) \end{equation*} \subsection{Gradients of log-priors} The logarithm of conditional prior $p(\vect\sigma| {\bf U}^*)$ after transformation $\vect\tau=\log(\vect\sigma)$ becomes \begin{equation*} \begin{aligned} \log p(\vect\tau| {\bf U}^*) = \log p(\vect\sigma| {\bf U}^*) + \log \left| \frac{d \vect\sigma}{d \vect\tau}\right| &= \sum_{i=1}^D (i-(\nu+D))\tau_i -\frac12\mathrm{tr} (\vect\Psi \diag(e^{-\vect\tau}) \vect\mathrm{P}^{-1} \diag(e^{-\vect\tau})) \end{aligned} \end{equation*} \noindent $\left[\frac{d}{d\vect\tau} \log p(\vect\tau| {\bf U}^*)\right]$. \quad We calculate the derivative of $\log p(\vect\tau| {\bf U}^*)$ with respect to $\vect\tau$ \begin{equation*} \frac{d}{d\vect\tau} \log p(\vect\tau| {\bf U}^*) = {\bf i} -(\nu+D) + \frac{d g(\vect\tau)}{d \vect\tau} \end{equation*} where ${\bf i}=\tp{[1,\cdots,D]}$, and $g(\vect\tau) = -\frac12\mathrm{tr} (\vect\Psi \diag(e^{-\vect\tau}) \vect\mathrm{P}^{-1} \diag(e^{-\vect\tau}))$. 
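The likelihood gradient $\partial\ell/\partial{\bf L}$ derived above is easy to validate by finite differences; a numpy sketch (dimensions and data are arbitrary test values, with `Y` standing in for the standardized data $\{{\bf y}^*_n\}$ and the $\vect\sigma$-dependent terms dropped since they do not involve ${\bf L}$):

```python
import numpy as np

rng = np.random.default_rng(2)
D, N = 3, 50
Y = rng.standard_normal((N, D))      # stands in for the standardized data y*_n

def loglik(L):
    # P-dependent part of the log-likelihood:
    # -N/2 log|P| - 1/2 sum_n y*_n^T P^{-1} y*_n with P = L L^T
    P = L @ L.T
    Pinv = np.linalg.inv(P)
    _, logdet = np.linalg.slogdet(P)
    return -0.5 * N * logdet - 0.5 * np.einsum('ni,ij,nj->', Y, Pinv, Y)

L = np.tril(rng.standard_normal((D, D)))
np.fill_diagonal(L, np.abs(np.diag(L)) + 1.0)

# closed form: -N I/L + tril(P^{-1} (sum_n y*_n y*_n^T) L^{-T})
P = L @ L.T
Pinv = np.linalg.inv(P)
grad_closed = -N * np.diag(1.0 / np.diag(L)) \
    + np.tril(Pinv @ (Y.T @ Y) @ np.linalg.inv(L).T)

# central finite differences over the free lower-triangular entries
grad_fd = np.zeros((D, D))
eps = 1e-6
for i in range(D):
    for j in range(i + 1):
        E = np.zeros((D, D))
        E[i, j] = eps
        grad_fd[i, j] = (loglik(L + E) - loglik(L - E)) / (2 * eps)
```

The same pattern (perturb one free entry, recompute the objective) applies to the ${\bf U}^*$ and $\vect\tau$ gradients.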
Noting that differential and trace operators are exchangeable, we have \begin{equation*} \begin{aligned} d g(\vect\tau) &= -\frac12\mathrm{tr} (\vect\Psi d \diag(e^{-\vect\tau}) \vect\mathrm{P}^{-1} \diag(e^{-\vect\tau}) + \vect\Psi \diag(e^{-\vect\tau}) \vect\mathrm{P}^{-1} d \diag(e^{-\vect\tau})) \\ &= \frac12 \left[ \mathrm{tr} (\vect\mathrm{P}^{-1} \diag(e^{-\vect\tau}) \vect\Psi \diag(e^{-\vect\tau}) \diag(d \vect\tau) ) + \mathrm{tr} ( \vect\Psi \diag(e^{-\vect\tau}) \vect\mathrm{P}^{-1} \diag(e^{-\vect\tau}) \diag(d \vect\tau) ) \right] \\ &= \sum_{i=1}^D d \tau_i \sum_{j=1}^D \psi_{ij} e^{-\tau_j} \rho^{ji} e^{-\tau_i} \end{aligned} \end{equation*} Thus \begin{equation*} \frac{d g(\vect\tau)}{d \vect\tau} = \diag (\vect\Psi \diag(e^{-\vect\tau}) \vect\mathrm{P}^{-1}) \diag(e^{-\vect\tau}) = \diag (\vect\Psi \diag(e^{-\vect\tau}) \vect\mathrm{P}^{-1}) \circ e^{-\vect\tau} \end{equation*} where $\diag$ acting on a vector forms a diagonal matrix, while acting on a matrix it extracts the diagonal vector. $\circ$ is the Hadamard product (a.k.a. the Schur product), i.e., the entrywise product. \noindent $\left[\frac{d}{d{\bf U}^*} \log p({\bf U}^*|\vect\tau)\right]$. \quad Now consider the derivative of $\log p({\bf U}^*|\vect\tau)$ with respect to the matrix ${\bf U}^*$. We have \begin{equation*} \frac{d}{d{\bf U}^*} \log p({\bf U}^*|\vect\tau) = \frac{\diag({\bf i}-(\nu+D+1))}{{\bf U}^*} + \frac{d g({\bf U}^*)}{d {\bf U}^*} \end{equation*} where $g({\bf U}^*)=-\frac12\mathrm{tr} (\vect\Psi \diag(e^{-\vect\tau}) ({\bf U}^*)^{-\mathsf T} ({\bf U}^*)^{-1} \diag(e^{-\vect\tau}))$, and $\frac{\diag({\bf i})}{{\bf U}^*}$ is a diagonal matrix formed by element-wise division.
Again by the exchangeability between differential and trace, we have \begin{equation*}\small \begin{aligned} &d g({\bf U}^*)\\ &= -\frac12\mathrm{tr} (\vect\Psi \diag(e^{-\vect\tau}) d ({\bf U}^*)^{-\mathsf T} ({\bf U}^*)^{-1} \diag(e^{-\vect\tau}) + \vect\Psi \diag(e^{-\vect\tau}) ({\bf U}^*)^{-\mathsf T} d ({\bf U}^*)^{-1} \diag(e^{-\vect\tau})) \\ &= \frac12 \left[ \mathrm{tr} (\vect\Psi \diag(e^{-\vect\tau}) ({\bf U}^*)^{-\mathsf T} d \tp{({\bf U}^*)} \vect\mathrm{P}^{-1} \diag(e^{-\vect\tau}) ) + \mathrm{tr} ( \vect\Psi \diag(e^{-\vect\tau}) \vect\mathrm{P}^{-1} d {\bf U}^* ({\bf U}^*)^{-1} \diag(e^{-\vect\tau}) ) \right] \\ &= \frac12 \left[ \mathrm{tr} (\diag(e^{-\vect\tau}) \vect\mathrm{P}^{-1} d {\bf U}^* ({\bf U}^*)^{-1} \diag(e^{-\vect\tau}) \vect\Psi ) + \mathrm{tr} ( \vect\Psi \diag(e^{-\vect\tau}) \vect\mathrm{P}^{-1} d {\bf U}^* ({\bf U}^*)^{-1} \diag(e^{-\vect\tau}) ) \right] \\ &= \mathrm{tr} ( ({\bf U}^*)^{-1} \diag(e^{-\vect\tau}) \vect\Psi \diag(e^{-\vect\tau}) \vect\mathrm{P}^{-1} d {\bf U}^* ) \end{aligned} \end{equation*} Therefore we have \begin{equation*} \frac{d g({\bf U}^*)}{d \tp{({\bf U}^*)}} = \mathrm{tril}( ({\bf U}^*)^{-1} \diag(e^{-\vect\tau}) \vect\Psi \diag(e^{-\vect\tau}) \vect\mathrm{P}^{-1} ) \,, \end{equation*} that is, \begin{equation*} \frac{d g({\bf U}^*)}{d {\bf U}^*} = \mathrm{triu}( \vect\mathrm{P}^{-1} \diag(e^{-\vect\tau}) \vect\Psi \diag(e^{-\vect\tau}) ({\bf U}^*)^{-\mathsf T} ) \,, \end{equation*} \noindent $\left[\frac{d}{d\vect\tau} \log p(\vect\tau),\; \frac{d}{d{\bf L}} \log p({\bf L})\right]$. 
\quad Lastly, the log-priors for \eqref{eq:nonconj-iwishart} and their gradients after transformation $\vect\tau:=\log(\vect\sigma)$ are calculated \begin{equation*} \begin{aligned} \log p(\vect\tau) &= -\frac12 \tp{\vect\tau} \vect\tau, & \frac{d}{d\vect\tau} \log p(\vect\tau) &= -\vect\tau \\ \log p({\bf l}_i) &= \log p({\bf l}_i^2) + \tp{\bm 1}_i\log|2{\bf l}_i| = \tp{(2(\vect\alpha_i-1)+{\bm 1}_i)} \log |{\bf l}_i|, & \frac{d}{d{\bf l}_i} \log p({\bf l}_i) &= \frac{2(\vect\alpha_i-1)+{\bm 1}_i}{{\bf l}_i} \end{aligned} \end{equation*} The bottom row can be written as \begin{equation*} \begin{aligned} \log p({\bf L}) &= \sum_{i=1}^D \tp{(2\vect\alpha_i-1)} \log |{\bf l}_i|, & \frac{d}{d\bf L} \log p({\bf L}) &= \frac{2\vect\alpha-1}{\bf L} \end{aligned} \end{equation*} where $\frac{1}{\bf L}$ denotes a lower-triangular matrix with $l_{ij}^{-1}$ being its $(i,j)$ entry $(i\ge j)$. \section{More Numerical Results} \subsection{Flexibility of von Mises-Fisher Prior and Bingham Prior}\label{apx:morepriors} \begin{figure} \caption{Marginal posterior, prior (induced from von Mises-Fisher distribution) densities of correlations and MLEs with different settings for concentration parameter $\kappa$, estimated with $10^6$ samples.} \label{fig:post_corr_vmf} \end{figure} \begin{dfn}[Fisher-Bingham / Kent distribution] The probability density function of the Kent distribution for the random vector ${\bf l}_i\in\mathcal S^{i-1}$ is given by \begin{equation*}\label{eq:Kent_density} p({\bf l}_i) \propto \exp\left\{\kappa \tp{\vect\gamma_1} {\bf l}_i + \sum_{k=2}^i \beta_k (\tp{\vect\gamma_k} {\bf l}_i)^2\right\} \end{equation*} where $\sum_{k=2}^i \beta_k=0$ and $0\leq 2|\beta_k|<\kappa$ and the vectors $\{\vect\gamma_k\}_{k=1}^i$ are orthonormal. \end{dfn} \begin{rk} The parameters $\kappa$ and $\vect\gamma_1$ are called the \emph{concentration} and the \emph{mean direction} parameter, respectively. 
The greater the value of $\kappa$, the higher the concentration of the distribution around the mean direction $\vect\gamma_1$. The choice of $\vect\gamma_1$ could impact our priors when modeling correlations. Parameters $\{\beta_k\}_{k=2}^i$ determine the ellipticity of the contours of equal probability. The vectors $\{\vect\gamma_k\}_{k=2}^i$ determine the orientation of the equal probability contours on the sphere. \end{rk} \begin{rk} If $\beta_k=0$ for $k=2,\cdots, i$, then this distribution reduces to the \emph{von Mises-Fisher} distribution \citep{fisher53,mardia09}, denoted as $\mathrm{vMF}(\kappa, \vect\gamma_1)$. If $\kappa=0$, then it defines an antipodally symmetric distribution, named the \emph{Bingham} distribution \citep{bingham74}, denoted as $\mathrm{Bing}({\bf A})$, with $\tp{\bf l}_i {\bf A} {\bf l}_i=\sum_{k=2}^i \beta_k (\tp{\vect\gamma_k} {\bf l}_i)^2$. \end{rk} As before, to induce smaller correlations, one can put higher prior probabilities for ${\bf l}_i$ on the poles of $\mathcal S^{i-1}$. For example, we might consider ${\bf l}_i \sim \mathrm{vMF}(\kappa, {\bf n}_i)$, or ${\bf l}_i \sim \mathrm{Bing}(\zeta\diag({\bf n}_i))$, where ${\bf n}_i:=\tp{(0,\cdots,0,1)}$ denotes the north pole. We now consider the following von Mises-Fisher prior \citep{fisher87,fisher53,mardia09} for ${\bf l}_i$, the $i$-th row of the Cholesky factor ${\bf L}$ of the correlation matrix $\vect\mathrm{P}$ in the structured model \eqref{eq:strt_corr}.
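A simple way to see how such polar priors act is to sample them; the sketch below draws from $\mathrm{vMF}(\kappa,{\bf n}_i)$ by rejection from the uniform distribution on the sphere (a generic textbook scheme, not the sampler used in our experiments; $\kappa$ and the dimension are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(3)
i, kappa = 3, 10.0
mu = np.zeros(i)
mu[-1] = 1.0                                  # the north pole n_i

# propose uniformly on S^{i-1}, accept w.p. exp(kappa*(mu^T x - 1)) <= 1,
# which is proportional to the vMF density
X = rng.standard_normal((200000, i))
X /= np.linalg.norm(X, axis=1, keepdims=True)
accept = rng.random(X.shape[0]) < np.exp(kappa * (X @ mu - 1.0))
samples = X[accept]
mean_dot = (samples @ mu).mean()              # close to 1 for large kappa
```

For $\kappa=10$ the accepted draws concentrate around the north pole, so the induced $l_{ii}$ stays near one and the off-diagonal entries, hence the correlations, are shrunk toward zero.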
\begin{dfn}[Von Mises-Fisher distribution] The probability density function of the von Mises-Fisher distribution for the random vector ${\bf l}_i\in\mathcal S^{i-1}$ is given by \begin{equation*}\label{eq:vMF_density} p({\bf l}_i) = C_i(\kappa) \exp(\kappa \tp{\vect\mu} {\bf l}_i) \end{equation*} where $\kappa\geq 0$, $\Vert \vect\mu\Vert=1$ and the normalization constant $C_i(\kappa)$ is equal to \begin{equation*} C_i(\kappa) = \frac{\kappa^{i/2-1}}{(2\pi)^{i/2} I_{i/2-1}(\kappa)} \end{equation*} where $I_v$ denotes the modified \emph{Bessel} function of the first kind at order $v$. Denote ${\bf l}_i \sim \mathrm{vMF}(\kappa, \vect\mu)$. \end{dfn} Since we have no prior knowledge about the mean direction $\vect\mu$, we choose $\vect\mu={\bf n}_i=\tp{({\bm 0}_{i-1},1)}$ that favors the polar direction, i.e. \begin{equation*}\label{eq:nonconj-iwishart2} {\bf l}_i \sim \mathrm{vMF}(\kappa, {\bf n}_i), \qquad p({\bf l}_i) \propto \exp(\kappa l_{ii}), \quad i=2,\cdots, D \end{equation*} where we consider i) $\kappa=1$; ii) $\kappa=10$; iii) $\kappa=100$. With the von Mises-Fisher prior, we have \begin{equation*} \log p({\bf L}) = \sum_{i=1}^D \kappa l_{ii} = \kappa \mathrm{tr}({\bf L}), \quad \frac{d}{d\bf L} \log p({\bf L}) = \kappa {\bf I} \end{equation*} We repeat the experiment in Section \ref{sec:prieff} with the von Mises-Fisher prior for ${\bf l}_i$. The posteriors, priors and maximum likelihood estimates (MLEs) of the correlations with different $\kappa$'s are plotted in Figure \ref{fig:post_corr_vmf}. With a larger concentration parameter $\kappa$, the posterior is pulled more towards $0$.
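As a side check of the definition above, on the circle ($i=2$) the normalizing constant reduces to $C_2(\kappa)^{-1}=2\pi I_0(\kappa)$; the sketch below (with an arbitrary test value of $\kappa$) compares a numerical integral of the unnormalized density with the power series of $I_0$:

```python
import numpy as np

kappa = 5.0   # arbitrary test value

# I_0(kappa) via its power series sum_m (kappa/2)^{2m} / (m!)^2,
# with terms built recursively for numerical stability
I0, term = 0.0, 1.0
for m in range(1, 80):
    I0 += term
    term *= (kappa / 2.0)**2 / m**2

# numerical integral of exp(kappa*cos(theta)) over one period of the circle
theta = np.linspace(0.0, 2.0 * np.pi, 200001)
dtheta = theta[1] - theta[0]
integral = np.sum(np.exp(kappa * np.cos(theta[:-1]))) * dtheta
```

The rectangle rule is spectrally accurate for this smooth periodic integrand, so the two sides agree essentially to machine precision.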
\begin{figure} \caption{Marginal posterior, prior (induced from Bingham distribution) densities of correlations and MLEs with different settings for concentration parameter $\zeta$, estimated with $10^6$ samples.} \label{fig:post_corr_bing} \end{figure} Finally, we consider the following Bingham prior \citep{bingham74,onstott80} for ${\bf l}_i$ in the structured model \eqref{eq:strt_corr}. \begin{dfn}[Bingham distribution] The probability density function of the Bingham distribution for the random vector ${\bf l}_i\in\mathcal S^{i-1}$ is given by \begin{equation*}\label{eq:Bingham_density} p({\bf l}_i) = {}_1F_1(\frac12;\,\frac{n}{2};\,{\bf Z})^{-1} \exp (\tp{\bf l}_i {\bf M} {\bf Z} \tp{\bf M} {\bf l}_i) \end{equation*} where ${\bf M}$ is an orthogonal orientation matrix, ${\bf Z}$ is a diagonal concentration matrix, and ${}_1F_1(\cdot;\,\cdot;\,\cdot)$ is a \emph{confluent hypergeometric function} of matrix argument. Denote ${\bf l}_i \sim \mathrm{Bing}({\bf M}, {\bf Z})$. \end{dfn} Note that, according to \cite{bingham74}, this distribution is defined for ${\bf Z}$ up to an arbitrary scalar matrix $\zeta_0{\bf I}$. Therefore, we consider ${\bf M}={\bf I}$ and ${\bf Z}=\zeta\diag({\bf n}_i)$, which favors the polar direction, i.e. \begin{equation*}\label{eq:nonconj-iwishart3} {\bf l}_i \sim \mathrm{Bing}({\bf I}, \zeta\diag({\bf n}_i)), \qquad p({\bf l}_i) \propto \exp(\zeta l_{ii}^2), \quad i=2,\cdots, D \end{equation*} where we consider i) $\zeta=1$; ii) $\zeta=10$; iii) $\zeta=100$. The log-prior and its gradient are calculated as follows \begin{equation*} \log p({\bf L}) = \sum_{i=1}^D \zeta l_{ii}^2 = \zeta \Vert\diag({\bf L})\Vert^2, \quad \frac{d}{d\bf L} \log p({\bf L}) = 2\zeta \diag({\bf L}) \end{equation*} We repeat the above experiment with the Bingham prior for ${\bf l}_i$. The posteriors, priors and maximum likelihood estimates (MLEs) of correlations with different $\zeta$'s are plotted in Figure \ref{fig:post_corr_bing}. 
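The Bingham log-prior and its gradient above can likewise be sketched in a few lines of Python (a hedged illustration of the formulas; function names are ours):

```python
def bingham_log_prior(L, zeta):
    """log p(L) = zeta * ||diag(L)||^2 from the polar Bingham priors."""
    return zeta * sum(L[i][i] ** 2 for i in range(len(L)))

def bingham_log_prior_grad(L, zeta):
    """d/dL log p(L) = 2 * zeta * diag(L), returned as a full matrix."""
    D = len(L)
    return [[2.0 * zeta * L[i][i] if i == j else 0.0 for j in range(D)]
            for i in range(D)]
```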
With a larger concentration parameter $\zeta$, the posteriors are pulled more towards the induced priors and concentrate on 0. \subsection{More Comparison to Latent Factor Process Model}\label{apx:more_latentfactor} The example of a simulated periodic process in Section \ref{sec:flexibility} is considered for $D=2$ for simplicity and convenience of visualization. Here we consider the higher dimension $D=10$. The purpose here is not to show scalability, but rather to investigate the robustness of our dynamic model \eqref{eq:strt_dyn_model} in terms of full flexibility. \begin{figure} \caption{Simulated data $y$ over the underlying mean functions $\mu_t$ (left), the variance functions $\varSigma_t$, and the correlation functions $\mathrm{P}_t$ (right) of 10-dimensional periodic processes.} \label{fig:periodic_process_D10} \end{figure} We generate $M=20$ trials of data over $N=100$ evenly spaced points over $[0,1]$. The true mean, variance and correlation functions are modified from the example \eqref{eq:periodic} using the Clausen functions \citep{clausen1832}. As seen in Figure \ref{fig:periodic_process_D10}, they behave more intricately, with higher heterogeneity among the processes. This could impose a further challenge for latent-factor-based models like \eqref{eq:lpf_bncr} compared to $D=2$. We repeat the experiments in Section \ref{sec:flexibility} and compare our dynamic model \eqref{eq:strt_dyn_model} with the latent factor process model \eqref{eq:lpf_bncr} by \cite{fox15}. To aid the visualization, we subtract the estimated processes from their true values and plot the error functions in Figure \ref{fig:periodic_constrast_more}. Even though we tried our best to tune the parameters, e.g. $L$, the number of basis functions, and $k$, the size of latent factors, the latent factor process model \citep{fox15} is outperformed by our flexible dynamic model \eqref{eq:strt_dyn_model} in reducing estimation errors. 
\begin{figure} \caption{Estimated error functions of the underlying mean $\mu_t$ (left column), variance $\sigma_t$ (middle column) and correlation $\rho_t$ (right column) of 10-dimensional periodic processes, using the latent factor process model (upper row) and our flexible model (lower row), based on $M=20$ trials of data over $N=100$ evenly spaced points. Solid lines are estimated errors and shaded regions are $95\%$ credible bands.} \label{fig:periodic_constrast_more} \end{figure} \begin{figure} \caption{Locations of recorded LFP signals in the CA1 subregion of the rat's hippocampus.} \label{fig:LFP_tetrode_locations} \end{figure} \subsection{More Results on the Analysis of LFP data}\label{apx:moreLFP} In Section \ref{sec:lfp}, we studied the LFP data collected from the hippocampus of rats performing a complex sequence memory task. Figure \ref{fig:LFP_tetrode_locations} shows the 12 locations in the CA1 subregion of the rat's hippocampus where LFP signals were recorded. Figure \ref{fig:LFP_theta} shows the theta-filtered traces (4-12 Hz; left panel) and the estimated correlation processes under different experimental conditions (InSeq vs OutSeq; right panel). Here we observe a similar dynamic pattern of correlation matrices under the two conditions that diverges after 500 ms, reflecting the neural activity associated with the cognitive process of identifying whether events occurred in their expected order. \begin{figure} \caption{Results of LFP theta signals: data (left), estimation of correlations (right).} \label{fig:LFP_theta} \end{figure} We also studied the correlation evolution on the full 12 channels, revealing the block structure of those channels and the same changing pattern under different experimental conditions as discovered with the chosen 4 channels in Section \ref{sec:lfp}. A video demonstrating the 12-channel result is available at \url{https://www.youtube.com/watch?v=NMUUic0IDsM}. \end{document}
\begin{document} \title{Violation of all the second laws of thermal operations by inhomogeneous reservoirs} \author{Angeline Shu} \affiliation{Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542.} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543.} \author{Yu Cai} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543.} \author{Stella Seah} \affiliation{Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542.} \author{Stefan Nimmrichter} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543.} \author{Valerio Scarani} \affiliation{Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542.} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543.} \begin{abstract} In the resource theory of thermodynamics, the decrease of the free energy based on von Neumann entropy is not a sufficient condition to determine free evolution. Rather, a whole family of generalised free energies $F_{\alpha}$ must be monotonically decreasing. We study the resilience of this result to relaxations of the framework. We use a toy collisional model, in which the deviations from the ideal situation can be described as arising from inhomogeneities of local fields or temperatures. For any small amount of perturbation, we find that there exist initial states such that both single-shot and averaged values of $F_{\alpha}$ do not decrease monotonically for all $\alpha>0$. A geometric representation accounts for the observed behavior in a graphic way. \end{abstract} \maketitle \section{Introduction} Foundationally, thermodynamics is a theory of states and their transformations. 
In quantum information science, the same can be said for entanglement theory. This analogy was noticed very early, and later resulted in the development of the broad framework of resource theories. The \textit{resource theory of thermal operations} is a formalisation of the thermodynamics of systems in contact with thermal baths \cite{Brandao_2013}. The basic framework having been explored in depth (see \cite{Gour_2015,Goold_2016} for reviews), recent works have focused on the resilience of the results when some of the assumptions are relaxed \cite{Sparaciari_2017,Meer_2017,Mueller_2017,Baumer_2017}. Our work contributes to this line of research. With two thermal baths at different temperatures, one can run an engine: therefore, the lack of resources is described by what can be achieved with a single thermal bath at temperature $T$. More precisely, the free states are the thermal states $\tau$ at temperature $T$ and the free operations $U$ are those that conserve the total energy. Both notions are defined with respect to a reference Hamiltonian, usually taken as $H=H_S+H_R$, where $S$ indicates the system and $R$ a reservoir of auxiliary systems. Then, thermal states read $\tau=\tau_S\otimes\tau_R$ where $\tau_X=e^{-\beta H_X}/Z_X$, $Z_X=\mathrm{tr}(e^{-\beta H_X})$ and $\beta=1/kT$. Free operations $U$ are required to satisfy \footnote{We notice that, in a dynamical system, the approach to energy conservation is different. There, the whole evolution is generated by the Hamiltonian that includes interactions. Energy is then conserved if the total Hamiltonian is time-independent; if it is time-dependent, there is no reason for energy to be conserved.} \ba\label{conserve} [H,U]=0\,. \ea If the system is prepared in an arbitrary state $\rho$, a \textit{free evolution} (i.e. one that can be achieved without further resources) is then of the form \ba\label{dynamics} \mathcal{E}[\rho]&=& \mathrm{tr}_{R}\bigl[U (\rho\otimes\tau_{R})U^{\dagger}\bigr]\,. 
\ea The set of criteria under which a free evolution is possible can be seen as the analog of the second law of thermodynamics. So far, it has not been possible to reduce these criteria to a single one \cite{Gour_2015}. Brandao and coworkers \cite{Brandao_2015} based the \textit{second laws of thermal operations} on the monotonic decrease of a continuous family of \textit{generalised free energies} $F_{\alpha}(\rho||\tau_{S})$, $\alpha\in\mathbb{R}$. For states $\rho=\sum_jp_j\ket{j}\bra{j}$ diagonal in the eigenbasis of $H_S$, the expression is \ba F_{\alpha}(\rho||\tau_{S})&=&k_{\rm B}T\, \left[D_{\alpha}(\rho||\tau_{S})- \log Z_S\right]\,,\ea where the R\'enyi divergence is given by \begin{align} D_{\alpha}(\rho||\tau_{S})=\frac{\text{sgn}(\alpha)}{\alpha-1}\log\sum_{j}p_{j}^{\alpha}q_{j}^{1-\alpha}\label{eq:RenyiDiv} \end{align} with $q_j=e^{-\beta E_j}/Z_S$ the eigenvalues of $\tau_S$. These laws having been established, their robustness under modifications of the framework has been the object of recent studies. Imperfections in the initial and target state can be accounted for by using a suitable family of smoothed generalized free energies \cite{Meer_2017}. If the evolution involves a catalyst, whose state is allowed to be correlated with that of the system at the end of the evolution, then only the monotonicity of $F_{1}$ survives as both a necessary and sufficient condition for the state transitions \cite{Mueller_2017}. In this paper, we explore the stability of these second laws under deviations from the exact conservation constraint \eqref{conserve}. We use a specific toy model of a thermalising channel, initial states that are diagonal in the eigenbasis of the Hamiltonian, and what is arguably the simplest form of perturbation. Even in this restricted context, we find that, however small the perturbation, there exist initial states whose evolution violates \textit{all} the second laws except for $\alpha=0$. 
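For states diagonal in the eigenbasis of $H_S$, the R\'enyi divergence \eqref{eq:RenyiDiv} is a classical divergence between the populations $\{p_j\}$ and $\{q_j\}$, and is easy to evaluate numerically. A minimal Python sketch for $\alpha>0$, $\alpha\neq 1$ (our own illustration, not the authors' code):

```python
import math

def renyi_divergence(p, q, alpha):
    """D_alpha(p||q) = 1/(alpha-1) * log(sum_j p_j^alpha * q_j^(1-alpha))
    for probability vectors p, q, restricted to alpha > 0, alpha != 1
    (where sgn(alpha) = 1)."""
    assert alpha > 0 and alpha != 1
    s = sum(pj ** alpha * qj ** (1 - alpha) for pj, qj in zip(p, q))
    return math.log(s) / (alpha - 1)
```

It vanishes if and only if $p = q$, so $F_\alpha(\tau_S||\tau_S)$ is the minimum; the case $F_1$ based on the Kullback-Leibler divergence is recovered in the limit $\alpha\to 1$.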
The violation is not only visible in single-shot realisations, but also in the average over several such realisations. The plan of the paper is as follows. In section \ref{sec:ToyMod}, we introduce the toy model and derive analytical results for the average over several realisations. Section \ref{sec:results} presents and discusses the results obtained for qubit (\ref{sec:qubit}) and qutrit systems (\ref{sec:qutrit}). The physical relevance of the model is discussed in Section \ref{sec:PhyRel} before concluding in Section \ref{sec:conclusion}. \section{Toy model of the dynamics} \label{sec:ToyMod} \subsection{Structure of the model} We consider a collisional model that defines a thermalising channel \cite{Valerio_2002}. The system is a qudit, and the bath consists of qudits labelled by $r\in\{1,2,...,N\}$ with $N\gg 1$. The Hamiltonian is given by \ba H=H_S+H_R(\underline{g})&=&g_0 s_z^{(S)}\,+\,\sum_{r=1}^N g_r s_z^{(r)} \ea where $g_0>0$, $\underline{g}=(g_1,...,g_N)$ and where $s_z$ is the operator representing the spin in the direction $z$. For every qudit, the eigenstate of $s_z$ with eigenvalue $\hbar \big(j-\frac{d-1}{2}\big)$ is denoted by $\ket{j}$ with $j\in\{0,1,...,d-1\}$. The qudits of the reservoir are prepared in the thermal state $\tau_R=\bigotimes_r \tau_r$ with $\tau_r=e^{-\beta g_r s_z}/Z_r$. The collisional character of the model is seen in the interaction, which is taken to be of the form $ U=U_{S,N}U_{S,N-1}...U_{S,1}$. We assume that all two-body interactions $U_{S,r}$ are given by the partial swap with mixing angle $\theta$: with $\ket{jj'}\equiv\ket{j}_{S}\ket{j'}_{r}$, \ba U_{S,r}&=&\sum_{j=0}^{d-1} \Big[\ket{jj}\bra{jj} \nonumber\\ &&+\sum_{j'\neq j} \big(\cos\theta\ket{jj'}+i \sin\theta\ket{j'j}\big)\bra{jj'}\Big]\,. \label{partialswap} \ea \subsection{Free thermal operation and perturbation} The unperturbed model is the one where $g_r=g_0$ for all $r$. In this case, $[H,U]=0$, since $U$ couples only degenerate eigenstates of $H$. 
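As a numerical illustration of the partial swap \eqref{partialswap}, the following Python snippet (our own sketch, not from the paper) builds the matrix of $U_{S,r}$ in the product basis $\ket{j}_S\ket{j'}_r \mapsto jd+j'$; one can check directly that its columns are orthonormal, i.e. that the partial swap is unitary for any mixing angle $\theta$:

```python
import math

def partial_swap(d, theta):
    """Partial swap with mixing angle theta on two qudits of dimension d:
    |jj> -> |jj>, and |jj'> -> cos(theta)|jj'> + i sin(theta)|j'j> for j != j'."""
    n = d * d
    U = [[0j] * n for _ in range(n)]
    c, s = math.cos(theta), math.sin(theta)
    for j in range(d):
        for jp in range(d):
            col = j * d + jp
            if j == jp:
                U[col][col] = 1.0 + 0j
            else:
                U[j * d + jp][col] = c       # amplitude kept on |j j'>
                U[jp * d + j][col] = 1j * s  # amplitude swapped to |j' j>
    return U
```

For $\theta=\pi/2$ this is the full swap, and for $\theta=0$ the identity.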
Thus, this defines a free thermal operation. For any initial state, the dynamics \eqref{dynamics} can be solved analytically: \begin{align} \rho_{r}&=\rho_{r-1} \cos^2 \theta + \tau_S \sin^2 \theta \nonumber \\ &=\tau_S - \left(\tau_S-\rho_{0}\right)\cos^{2r} \theta. \label{eq:Analytic} \end{align} In particular, the state of the system converges to the thermal state $\tau_S$ in the limit $N\rightarrow\infty$. Recently, B\"{a}umer et al.~\cite{Baumer_2017} built on this model to derive an extension of the optimal protocol for work extraction \cite{Skrzypczyk_2014}. In their scheme, there are additional storage qudits. Even if the unitary is imperfect and thermalisation is partial, the maximal amount of energy $\Delta F_{1}=F_{1}(\rho||\tau_{S})-F_{1}(\tau_{S}||\tau_{S})$ can be extracted with sufficiently many steps. In order to study deviations from \eqref{conserve}, we consider the scalar perturbation \ba g_r\equiv g(\delta_r)&=&g_0\,(1+\delta_r) \ea where each $\delta_r$ is a random number drawn from a Gaussian distribution $G(\delta)$ centered at $\overline{\delta} = 0$ with variance $\overline{\delta^2}=\Delta^2$. The physical relevance of this perturbation is discussed in section \ref{sec:PhyRel}. The eigenstates of $H$ remain the same as in the unperturbed case; however, as soon as $\delta_r\neq 0$, $\ket{j}_{S}\ket{j'}_{r}$ and $\ket{j'}_{S}\ket{j}_{r}$ are no longer degenerate. Then, $[H,U]\neq 0$ as desired. Nevertheless $\mathrm{tr}(HU\rho\otimes\tau_{R}U^{\dagger})\approx\mathrm{tr}(H\rho\otimes\tau_R)$ because the fluctuations will cancel on average. The collisions are now described by a stochastic map which yields individual random trajectories for the system with no fixed point. There is no closed-form analytical expression for the dynamics of a specific realisation. 
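For diagonal states, the unperturbed channel acts independently on each population, so the recursion and the closed form of Eq.~\eqref{eq:Analytic} can be checked numerically with a scalar sketch (our own illustration; variable names are ours):

```python
import math

def iterate_population(p0, q, theta, r):
    """Apply the collision map p -> p*cos^2(theta) + q*sin^2(theta) r times,
    where q is the corresponding population of the thermal state tau_S."""
    p = p0
    for _ in range(r):
        p = p * math.cos(theta) ** 2 + q * math.sin(theta) ** 2
    return p

def closed_form_population(p0, q, theta, r):
    """Closed form of Eq. (eq:Analytic): p_r = q - (q - p0) * cos(theta)^(2r)."""
    return q - (q - p0) * math.cos(theta) ** (2 * r)
```

Both agree for every $r$, and the population converges to $q$ as $r\to\infty$, i.e. to full thermalisation.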
However, for the ensemble-averaged states \ba\label{ensavg} \overline{\rho}_r&=&\int_{-\infty}^{\infty}G(\delta)\rho_r(\delta)\,\mathrm{d}\delta \ea the dynamics is \begin{align} \overline{\rho}_{r}&= \overline{\tau}- \left(\overline{\tau}-\rho_{0}\right) \cos^{2r} \theta\label{eq:Analytic_Noise} \end{align} where the ensemble-averaged thermal state $\overline{\tau}$ is obtained by replacing $\rho_r(\delta)$ with $\tau(\delta)=e^{-\beta g_0(1+\delta)s_z}/\mathrm{tr}(e^{-\beta g_0(1+\delta)s_z})$ in \eqref{ensavg}. Since $\tau(\delta)$ is not a linear function of $\delta$, $\overline{\tau}$ is different from $\tau_S$, and the latter appears in the expressions of the free energies. This is going to be a key observation for the interpretation of our results. \section{Results and discussion} \label{sec:results} In what follows, we shall particularize our study to qubits, then to qutrits. We shall study $D_{\alpha}(\rho||\tau_{S})$, omitting the energy factor $k_{\rm B}T$ and the constant offset $\log{Z_{S}}$. For every step $r$, we shall focus on three quantities: \begin{itemize} \item The single-shot values ${D}_{\alpha}(\rho_{r}||\tau_{S})$ for some specific realisation of the $\underline{\delta}$, obtained numerically \item The values $D_{\alpha}(\overline{\rho}_{r}||\tau_{S})$ for the ensemble-averaged states, obtained from \eqref{eq:Analytic_Noise}. \item The ensemble averages $\overline{D}_{\alpha}(\rho_{r}||\tau_{S})$, obtained numerically. \end{itemize} The $D_{\alpha}(\overline{\rho}_{r}||\tau_{S})$ are the most likely candidates for observable quantities, whereas the other two assume that single-shot measurements of free energy are possible. \subsection{Qubit Systems} \label{sec:qubit} We start with $d=2$. Before discussing the behavior of the free energies under perturbation, let us have a look at the thermal states. The reference state $\tau_S$ has a ground-state occupation $q_0=(1+e^{-\beta\hbar g_0})^{-1}$. 
The ensemble-averaged thermal state $\overline{\tau}$ has \ba \overline{q}_0&=&q_0\int G(\delta)\frac{1+e^{-\beta\hbar g_0}}{1+e^{-\beta\hbar g_0(1+\delta)}}\,d\delta\,. \ea For $\beta>0$, it is easy to prove that $\overline{q}_0\leq q_0$ \footnote{The simplest way is to notice that $\int G(\delta)f(\delta)\,d\delta=\int G(\delta)f_{\textrm{even}}(\delta)\,d\delta$ with $f_{\textrm{even}}(\delta)=\frac{1}{2}(f(\delta)+f(-\delta))$. For our case, with $e^{-\beta\hbar g_0}\leq 1$, it is elementary to prove that the maximum of $f_{\textrm{even}}(\delta)$ is $f_{\textrm{even}}(0)=1$, and that the function decreases monotonically towards $\lim_{\delta\rightarrow\pm\infty}f_{\textrm{even}}(\delta)=\frac{1}{2}(1+e^{-\beta\hbar g_0})$.}: in other words, $\overline{\tau}$ is more mixed than $\tau_S$. Let us now look at the dynamics \eqref{dynamics} of a state diagonal in the eigenbasis of $H_S$. In Fig.~\ref{fig:Finf_Steps_Qubits}, we plot $D_{\infty}(\rho_{r}||\tau_{S})$ for the ground state as the initial input state and for some choice of the parameters. We see an initial smooth decrease, then a partial increase which, for the case of single-shot realisations, is accompanied by fluctuations. By inspection, we have found that the behavior is qualitatively the same across parameter space, with both the amount of final increase and the size of the fluctuations being larger for larger values of $\alpha$ and for larger values of $\beta$ (low temperature). \begin{figure} \caption{In these plots, $\hbar g_0=2$, $\Delta^2=0.1$, $\beta=1$, $\theta=0.1$ and $\rho_{0}$ is the ground state.} \label{fig:Finf_Steps_Qubits} \label{fig:Finf_p0_Qubits} \end{figure} In order to understand this behavior, we plot $D_{\alpha}(\rho_{r}||\tau_{S})$ as a function of $p_0$, which is the only free parameter for states that are diagonal in the eigenbasis of $H_S$ (Fig.~\ref{fig:Finf_p0_Qubits}). 
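The inequality $\overline{q}_0\leq q_0$ can also be verified numerically; below is a rough Riemann-sum sketch of the Gaussian average (our own illustration; a quadrature routine would do just as well):

```python
import math

def q0(beta_hg0):
    """Ground-state population of a thermal qubit; beta_hg0 = beta*hbar*g0."""
    return 1.0 / (1.0 + math.exp(-beta_hg0))

def q0_bar(beta_hg0, var, n=4001, width=8.0):
    """Gaussian average of q0(beta_hg0*(1+delta)) over delta ~ N(0, var),
    approximated by a Riemann sum on [-width*sd, width*sd]."""
    sd = math.sqrt(var)
    num = den = 0.0
    for k in range(n):
        delta = -width * sd + 2.0 * width * sd * k / (n - 1)
        w = math.exp(-delta * delta / (2.0 * var))
        num += w * q0(beta_hg0 * (1.0 + delta))
        den += w
    return num / den
```

For the parameters of Fig.~\ref{fig:Finf_Steps_Qubits} ($\beta\hbar g_0=2$, $\Delta^2=0.1$) one finds $\overline{q}_0$ slightly below $q_0\approx 0.881$, confirming that $\overline{\tau}$ is more mixed than $\tau_S$.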
From this plot, most of the previously observed features become very transparent: we have set the evolution to start with $p_0=1$ (ground state) and we know that it must end at $p_0=\overline{q}_0<q_0$. It must therefore pass through $p_0=q_0$, where all the $D_{\alpha}$ are equal to zero. Thus, for every $\alpha>0$ and for every $\Delta^2>0$ (that is, however small the perturbation), $F_{\alpha}(\overline{\rho}_{r}||\tau_{S})$ will decrease monotonically if and only if the initial state has $p_0\leq q_0$. In other words, all the second laws for the $F_{\alpha}(\overline{\rho}_{r}||\tau_{S})$ can be violated in our model, and this is seen by taking any initial state with $p_0>q_0$ \footnote{Recall that we have assumed $\beta>0$: if one were willing to consider negative temperatures, $\overline{q}_0\geq q_0$ would hold and the conditions would be reversed.}. We are left to explain the fluctuations of the single-shot trajectory obtained for a configuration of the $\delta_r$ chosen at random. We see in Fig.~\ref{fig:Finf_p0_Qubits} that the gradient of $F_\infty$ is very steep for $p_0<q_0$, in the vicinity of $q_0$, and less steep elsewhere. Thus, the fluctuations become significant only after the state crosses $\tau_S$. For our choice of parameters, the target state $\overline{\tau}$ is very close to $\tau_S$, and this is why the fluctuations do not subside. The generally low gradient of $F_1$ in the region of our trajectory, also shown in Fig.~\ref{fig:Finf_p0_Qubits}, explains why the fluctuations would be less prominent for that free energy. \subsection{Qutrit Systems} \label{sec:qutrit} We repeat a similar study for qutrits. Because with qutrits one can do all that can be done with qubits, the main conclusion will be the same: there exist initial states such that all the second laws, except for $\alpha=0$, are violated by an arbitrary small amount of perturbation. The picture however becomes richer and is worth presenting. 
\begin{figure} \caption{In these plots, $\hbar g_0=2$, $\Delta^2=0.1$, $\beta=1$ and $\theta=0.1$.} \label{fig:Finf_Steps_Qutrits} \label{fig:Contour_Qutrits} \end{figure} The behavior of the trajectories given the ground state as the initial state (Fig.~\ref{fig:Finf_Steps_Qutrits}) is the one we are familiar with. Like for qubits, it is very clarifying to plot the trajectories and the contour lines of the $F_\alpha$ in the parameter space. For diagonal states of qutrits, the parameter space is two-dimensional. We choose a parametric plot described in detail in Appendix \ref{apptriangles}. For the partial swap interaction, the time evolution of ensemble-averaged state \eqref{eq:Analytic_Noise} traces a straight line in the parametric plot, between the initial state and $\overline{\tau}$. Each single-shot realization corresponds to a noisy trajectory around that line. An increase of $F_{\alpha}$ will be observed whenever the trajectory crosses some contour lines twice. This is the case for the trajectories plotted in Fig.~\ref{fig:Finf_Steps_Qutrits}, as shown in the parametric plot of Fig.~\ref{fig:Contour_Qutrits}. Like for the case of qubits, all the second laws for $\alpha>0$ are violated if $\tau_S$ lies on the line connecting the initial and the final state. However, other trajectories may be such that only some of the second laws are violated. For instance, by taking an initial state that has the same $p_2$ as $\overline{\tau}$, the linear trajectory follows one of the contour lines of $F_{\infty}$ while violating all the second laws for finite $\alpha$ (solid light green line in Fig.~\ref{fig:Contour_Qutrits}). This example is peculiar: on a generic trajectory, the second law associated to $F_{\infty}$ seems to be the easiest to violate (larger increase, larger fluctuations), while here it is the only one that is respected. 
It must also be kept in mind that the linear trajectories are proper to the choice \eqref{partialswap} of $U_{S,r}$: modifications of the collisional model would lead to curved trajectories in the parametric plot (Appendix \ref{appothers}). \section{Physical relevance of the model} \label{sec:PhyRel} The collisional model that we studied is unquestionably a toy model, so it should not be over-interpreted. In particular, in a full treatment, the unitary $U$ should derive from an interaction Hamiltonian. That being said, it can be related to commonplace physical situations, and the differences do not seem to alter the heart of the matter. A first possible physical reading is that the qudits are spins immersed in an external magnetic field $B$ pointing in the $z$ direction. In this case, the coupling is given by $g=\gamma B$ where $\gamma$ is the gyromagnetic factor. The fact that $g$ depends on $r$ can be attributed to spatial inhomogeneity of the field: in this case, however, the interaction should not be by contact, because if the two qudits are at the same position they should feel the same field. It could also be attributed to inhomogeneities in $\gamma$ due to different chemical environments. A second reading is possible: since the perturbation is scalar, it appears only in products $\beta g_0 (1+\delta)$ and could therefore be attributed to $\beta$ rather than to the coupling. The auxiliary qudits are thermalised, so they have been in contact with a thermal bath. Attributing the inhomogeneity to $\beta$ amounts to saying that the temperature in that bath is not homogeneous. In this reading, $[H,U]=0$, but the $\tau_r$ are thermal states at different temperatures. Thus, all the second laws can also be violated by departing from the free resources in this way. \section{Conclusion} \label{sec:conclusion} We have studied the robustness of the second laws of thermal operations under the relaxation of the framework. 
The study is based on a toy collisional dynamics, and the deviation from the framework is in the form of inhomogeneous scalar parameters that induce $[H,U]\neq 0$. We observe that all the second laws with $\alpha>0$ are violated for arbitrarily low perturbations. The violation is visible in single-shot trajectories, in the free energy of the ensemble-averaged state, and in the ensemble-averaged values of the free energy. A representation in parametric plots provides a compelling picture of the origin of this behavior. While the violation is usually more prominent for $F_{\infty}$, it is also present for $F_1$, which is supposed to capture the ``normal version'' of the second law in the resource theory framework. There is nothing paradoxical in violating theorems by departing from their assumptions. Nonetheless, in the context of our model, the departure amounts to adding a small inhomogeneity in rather simple external environments: the phenomenological second law has acquitted itself quite well in describing far more uncontrolled situations like wet environments (see \cite{Wang_2013} for a recent result). These results indicate that the second laws of thermal operations cannot be used in the same way as the usual second law of thermodynamics. They add to the challenges posed by extending the resource theory of thermodynamics to imperfect reservoirs \cite{Sparaciari_2017}. \section*{acknowledgments} We acknowledge illuminating discussions with and useful feedback from Philippe Faist, Rodrigo Gallego, Kavan Modi, Markus M\"uller, Nelly Ng and Henrik Wilming. This research is supported by the Singapore Ministry of Education Academic Research Fund Tier 3 (Grant No. MOE2012-T3-1-009); by the National Research Fund and the Ministry of Education, Singapore, under the Research Centres of Excellence programme; and by the John Templeton Foundation Grant 60607 ``Many-box locality as a physical principle''. 
\begin{appendix} \section{Further observations for qutrits (and higher dimensions)} \subsection{The geometry of the $F_{\alpha}$} \label{apptriangles} Diagonal states of qutrits are parametrised by $p_0$, $p_1$ and $p_2=1-p_0-p_1$. In order to illustrate the contour lines of the $F_{\alpha}$ (the same as those of the $D_\alpha$), we choose a representation in which the three eigenstates are at the cusps of an equilateral triangle: the ground state is at the bottom right corner, the intermediate state is at the top, and the highest energy state is at the bottom left corner (Fig.~\ref{fig:Contour_Sweep}). The minimum is always achieved for the thermal state $\tau_S$, represented by a black dot in the figure. Any free evolution will be represented by a trajectory that crosses each contour line at most once, for all values of $\alpha$. The trajectories need not be straight lines, as explained in the next subsection. \onecolumngrid \begin{center} \begin{figure} \caption{(Colour online) Contour plots of $F_{\alpha}$ for $\beta=0$ (infinite temperature, top) and $\beta=0.75$ (bottom). The apex of the triangle stands for $p_{2}=1$, the bottom left corner for $p_{1}=1$ and the bottom right corner for $p_{0}=1$. The black dot denotes the thermal state of the system. The inset shows a magnified view of the contour lines closer to the thermal state.} \label{fig:Contour_Sweep} \end{figure} \end{center} \twocolumngrid \subsection{Modifications of the collisional model} \label{appothers} Throughout the paper, we have assumed that all the two-body interactions $U_{S,r}$ are given by the partial swap with mixing angle $\theta$, which is but a subset of the larger and more general set of energy conserving operations. The most general energy preserving unitaries are those that do not cause any mixing between the energy subspaces. 
This is just a direct sum of the unitaries acting on each energy subspace \begin{align} U=\bigoplus_{k=0}^{2d-2} U_k \end{align} where $U_{k}$ is a unitary on the span$\{\ket{jj'}\}$ such that $j+j'=k$ and $d$ is the dimension of the system. For instance, $U_{1}$ is just a generalized unitary on the subspace $\{\ket{01},\ket{10}\}$. One can determine the number of free parameters for each $U_k$ by squaring the dimension of that energy subspace. Therefore for energy subspaces of one dimension, the only free parameter will be a phase factor; for energy subspaces of two and three dimensions, any unitary with four and nine free parameters respectively will suffice. Hence in the case of qubits, there will be a total of six free parameters (five, if we exclude a global phase): the subspace dimensions are $1,2,1$, giving $1+4+1=6$. For qutrits, the subspace dimensions are $1,2,3,2,1$, giving $1+4+9+4+1=19$ free parameters (18, excluding a global phase). Under these more general unitaries, the free evolution does not need to be a straight line as defined by Eq.~\eqref{eq:Analytic} for the partial swap \eqref{partialswap}. For instance, with partial swaps of different mixing angles $\theta_{k}$ between each energy subspace, one can obtain curved trajectories as in Fig.~\ref{fig:Curved}. \begin{center} \begin{figure} \caption{(Colour online) Contour plots for $F_{1}$ (solid) and $F_{\infty}$ (dash-dotted) overlaid with the ensemble averaged trajectory of $\overline{\rho}_{r}$ (solid, red) for the interaction with $\beta=0$, $\hbar g_0=2$, $\Delta^2=0$, $\theta_{1}=0.075$, $\theta_{2}=0.05$ and $\theta_{3}=0.1$.} \label{fig:Curved} \end{figure} \end{center} \end{appendix} \end{document}
Truncated order-7 square tiling

In geometry, the truncated order-7 square tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t0,1{4,7}.

Truncated order-7 square tiling (Poincaré disk model of the hyperbolic plane)
• Type: Hyperbolic uniform tiling
• Vertex configuration: 8.8.7
• Schläfli symbol: t{4,7}
• Wythoff symbol: 2 7 | 4
• Symmetry group: [7,4], (*742)
• Dual: Order-4 heptakis heptagonal tiling
• Properties: Vertex-transitive

Related polyhedra and tiling

This tiling belongs to the *n42 symmetry mutation of truncated tilings with vertex configuration n.8.8, running through spherical (*242, *342), Euclidean (*442), compact hyperbolic (*542, *642, *742, *842, ...) and paracompact (*∞42) symmetries:
• Truncated figures: 2.8.8, 3.8.8, 4.8.8, 5.8.8, 6.8.8, 7.8.8, 8.8.8, ..., ∞.8.8
• n-kis figures: V2.8.8, V3.8.8, V4.8.8, V5.8.8, V6.8.8, V7.8.8, V8.8.8, ..., V∞.8.8

It also belongs to the family of uniform heptagonal/square tilings with symmetry [7,4], (*742): {7,4}, t{7,4}, r{7,4}, 2t{7,4}=t{4,7}, 2r{7,4}={4,7}, rr{7,4} and tr{7,4}, together with the snub and half forms sr{7,4} (symmetry [7,4]+, (742)), s{7,4} ([7+,4], (7*2)) and h{4,7} ([7,4,1+], (*772)). Their uniform duals are V7^4, V4.14.14, V4.7.4.7, V7.8.8, V4^7, V4.4.7.4, V4.8.14, V3.3.4.3.7, V3.3.7.3.7 and V7^7.

See also
• Uniform tilings in hyperbolic plane
• List of regular polytopes

External links
• Weisstein, Eric W. "Hyperbolic tiling". MathWorld.
• Weisstein, Eric W. "Poincaré hyperbolic disk". MathWorld.
\begin{document} \title[The behavior of $\pi$-submaximal subgroups]{The behavior of $\pi$-submaximal subgroups under homomorphisms with $\pi$-separable kernels} \author{Danila O. Revin} \address{Danila O. Revin \newline\hphantom{iii} Sobolev Institute of Mathematics, \newline\hphantom{iii} 4, Koptyug av. \newline\hphantom{iii} 630090, Novosibirsk, Russia } \email{[email protected]} \author{Andrei V. Zavarnitsine} \address{Andrei V. Zavarnitsine \newline\hphantom{iii} Sobolev Institute of Mathematics, \newline\hphantom{iii} 4, Koptyug av. \newline\hphantom{iii} 630090, Novosibirsk, Russia } \email{[email protected]} \maketitle {\small \begin{quote} \noindent{{\sc Abstract.} We explore the extent to which constructing the inductive theory of $\mathfrak{X}$-submaximal subgroups is possible. To this end, we study the behavior of $\pi$-submaximal subgroups under homomorphisms with $\pi$-separable kernels and construct examples where such behavior is irregular.} \noindent{{\sc Keywords:} $\mathfrak{X}$-maximal subgroup, $\mathfrak{X}$-submaximal subgroup, complete class, $\pi$-separable group } \noindent{\sc MSC2010: 20E28, 20E34 } \end{quote} } {\em In honor of the 50th birthday of Professor Andrey V.\,Vasil'ev} \section{Introduction} The present paper is concerned with the extent to which constructing the inductive theory of $\mathfrak{X}$-submaximal subgroups is possible. This concept was introduced by H.\,Wielandt during his talk at the Santa Cruz Conference on Finite Groups in \cite{80Wie} as a tool to study $\mathfrak{X}$-maximal subgroups of finite groups. We recall the background and formulate the main definitions and problems. All groups considered throughout the text are assumed to be finite. Since its inception in the papers by \'E.\,Galois and C.\,Jordan \cite{Galois,Jordan0,Jordan}, group theory has had the following as one of its central problems. 
Given a group $G$, find its subgroups possessing a specific property or, equivalently, belonging to a specific class ${\mathfrak X}$ of groups (for example, solvable, nilpotent, abelian, $p$-groups, etc.). If ${\mathfrak X}$ has good properties resembling those of solvable groups then to solve the general problem it suffices to know the so-called {\it maximal $\mathfrak{X}$-subgroups} (or {\em $\mathfrak{X}$-maximal subgroups}), i.\,e. the subgroups that are maximal by inclusion among those belonging to $\mathfrak{X}$. At the famous group theory conference held in Santa Cruz as early as 1979, H.\,Wielandt put forward the program to study the $\mathfrak{X}$-maximal subgroups in finite groups for the so-called complete classes $\mathfrak{X}$ \cite{80Wie}. According to his definition, a nonempty class $\mathfrak{X}$ of finite groups is {\it complete} if $\mathfrak{X}$ is closed with respect to taking subgroups, homomorphic images, and extensions. The latter means that $\mathfrak{X}$ includes every group $G$ with a normal subgroup $A$ such that $A, G/A\in\mathfrak{X}$. Other than the class of solvable groups, an important example of a complete class is the class of $\pi$-groups for any set $\pi$ of primes, i.\,e. the groups whose order is divisible only by primes in $\pi$. Unless otherwise stated, we will henceforth assume that $\mathfrak{X}$ is a fixed complete class. Wielandt viewed his program as a development of H\"older's program which replaced the study of an arbitrary finite group $G$ with the study of quotients of a subnormal (e.\,g., composition) series $$ G=G_0\geq G_1\geq\dots\geq G_n=1 $$ and, in place of a subgroup $H$ of $G$, its {\it projections} $$H^i=(H\cap G_{i-1})G_i/G_i$$ on the corresponding quotients $G^i=G_{i-1}/G_i$ were considered. Clearly, if all projections $H^i$ are $\mathfrak{X}$-maximal in $G^i$ then $H$ itself is $\mathfrak{X}$-maximal in $G$.
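As a concrete illustration of the $\pi$-group condition (an illustrative sketch, not part of the original paper; the helper names are mine), one can test on the level of group orders whether all prime divisors lie in $\pi$; closure under extensions is then visible arithmetically, since $|G|=|A|\cdot|G/A|$.

```python
# Illustrative helper (not from the paper): an integer n is a "pi-number"
# when every prime divisor of n lies in the set pi; a pi-group is a group
# whose order is a pi-number.

def prime_divisors(n):
    """Set of prime divisors of n, by trial division."""
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

def is_pi_number(n, pi):
    return prime_divisors(n) <= set(pi)

pi = {2, 3}
assert is_pi_number(24, pi)          # |S_4| = 2^3 * 3 is a {2,3}-number
assert not is_pi_number(168, pi)     # |PSL(2,7)| = 2^3 * 3 * 7 is not
# closure under extensions, on the level of orders: |G| = |A| * |G/A|
assert is_pi_number(24 * 6, pi)
```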
The problem is to see if the reverse holds, which is equivalent to asking whether the subgroups $H\cap A$ and $HA/A$ are $\mathfrak{X}$-maximal in a normal subgroup $A$ and the quotient $G/A$ if $H$ is $\mathfrak{X}$-maximal in $G$. The answer depends on the class $\mathfrak{X}$ and determines the extent to which the problem of finding $\mathfrak{X}$-maximal subgroups is inductive. The classes for which the answer is in the affirmative are listed in \cite{20Rev}: such are the class of all finite groups, the class of groups of order $1$, and the class of all $p$-groups for any prime $p$. For all the remaining cases, there exists a group $A$ with nonconjugate $\mathfrak{X}$-maximal subgroups. Wielandt proposed \cite[4.3]{80Wie} a rather general construction showing that, for any finite group $G$, there exists an epimorphism from the regular wreath product $G^*=A\wr G$ onto $G$ such that every (not only maximal!) $\mathfrak{X}$-subgroup of $G$ is the image of an $\mathfrak{X}$-maximal subgroup of~$G^*$. Wielandt's idea \cite{80Wie} was to consider instead of $\mathfrak{X}$-maximal subgroups the broader, yet substantial, concept of $\mathfrak{X}$-submaximal subgroups which ``behave well'' under intersections with normal and subnormal subgroups. A subgroup $H$ of $G$ is {\it $\mathfrak{X}$-submaximal} (or a {\it submaximal $\mathfrak{X}$-subgroup}) if there exists an embedding of $G$ as a subnormal subgroup into a group $G^*$ (in which case we say that $G$ is {\em subnormally embedded} in $G^*$) such that $H$ coincides with the intersection of $G$ and $K$ for a suitable $\mathfrak{X}$-maximal subgroup $K$ of $G^*$. Clearly, every $\mathfrak{X}$-maximal subgroup is also $\mathfrak{X}$-submaximal. The converse is not true, see \cite[p.~13]{18GuoRev1}. $\mathfrak{X}$-submaximal subgroups have the obvious inductive property: if $H$ is $\mathfrak{X}$-submaximal and $A$ is subnormal (in particular, normal) in~$G$ then $H\cap A$ is $\mathfrak{X}$-submaximal in~$A$.
Not every $\mathfrak{X}$-subgroup is submaximal. An obstruction here is the following \begin{thm*}[Wielandt--Hartley, {\cite[5.4(a)]{80Wie}}, {\cite[Theorem~2]{19RevSkrV}}] If $H$~is an $\mathfrak{X}$-sub\-maxi\-mal subgroup of $G$ then $N_G(H)/H$ includes no nontrivial $\mathfrak{X}$-subgroups. \end{thm*} It is due to this theorem that the concept of an $\mathfrak{X}$-submaximal subgroup becomes useful and efficient. For example, it helps to easily see that the $\mathfrak{X}$-maximal subgroups are determined uniquely up to conjugacy by their projections on the quotients of a subnormal series \cite[5.4(c)]{80Wie}, \cite[Corollary~1]{19RevSkrV}. In comparison, the similarly defined dual concept of an $\mathfrak{X}$-epimaximal subgroup which ensures ``good behaviour'' under homomorphisms turns out to lack content, because in any nontrivial situation it is equivalent to the concept of an $\mathfrak{X}$-subgroup \cite{20Rev}. It was shown in \cite{18GuoRev1,18GuoRev2} that the knowledge of $\mathfrak{X}$-submaximal subgroups in simple groups for a given class $\mathfrak{X}$ would make it possible to inductively construct the $\mathfrak{X}$-maximal subgroups in an arbitrary finite group and, consequently, would make great progress in solving the general problem. One can naturally ask to what extent the problem of finding $\mathfrak{X}$-submaximal subgroups is inductive. By this, we mean the following. Besides the fact that $\mathfrak{X}$-submaximal subgroups ``behave well'' with respect to intersections with (sub)normal subgroups, it is important to determine how they behave under homomorphisms. Due to the above, we cannot expect the image of an $\mathfrak{X}$-submaximal subgroup in a factor group to be $\mathfrak{X}$-submaximal: this fails even for the images of $\mathfrak{X}$-maximal subgroups. At the same time, generalizing a famous result by S.\,A.\,Chunikhin (see \cite[Ch. 
5, Theorem 3.7]{86Suz}) Wielandt showed that the following reduction theorem holds: \begin{thm*}[Wielandt, {\cite[12.9]{94Wie607}}] If $A$ is an $\mathfrak{X}$-separable normal subgroup of a group~$G$ (i.e., $A$ has a subnormal series with every factor either belonging to $\mathfrak{X}$ or to the class $\mathfrak{X}'$ that consists of all groups none of whose nonidentity subgroups belongs to~$\mathfrak{X}$), then the canonical epimorphism $G\rightarrow G/A$ maps an $\mathfrak{X}$-maximal subgroup to an $\mathfrak{X}$-maximal one and, furthermore, induces a bijection between the conjugacy classes of $\mathfrak{X}$-maximal subgroups in $G$ and $G/A$. \end{thm*} Therefore, the presence of a normal $\mathfrak{X}$-separable subgroup $A$ in $G$ enables us to replace the study of $\mathfrak{X}$-maximal subgroups in $G$ with a similar problem for the smaller group $G/A$. The inductiveness of the problem of finding $\mathfrak{X}$-submaximal subgroups depends, to a large extent, on whether a similar reduction theorem for $\mathfrak{X}$-submaximal subgroups holds, i.\,e. whether the following is true. \begin{conj}\label{conj_a} If $A$ is an $\mathfrak{X}$-separable normal subgroup of a finite group $G$ then the canonical epimorphism $G\rightarrow G/A$ always maps an $\mathfrak{X}$-submaximal subgroup of $G$ to an $\mathfrak{X}$-submaximal subgroup of $G/A$ and, vice versa, every $\mathfrak{X}$-submaximal subgroup of $G/A$ is the image of an $\mathfrak{X}$-submaximal subgroup of $G$. \end{conj} This conjecture takes a central place in the present paper. The original idea of the authors was to attempt confirming both Conjecture \ref{conj_a} and \begin{conj}\label{conj_b} If $A$ is a normal subgroup and $H$ an $\mathfrak{X}$-submaximal subgroup of a finite group $G$ then $H$ is $\mathfrak{X}$-submaximal in $N_G(H\cap A)$.
\end{conj} In relation to Conjecture \ref{conj_b}, we note that if $H$ is an $\mathfrak{X}$-submaximal subgroup of $G$ and $H$ is contained in a subgroup $M$ of $G$ then $H$ is not necessarily $\mathfrak{X}$-submaximal in $M$. Say, an $\mathfrak{X}$-submaximal but not $\mathfrak{X}$-maximal subgroup $H$ which is contained in a strictly larger $\mathfrak{X}$-subgroup $M$ is not $\mathfrak{X}$-submaximal in $M$. If both Conjectures \ref{conj_a} and \ref{conj_b} turned out to be true, the problem of finding $\mathfrak{X}$-submaximal subgroups in an arbitrary finite group could be considered inductive. Let us explain why. Suppose $G$ is not simple and we know how to find the $\mathfrak{X}$-submaximal subgroups in all groups whose order is smaller than the order of $G$. Let~$A$ be a proper normal subgroup of $G$. For any $\mathfrak{X}$-submaximal subgroup $H$ of $G$, the intersection $H\cap A$ is $\mathfrak{X}$-submaximal in $A$ and is therefore known. We may consider the subgroup $N_G(H\cap A)$ in which the $\mathfrak{X}$-subgroup $H$ is submaximal due to Conjecture~\ref{conj_b}. Furthermore, $N_G(H\cap A)$ includes the normal subgroup $N_A(H\cap A)$ which is $\mathfrak{X}$-separable by the Wielandt--Hartley theorem. Thus, the validity of Conjecture~\ref{conj_a} would reduce finding the $\mathfrak{X}$-submaximal subgroups in $N_G(H\cap A)$ to a similar problem for $N_G(H\cap A)/N_A(H\cap A)\simeq AN_G(H\cap A)/A$. The quotient $G/A$ acts naturally on the set of conjugacy classes of $\mathfrak{X}$-submaximal subgroups of~$A$, and $AN_G(H\cap A)/A$ is the stabilizer of the class containing $H\cap A$. Therefore, the problem of finding $\mathfrak{X}$-submaximal subgroups of $G$ is reduced to a similar problem for the stabilizers in $G/A$ of the conjugacy classes of $\mathfrak{X}$-submaximal subgroups of~$A$.
We note that, for a simple group $S$, the $\mathfrak{X}$-submaximal subgroups are precisely the intersections of $S=\operatorname{Inn}(S)$ and $\mathfrak{X}$-maximal subgroups of $\operatorname{Aut}(S)$ \cite[5.3]{80Wie}, \cite[Proposition~7]{18GuoRev}. The fact that Conjecture \ref{conj_a} does not hold was established in \cite{20RevZav}, where an example was constructed of a group $G$ with a normal abelian $2$-subgroup $A$ and a submaximal but not maximal $\{2,3\}$-subgroup $H$ whose image with respect to the canonical epimorphism $G\rightarrow G/A$ is not $\{2,3\}$-submaximal in~$G/A$. The kernel of this homomorphism is an $\mathfrak{X}$-subgroup for the class $\mathfrak{X}$ of all $\{2,3\}$-groups. In the same paper, it was announced that there exist epimorphisms whose kernel belongs to $\mathfrak{X}'$ and the image of an $\mathfrak{X}$-submaximal subgroup is not $\mathfrak{X}$-submaximal. We construct an infinite series of such examples in the final section of this paper. The situation with the preimages of $\mathfrak{X}$-submaximal subgroups is completely analogous, which makes Conjecture~\ref{conj_a} invalid in both directions. When constructing the examples, we consider a group $G$ with a unique minimal normal subgroup $V$ which is abelian, is not contained in the center of $G$, and is such that $G/V$ is a nonabelian simple group. In order to justify the examples, we need to show that some $\pi$-subgroup of $G$ is not submaximal. We make use of Proposition~\ref{one} stated and proved in Section~\ref{Sec3}, which implies that the $\pi$-submaximal subgroups of $G$ are exhausted by the intersections of $G$ with $\pi$-maximal subgroups of the groups $G^*$ satisfying $G=\operatorname{Inn}(G)\trianglelefteqslant G^* \leqslant\operatorname{Aut}(G)$. We emphasize that the subnormal embedding of $G$ into $G^*$ in the definition of an $\mathfrak{X}$-submaximal subgroup cannot in general be substituted with a normal embedding. 
An appropriate series of examples was pointed out by A.V.\,Vasil'ev, see \cite[Section~2]{19RevSkrV}. Thus, Proposition~\ref{one} is of interest in its own right and can be considered as one of the main results of this paper. Although the problem of whether Conjecture~\ref{conj_b} is true loses its relevance in light of the present results along with~\cite{20RevZav}, it still remains open. Undoubtedly, the examples constructed in~\cite{20RevZav} and in this paper by no means imply that the study of $\mathfrak{X}$-submaximal subgroups is futile. Instead, they lead to the realization as to why the central problem in Wielandt's program and in the topic related to the search of $\mathfrak{X}$-maximal subgroups for a given complete class~$\mathfrak{X}$ is the description of $\mathfrak{X}$-submaximal subgroups in simple groups or, equivalently, the description of $\mathfrak{X}$-maximal subgroups in almost simple groups. Such a description would make it possible to find the $\mathfrak{X}$-maximal subgroups in arbitrary finite groups. \section{Preliminaries} To simplify the exposition, we restrict ourselves to the case where $\mathfrak{X}$ is the class of $\pi$-groups for an arbitrary set $\pi$ of primes. The following lemma was first formulated in \cite[Statement 5.4(a)]{80Wie}. A proof can be found in \cite[Theorem 2]{19RevSkrV}. \begin{lem}[The Wielandt--Hartley theorem, strong form]\label{WH-strong} Let $A$ be a subnormal and $K$ a $\pi$-maximal subgroup of a finite group $G$. Then $N_A(K\cap A)/(K\cap A)$ is a $\pi'$-group. \end{lem} \begin{lem}[{\cite[Theorems 12.4 and 12.7]{94Wie607}}]\label{pimhom} If $U$ is a normal $\pi$- or $\pi'$-subgroup of a group $G$ then $KU/U$ is $\pi$-maximal in $G/U$ for every $\pi$-maximal subgroup $K$ of $G$. \end{lem} \begin{lem}\label{three} Let $G\trianglelefteqslant \trianglelefteqslant G^*$ and let $H=K\cap G$ for some $\pi$-maximal subgroup $K$ of $G^*$.
Suppose that $U$ is a normal $\pi'$-subgroup of $G^*$ and $\,\overline{\phantom{a}}:G^*\to G^*/U$ is the canonical epimorphism. Then $\overline{H}=\overline{K}\cap\overline{G}$. \end{lem} \begin{proof} Clearly, $\overline{H}=\overline{K\cap G}\leqslant \overline{K}\cap \overline{G}$. Suppose that $\overline{H}< \overline{K}\cap \overline{G}$. Since $G\trianglelefteqslant \trianglelefteqslant G^*$, we have $H\trianglelefteqslant \trianglelefteqslant K$, $\overline{H} \trianglelefteqslant \trianglelefteqslant \overline{K}$, and $\overline{H} \trianglelefteqslant \trianglelefteqslant \overline{K}\cap \overline{G}$. Since $\overline{H}<\overline{K}\cap \overline{G}$, the index of $\overline{H}$ in $N_{\overline{G}}(\overline{H})$ is divisible by a prime $p\in \pi$. We have $$ N_{\overline{G}}(\overline{H})=N_G(H)U/U=\overline{N_G(H)}. $$ Indeed, we clearly have $\overline{N_G(H)}\leqslant N_{\overline{G}}(\overline{H})$. Suppose $x\in N_G(HU)$ (equivalently, $\overline{x}\in N_{\overline{G}}(\overline{H})$). Then $H^xU=HU$ and the Schur--Zassenhaus theorem implies that there is $u\in U$ such that $H^x=H^u$, i.\,e. $xu^{-1}\in N_G(H)$, $x\in N_G(H)U$, and $\overline{x}\in\overline{N_G(H)}$. Now, we have $$ p\mid \left|N_{\overline{G}}(\overline{H}):\overline{H}\right|=\left|\frac{N_G(H)U}{HU}\right|=\frac{|N_G(H)|}{|N_U(H)|}:\frac{|H|}{|H\cap U|}=\left|\frac{N_G(H)}{H}\right|:\frac{|N_U(H)|}{|H\cap U|}. $$ Since $|N_U(H)|$ is a $\pi'$-number, we have $p\mid \left|N_G(H)/H\right|$ contrary to Lemma \ref{WH-strong}. \end{proof} Let $\mathcal{X}$ be a linear representation of a group $G$. Given $\gamma\in \operatorname{Aut}(G)$, the conjugate representation $\mathcal{X}^\gamma$ is defined by $\mathcal{X}^\gamma(g^\gamma)=\mathcal{X}(g)$ for all $g\in G$. \begin{lem}\label{norim} Let $G$ be a finite group such that $Z(G)=1$. Let $\mathcal{X}:G\to \operatorname{GL}_n(F)$ be a faithful absolutely irreducible representation of $G$ over a field $F$. 
Suppose that $\mathcal{X}$ is not equivalent to $\mathcal{X}^{\gamma}$ for every $\gamma\in \operatorname{Aut}(G)\setminus\operatorname{Inn}(G)$. Then $N_{\operatorname{GL}_n(F)}(\operatorname{Im}\mathcal{X})\cong C\times G$ where $C\cong F^\times$ is the scalar subgroup of $\operatorname{GL}_n(F)$. \end{lem} \begin{proof} Denote $M=\operatorname{Im}\mathcal{X}\cong G$ and $N=N_{\operatorname{GL}_n(F)}(M)$. Since $\mathcal{X}$ is absolutely irreducible, we have $C=C_{\operatorname{GL}_n(F)}(M)$ by Schur's lemma. Clearly, $C\trianglelefteqslant N$ and $C\cap M =1$ due to $Z(G)=1$. Therefore, $G\leqslant \overline{N}\leqslant \operatorname{Aut}(G)$, where we denote by $\overline{\phantom{m}}: N\to N/C$ the canonical epimorphism. It remains to see that $G=\overline{N}$. Assume to the contrary that there is $\gamma\in \operatorname{Aut}(G)\setminus\operatorname{Inn}(G)$ such that $\overline{t}=\gamma$ for some $t\in N$. The identification of $G$ and $\operatorname{Inn}(G)$ shows that $\mathcal{X}$ and $\overline{\phantom{m}}$ are mutually inverse isomorphisms between $M$ and $G$. Namely, $x=\mathcal{X}(\overline{x})$ for every $x\in M$ and $\overline{\mathcal{X}(g)}=g$ for every $g\in G$. Since $t$ normalizes $M$, we have $\mathcal{X}(g)^t\in M$ and so $$ \mathcal{X}(g)^t=\mathcal{X}(\overline{\mathcal{X}(g)^t})= \mathcal{X}(\overline{\mathcal{X}(g)}^{\,\overline{t}})=\mathcal{X}(g^\gamma)=\mathcal{X}^{\gamma^{-1}}(g) $$ which implies that the conjugate representation $\mathcal{X}^{\gamma^{-1}}$ is equivalent to $\mathcal{X}$. This contradicts the hypothesis, because $\gamma^{-1}\in\operatorname{Aut}(G)\setminus\operatorname{Inn}(G)$. The claim follows. \end{proof} \section{Proposition}\label{Sec3} The following result includes \cite[Proposition~1]{20RevZav} as a particular case. \begin{prop}\label{one} Suppose a finite group $G$ has a unique minimal normal subgroup~$V$ which is abelian and $V\nleqslant \operatorname{Z}(G)$.
Suppose also that $L=G/V$ is a nonabelian simple group. Let $H$ be a $\pi$-submaximal subgroup of $G$ and let $G^*$ have minimal order among the groups such that $G\trianglelefteqslant \trianglelefteqslant G^*$ and $H=K\cap G$ for a $\pi$-maximal subgroup $K$ of $G^*$. Then $G\trianglelefteqslant G^*$ and $C_{G^*}(G)=1$. \end{prop} \begin{proof} Since $V$ is abelian minimal normal, it is an elementary abelian $p$-group for a prime $p$. Denote $W=\langle V^g\mid g \in G^*\rangle$, the normal closure of $V$ in $G^*$. Note that $W$ is a $p$-group as the subgroup generated by subnormal $p$-subgroups. If $p\in \pi$ then $W\leqslant K$, and if $p\not\in\pi$ then $W\cap K= 1$. Denote by $\overline{\phantom{a}}:G^*\to G^*/W$ the canonical epimorphism. Set $X=\langle G^g\mid g \in G^*\rangle$. The minimality of $|G^*|$ implies that $G^*=KX$. Moreover, $\overline{X}$ is minimal normal in $\overline{G^*}$. In particular, $GW\trianglelefteqslant X$. We show that every minimal normal subgroup $U$ of $G^*$ such that $U\nleqslant W$ is a $\pi$-group. Indeed, we have $U\cap W=1$ and $\overline{U}\cong U$. Also, $\overline{U}$ is minimal normal in $\overline{G^*}$, for if $\overline{M}\trianglelefteqslant \overline{G^*}$ and $M\leqslant U$ then $$ [M,G^*]\leqslant U\cap MW = (U\cap W)M=M $$ and $M\trianglelefteqslant G^*$. Therefore, either $\overline{U}=\overline{X}$ or $\overline{U}\cap \overline{X}=1$. In the former case, we have $X=UW\cong U\times W$, which is impossible in view of the structure of $G$. In the latter case, $U\cap X\leqslant U\cap W = 1$ and $U$ can be embedded into $G^*/X\cong K/(K\cap X)$, i.\,e. $U$ is a $\pi$-group. Consequently, every minimal normal subgroup $U$ of $G^*$ is either a $\pi$- or a $\pi'$-group, for we have either $U\nleqslant W$ or $U\leqslant W$, and $W$ is a $\pi$- or a $\pi'$-group depending on whether $p\in \pi$. We show that $W$ is a unique minimal normal subgroup of $G^*$. Let $U$ be a minimal normal subgroup of $G^*$. 
It suffices to show that $U\cap V\ne 1$ and so $V\leqslant U$ and $W=\langle V^g\mid g \in G^*\rangle\leqslant U$. Assume to the contrary that $U\cap V= 1$. Denote by $\widetilde{\phantom{a}}:G^*\to G^*/U$ the canonical epimorphism. By Lemma \ref{pimhom}, $\widetilde{K}$ is a $\pi$-maximal subgroup of $\widetilde{G^*}$. We have $G\cap U\leqslant V\cap U=1$ as $V$ is the unique minimal normal subgroup of $G$. It follows that $G\cap U=1$ and $G\cong \widetilde{G}$. We show that $\widetilde{H}=\widetilde{K}\cap \widetilde{G}$. This follows from Lemma \ref{three} if $U$ is a $\pi'$-group. If $U$ is a $\pi$-group, we have $U\leqslant K$ and $GU\cap K=(G\cap K)U=HU$. This again implies $\widetilde{H}=\widetilde{K}\cap \widetilde{G}$. We now have a contradiction with the minimality of $|G^*|$, and so $U\cap V\ne 1$ as claimed. Let us now show that $W =VC_W(G)$. By Clifford's theorem and the subnormality of $G$ in $G^*$, the $\mathbb{F}_pG$-module $W$ is completely reducible. An arbitrary irreducible submodule $U$ of $W$ that is not contained in $C_W(G)$ must be contained in $V$. This follows since $W$ normalizes $G$, see \cite[Theorem 2.6]{08Isa}, and $$ U=[U,G]\leqslant U\cap G\leqslant W\cap G=V. $$ Therefore, $W =VC_W(G)$. We now prove that $C_{G^*}(X)=1$. Assuming the contrary we have $W\leqslant C_{G^*}(X)\trianglelefteqslant G$, because $W$ is the unique minimal normal subgroup of $G^*$. Then $V\leqslant W\cap G\leqslant C_G(G)=\operatorname{Z}(G)$ which contradicts the assumption that $V\nleqslant \operatorname{Z}(G)$. Denote $N=N_K(GW)$, $G^0=N_{G^*}(GW)$. We have $G^0=NX$, since $X\leqslant G^0$ and $G^*=KX$. We show that if $M$ is a $\pi$-maximal subgroup of $G^0$ that contains $N$ then $H=G\cap M$. Firstly, we have $H=G\cap N$, because $H\leqslant N$ and $G\cap N\leqslant G\cap K=H$. Secondly, $G^0=NX=MX$ by the above. Let $1=g_1,\ldots,g_m$ be a right transversal of $N$ in $K$ which will also be a right transversal of $G^0$ in $G^*$. 
We set $M_i=(M\cap GW)^{g_i}$. For every $g\in K$, there exists $\sigma\in \operatorname{Sym}_m$ and $t_1,\ldots,t_m\in N$ such that $g_ig=t_ig_{i\sigma}$. Therefore, $$ M_i^g=(M\cap GW)^{g_ig}=(M\cap GW)^{t_ig_{i\sigma}}=(M\cap GW)^{g_{i\sigma}}=M_{i\sigma}. $$ It follows that $K$ normalizes the subgroup $M_X=\langle M_i\mid i=1,\ldots, m\rangle$. Hence, $\overline{K}$ normalizes $\overline{M_X}$. Note that $\overline{M_X}$ is a $\pi$-group, since $[\overline{M_i},\overline{M_j}]=1$ for $i\ne j$ and the $M_i$'s are $\pi$-groups. Also, $\overline{M_X}\leqslant \overline{K}$, because $\overline{K}$ is $\pi$-maximal in $\overline{G^*}$. Now, if $W$ is a $\pi$-group, we have $M_X\leqslant K$ and $$ M\cap G\leqslant M\cap GW=M_1 \leqslant M_X\leqslant K, $$ which yields $$ H=N\cap G\leqslant M\cap G\leqslant K\cap G =H $$ as claimed. Suppose $W$ is a $\pi'$-group. Then Lemma \ref{pimhom} implies $$ \overline{K}\cap\overline{G}=\overline{H}\leqslant \overline{N}\cap\overline{G}\leqslant \overline{M}\cap \overline{G}\leqslant \overline{M_X}\cap\overline{G}\leqslant\overline{K}\cap\overline{G}, $$ which yields $\overline{H}=\overline{M}\cap\overline{G}$. Another application of Lemma \ref{pimhom} gives $\overline{M}\cap\overline{G}=\overline{M\cap G}$, hence $HW=(M\cap G)W$. Therefore, $|H|=|M\cap G|$. Since $H=N\cap G\leqslant M\cap G$, we have $H=M\cap G$ as claimed. The minimality of $G^*$ now gives $X=GW$ and $X/W\cong L$. Therefore, $C_W(G)=C_W(X)\leqslant C_{G^*}(X)=1$ and $W=VC_W(G)=V$. Consequently, $X=GV=G$ and $G\trianglelefteqslant G^*$. \end{proof} \section{Examples} In this section, we construct an infinite series of examples where the image of a $\pi$-submaximal subgroup under an epimorphism $\phi$ whose kernel is an abelian $\pi'$-group is not $\pi$-submaximal in $\operatorname{Im}\phi$. Conversely, we give examples where a $\pi$-submaximal subgroup of $\operatorname{Im}\phi$ is the image of no $\pi$-submaximal subgroup in the domain of $\phi$.
In these examples, $\pi=\{2,3\}$. The simple group $L=\operatorname{PSL}_2(7)$ has presentation \begin{equation}\label{pres} L=\langle a,b \mid a^2=b^3=(ab)^7=[a,b]^4=1 \rangle, \end{equation} see \cite{atl}. Let $F$ be a finite field of characteristic coprime to $|L|=2^3\cdot 3\cdot 7$ in which the polynomial $x^2+x+2$ has distinct roots, say, $\alpha$ and $\beta$. For example, such is every field $\mathbb{F}_p$ for an odd prime $p\equiv 1,2,4 \pmod{7}$. It can be readily seen that the matrices $$ A=\left( \begin{array}{ccc} 1& \alpha & \beta \\ 0 & -1 & 0 \\ 0 & 0 & -1 \\ \end{array} \right),\qquad B=\left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \\ \end{array} \right) $$ satisfy the defining relations in (\ref{pres}), so that the map $a\mapsto A$, $b\mapsto B$ determines an absolutely irreducible faithful representation $\mathcal{X}:L\to \operatorname{GL}_3(F)$ whose $F$-character $\chi$ is shown in Table \ref{bch}. \begin{table}[htb] \centering \caption{The $F$-character of $\mathcal{X}$}\label{bch} \begin{tabular}{c|cccccc} & $1a$ & $2a$ & $3a$ & $4a$ & $7a$ & $7b$ \\ [2pt] \hline $\chi^{\vphantom{A^{A^A}}}$ & $3$ & $-1$ & $0$ & $1$ & $\alpha$ & $\beta$ \end{tabular} \end{table} We identify $L$ with the subgroup $\operatorname{Inn}(L)$ of $\operatorname{Aut}(L)\cong \operatorname{PGL}_2(7)$. It is known that $\operatorname{Aut}(L)$ is the extension $L\langle \delta \rangle$, where $\delta$ is a diagonal automorphism of $L$ of order~$2$. Observe that the conjugate representation $\mathcal{X}^\delta$ is not equivalent to $\mathcal{X}$, because $\delta$ permutes the two conjugacy classes of elements of order $7$ in $L$ while $\chi$ has distinct values on these classes. Let $V$ be the $3$-dimensional $FL$-module corresponding to $\mathcal{X}$ and let $G$ be the natural semidirect product of $V$ and $L$. Note that $Z(G)=1$ and so we may identify $G$ with $\operatorname{Inn}(G)$ inside $\operatorname{Aut}(G)$.
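The relations in (\ref{pres}) can also be checked numerically. The following Python sketch (an illustration added here, not part of the paper) verifies them for the concrete choice $F=\mathbb{F}_{11}$: since $11\equiv 4\pmod 7$, the polynomial $x^2+x+2$ splits mod $11$ with the distinct roots $\alpha=6$ and $\beta=4$.

```python
# Sanity check (not from the paper): verify the defining relations
# a^2 = b^3 = (ab)^7 = [a,b]^4 = 1 for the matrices A, B over F_11.
P = 11
ALPHA, BETA = 6, 4   # 6^2+6+2 = 44 and 4^2+4+2 = 22 are both 0 mod 11

def mul(X, Y):
    """Product of 3x3 matrices over F_P."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) % P
             for j in range(3)] for i in range(3)]

def power(X, n):
    R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    for _ in range(n):
        R = mul(R, X)
    return R

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A = [[1, ALPHA, BETA], [0, P - 1, 0], [0, 0, P - 1]]   # -1 = P - 1 mod P
B = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]

# [a,b] = a^{-1} b^{-1} a b, where a^{-1} = a and b^{-1} = b^2 here
comm = mul(mul(A, power(B, 2)), mul(A, B))

assert power(A, 2) == I
assert power(B, 3) == I
assert power(mul(A, B), 7) == I
assert power(comm, 4) == I
```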
\begin{lem}\label{autg} $\operatorname{Aut}(G)/V \cong C\times L$, where $C\cong F^\times$. \end{lem} \begin{proof} Observe that $V$ is characteristic in $G$ and so every automorphism of $G$ leaves~$V$ invariant. Hence, we may apply the general theory of automorphisms of group extensions. By \cite[(4.4)]{82Rob}, there is an exact sequence of groups $$ 0\to Z^1(L,V) \to \operatorname{Aut}(G) \to N_{\operatorname{GL}_3(F)}(\operatorname{Im}\mathcal X) \to H^2(L,V). $$ Since the characteristic of $F$ is coprime to $|L|$, we have $H^2(L,V)=H^1(L,V)=0$. Also, $H^1(L,V)=Z^1(L,V)/B^1(L,V)$ and $B^1(L,V)\cong V/C_V(L)$. Therefore, we have $Z^1(L,V)=B^1(L,V)\cong V$ due to $C_V(L)=0$. As we observed above, $\mathcal{X}^\delta$ is not equivalent to $\mathcal{X}$. Also, $Z(L)=1$, since $L$ is simple. Hence, $N_{\operatorname{GL}_3(F)}(\operatorname{Im}\mathcal X)\cong C\times L$ by Lemma \ref{norim}. The claim follows from these remarks. \end{proof} Let $\pi=\{2,3\}$. As was observed in \cite[Example 2, p. 170]{86Suz}, the $2$-Sylow subgroups of $L$ are not $\pi$-maximal, because $L$ includes subgroups of order $2^3\cdot3$; however, they are $\pi$-submaximal, because they are intersections of $L$ with the $2$-Sylow subgroups of $\operatorname{Aut}(L)$, which are maximal. \begin{lem}\label{s2n} In the above notation, the $2$-Sylow subgroups of $G$ are not $\pi$-sub\-ma\-xi\-mal. \end{lem} \begin{proof} Let $S$ be a $2$-Sylow subgroup of $G$ and suppose to the contrary that $S$ is $\pi$-submaximal. Let $G^*$ be a finite group of minimal order such that $G\trianglelefteqslant \trianglelefteqslant G^*$ and there is a $\pi$-maximal subgroup $K$ of $G^*$ satisfying $S=G\cap K$. Proposition \ref{one} implies that $G\trianglelefteqslant G^*$ and $C_{G^*}(G)=1$, i.\,e. $G^*\leqslant \operatorname{Aut}(G)$. Let $\overline{\phantom{m}}: \operatorname{Aut}(G)\to \operatorname{Aut}(G)/V$ be the canonical epimorphism.
Since $V$ is a $\pi'$-group, we have $\overline{S}=\overline{G}\cap \overline{K}$ by Lemma~\ref{three}, $\overline{S}$ is a $2$-Sylow subgroup of $\overline{G}\cong L$, and $\overline{K}$ is $\pi$-maximal in $\overline{G^*}$ by Lemma~\ref{pimhom}. By Lemma \ref{autg}, we have $\overline{G^*}\cong C_0 \times L$, where $C_0$ is a subgroup of $F^\times$. In particular, $\overline{K}$ is the direct product of a $\pi$-maximal subgroup of $C_0$ and a $\pi$-maximal subgroup of $\overline{G}\cong L$. Thus, $\overline{S}$ is $\pi$-maximal in $L$ contrary to the observation above. This completes the proof. \end{proof} The following example shows that the homomorphic image of a $\pi$-submaximal subgroup is not necessarily $\pi$-submaximal. {\bf Example 1.} Let $V^*$ denote the $FL$-module contragredient to $V$. Since $\delta$ interchanges $V$ and $V^*$, it naturally acts on $V\oplus V^*$. Hence, the semidirect product $H=(V\oplus V^*)L$ can be extended to $H^*=H\langle\delta\rangle$. As above, let $\pi=\{2,3\}$. The $2$-Sylow subgroup of $H^*$ is $\pi$-maximal, because its homomorphic image in $L\langle\delta\rangle$ is a $2$-Sylow subgroup, which is maximal. Thus, the $2$-Sylow subgroup $S$ of $H$ is $\pi$-submaximal. Let $\overline{\phantom{m}}: H\to H/V^*$ be the canonical epimorphism. The image $\overline{H}$ is isomorphic to the semidirect product $G=VL$, and Lemma \ref{s2n} implies that $\overline{S}$ is not $\pi$-submaximal. A $\pi$-submaximal subgroup of a homomorphic image need not be the homomorphic image of a $\pi$-submaximal subgroup, as the next example shows. {\bf Example 2.} Consider the canonical epimorphism $\overline{\phantom{m}}: G\to G/V\cong L$. We noted above that a $2$-Sylow subgroup $S$ of $L$ is $\pi$-submaximal. However, there is no $\pi$-submaximal subgroup $T$ of $G$ such that $\overline{T}=S$, because $T$ would clearly need to be $2$-Sylow in $G$, but the $2$-Sylow subgroups of $G$ are not $\pi$-submaximal by Lemma~\ref{s2n}.
In connection with the study of $\pi$-submaximal subgroups in minimal nonsolvable groups begun in \cite{18GuoRev}, the paper \cite{20RevZav} gives an example of a minimal nonsolvable group $G$ such that the Frattini subgroup $\Phi(G)$ is a $\pi$-group, where $\pi=\{2,3\}$, and a $\pi$-submaximal subgroup of the minimal simple group $G/\Phi(G)\cong \operatorname{PSL}_2(7)$ is not the homomorphic image of any $\pi$-submaximal subgroup of $G$. Furthermore, it was shown in \cite{18GuoRev} that, for every minimal nonsolvable group $G$, the image of a $\pi$-submaximal subgroup in $G/\Phi(G)$ is always $\pi$-submaximal in the minimal simple group $G/\Phi(G)$. In the same paper, the $\pi$-submaximal subgroups in minimal simple groups were classified. It would be interesting to see if there exists a minimal nonsolvable group $G$ such that $\Phi(G)$ is a $\pi'$-group and $G/\Phi(G)$ has a $\pi$-submaximal subgroup that is not the image of any $\pi$-submaximal subgroup of $G$. {\em Acknowledgment.} This work was funded by RFBR and BRFBR, project \textnumero\ 20-51-00007 and by the Program of Fundamental Scientific Research of the SB RAS \textnumero\ I.1.1., project \textnumero\ 0314-2016-0001. \end{document}
Evaluating the necessity of bone augmentation for distal radius fracture fixed with a volar locking plate: a retrospective study Feng-Shuo Chang1, Chih-Hui Chen1,2, Cheng-Hung Lee1,3, Kun-Tsan Lee1,4 & Yi-Cheng Cho1 Multiple approaches for fixation of distal radius fractures exist; nonetheless, there is no consensus on the optimal treatment for these injuries. Although using volar locking plates has become increasingly common as a surgical intervention, the usefulness of bone augmentation remains debatable. Therefore, this study aimed to evaluate the necessity of bone augmentation for distal radius fractures fixed with a volar locking plate. This retrospective study enrolled patients with a single distal radius fracture treated with a volar locking plate between January 2014 and December 2016. Overall, 105 fractures were included and divided into two groups (non-bone augmentation: group 1, n = 88; bone augmentation: group 2, n = 17). Images were reviewed, and dorsal cortex collapse, volar tilting, and radial height and inclination were measured immediately after surgery and at the 6-month follow-up. Both groups exhibited significant differences in dorsal collapse (p < 0.001 and p = 0.001, respectively) and radial height shortening (p < 0.001 and p = 0.039, respectively); volar tilting and radial inclination did not differ significantly. There was no difference in the degree of dorsal collapse (p = 0.715) and radial height shortening (p = 0.651) between the two groups. Of the 105 fractures, 54 were identified as comminuted type according to the AO classification (A3, C2, and C3), and similar radiographic outcomes were noted. Volar locking plates for the treatment of distal radius fractures with or without bone augmentation do not affect the radiographic outcomes. In comminuted fractures, additional bone augmentation is unnecessary if intraoperative anatomical reduction and fixation are performed when possible. 
Distal radius fracture is one of the most common bone fractures [1]. A previous study indicated that the average cost of care for a distal radius fracture patient was up to 8 thousand dollars in the United States during 2009–2015 [2]. Many treatment approaches, including nonsurgical (closed reduction and casting) and surgical (percutaneous fixation, external fixation, and open reduction and internal fixation) techniques for distal radius fractures, have been established, and maximal restoration of anatomical alignment and early motion of the wrist are desirable [3,4,5]. The volar locking plate became commercially available in 2000. Its design allows better fixation of osteoporotic and comminuted bone fractures that were previously difficult to manage. A proper volar approach can help achieve intraoperative anatomical reduction, while using a low-profile locking plate can reduce compression and irritation of the median nerve and flexor tendons. Due to these advantages, the volar locking plate is increasingly used for surgical fixation and has proven effective [6,7,8,9]. In terms of surgical treatment, bone augmentation has been performed using a bone graft or bone substitute to fill metaphyseal bone defects caused by fracture, as it is believed to prevent fracture collapse and promote bone healing, especially in osteoporotic and comminuted bone fractures. However, to the best of our knowledge, there is currently no strong evidence supporting the efficacy of bone augmentation or the indications for its use. Furthermore, bone augmentation is also associated with risks or complications, such as inflammation or improper healing [10,11,12]. For osteoporotic and comminuted bone fractures, the use of a volar locking plate for fixation in combination with bone augmentation seems to provide better fixation, in terms of biomechanics, and seems to be a reasonable treatment [13]. 
However, the 2009 American Academy of Orthopaedic Surgeons (AAOS) guidelines could not recommend for or against the use of supplemental bone grafts or substitutes when using locking plates [4]. Currently, surgical treatment and volar locking plate fixation have become prominent, providing particularly good fixation for osteoporotic and comminuted bone fractures [6,7,8,9]. However, whether combining it with bone augmentation is necessary to provide better support for fractures remains questionable; our review of the literature revealed that research in this regard is lacking [4, 14]. We hypothesized that surgical fixation of distal radius fractures using a volar locking plate with intraoperative bone augmentation could effectively prevent metaphyseal collapse. The purpose of this study was to verify this hypothesis and observe the effect of using a volar locking plate for internal fixation with and without bone augmentation on postoperative radiographic outcomes by comparing X-ray images immediately and at 6 months after surgery. Patients and ethical considerations In this retrospective cohort study, we examined the X-ray images of patients undergoing surgery for distal radius fracture at the Department of Orthopedics of Taichung Veterans General Hospital between January 1, 2014, and December 31, 2016. Patients were included if (1) they were aged > 18 years, (2) they had experienced a distal radius fracture, (3) they had undergone open surgery with internal fixation, (4) their surgery included the use of the DePuy Variable Angle LCP Two-Column Volar Distal Radius Plate 2.4 mm® (DePuy Synthes, Oberdorf, Switzerland), (5) they underwent surgery within 2 weeks of fracture, and (6) they were followed up postoperatively for at least 6 months. 
Patients were excluded from the study if: (1) they underwent re-fracture or revision surgery, (2) they had an associated ipsilateral ulnar shaft fracture, (3) their surgery was performed without using the plate mentioned above, (4) no locking screw was present on the main combi-hole of the plate, (5) they were followed up for less than 6 months postoperatively, and (6) their treatment included the use of an additional external fixation instrument. A total of 391 X-ray images of operated fractures were reviewed, and 105 fractures met the inclusion criteria. We recorded background characteristics (age, sex, and ethnicity), fracture type (AO classification) [15], use of bone augmentation, associated injuries, radiographic outcomes, and complications (including infections, tendinopathies, neuropathies, and pseudoarthrosis). Using the postsurgical X-ray images obtained immediately and at 6 months after surgery, the following radiographic parameters were measured: radial height (Fig. 1a), radial inclination (Fig. 1b), volar tilting (Fig. 1c), and the status of dorsal collapse (Fig. 1d); the changes in each parameter were compared. The status of dorsal collapse was measured as follows: X was defined as the distance from the tip of the dorsal cortex to the 2nd distal locking screw; then, the change in X from immediately after surgery to 6 months after surgery was measured using the following equation: $$ \Delta \mathrm{X}={\mathrm{X}}_{6\mathrm{mo}}-{\mathrm{X}}_{\mathrm{postop}}. $$ Radiographic measurements. a Radial height measurement. R is the distance from the distal 1st locking hole to the distal 2nd locking hole in the AP view, which should be fixed in each radiograph, so that the radial height/R ratio after correction is reported to minimize statistical error. b Radial inclination measurement is as shown in the figure. c Volar tilt measurement is as shown in the figure. d The status of dorsal collapse. 
X is the distance from the tip of the dorsal cortex to the 2nd distal locking screw in the lateral view. Y is the distance from the distal 1st to the distal 2nd locking screw in the lateral view. Y is a fixed value in each image, so the ratio X/Y is reported to minimize statistical error. Due to measurement errors in X-ray images caused by imaging angle or magnification, we made corrections during statistical analysis. Y was defined as the distance from the distal 1st to the distal 2nd locking screw. Y should be fixed between the images immediately and at 6 months after surgery. During each acquisition, X increases or decreases in proportion to Y. Therefore, statistical analysis using the ratio X/Y can eliminate magnification error. Furthermore, X and Y change when the angle changes during acquisition. When Y becomes Y cos θ, X also becomes X cos θ. The ratio X cos θ/Y cos θ thus cancels the angular factor and achieves a corrective effect (Fig. 2). Therefore, in the statistical analysis, the status of dorsal collapse is shown as (Fig. 1d): $$ {\mathrm{X}}_{6\mathrm{mo}}/{\mathrm{Y}}_{6\mathrm{mo}}-{\mathrm{X}}_{\mathrm{postop}}/{\mathrm{Y}}_{\mathrm{postop}} $$ Correction for X-ray imaging angle. Because differences in the angle, θ, during each acquisition may cause errors, statistics are expressed as the ratio X cos θ/Y cos θ (= X/Y) for correction. Cos, cosine. In addition, the distance from the most distal end of the plate to the distal 2nd locking screw and the distance from the distal 1st to the distal 2nd screw of the DePuy Variable Angle LCP Two-Column Volar Distal Radius Plate 2.4 mm® steel plate used during the operation are the same regardless of the number of combi-holes; thus, even plates of different lengths can be compared with each other (Fig. 3). Comparison of plates of different lengths. 
The distance from the most distal end of the plate to the distal 2nd locking screw (a) is the same as the distance from the distal 1st to the distal 2nd screw (b), regardless of the length of the plate. Similarly, we made corrections for the radial height statistics. R was defined as the distance from the distal 1st locking hole to the distal 2nd locking hole on the anteroposterior (AP) view. The change in radial height (RH) from immediately after surgery to the 6-month follow-up is shown by (Fig. 1a): $$ {\mathrm{R}\mathrm{H}}_{6\mathrm{mo}}/{\mathrm{R}}_{6\mathrm{mo}}-{\mathrm{R}\mathrm{H}}_{\mathrm{postop}}/{\mathrm{R}}_{\mathrm{postop}} $$ By evaluating the postsurgical X-ray images (immediate and after 6 months), we determined the status of fracture healing based on the radius union scoring system (RUSS); a RUSS score ≥ 6 represents healing [16]. Patients were admitted to the outpatient or emergency department and received a complete examination before surgery. Surgery was performed by experienced attending surgeons working in the Department of Orthopedics of our hospital. Under general anesthesia, a tourniquet was inflated, and a modified Henry approach was used for entry. After correct reduction, we performed fixation using the DePuy Variable Angle LCP Two-Column Volar Distal Radius Plate 2.4 mm. The decision regarding bone augmentation was based on the judgment of each surgeon and conditions at the time of the operation. A bone substitute ("Purzer®" SinboneHT Bone Replacement; Taoyuan City, Taiwan, ROC) was used for all bone augmentation procedures by inserting it into the metaphyseal bone defect through the same volar incision, with no more than 5 mL per wrist. Finally, a C-arm examination was used to confirm the position of the plate and the condition of the fracture reduction and to ensure that supination and pronation were feasible. After the pronator quadratus muscle was repaired as much as possible, the wounds were sequentially closed. 
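As an illustration, the ratio-based corrections described above (Fig. 1, Fig. 2) can be sketched in a few lines of code; the function names and the distances used in the example are hypothetical and not part of the study's analysis pipeline:

```python
# Hypothetical sketch of the ratio-based measurement corrections described above.
# All measurements come from the same radiograph, so magnification and
# imaging-angle factors cancel in the X/Y and RH/R ratios.

def dorsal_collapse_change(x_postop, y_postop, x_6mo, y_6mo):
    """Corrected change in dorsal collapse: X_6mo/Y_6mo - X_postop/Y_postop."""
    return x_6mo / y_6mo - x_postop / y_postop

def radial_height_change(rh_postop, r_postop, rh_6mo, r_6mo):
    """Corrected change in radial height: RH_6mo/R_6mo - RH_postop/R_postop."""
    return rh_6mo / r_6mo - rh_postop / r_postop

# Made-up distances (mm): X grows while the fixed inter-screw distance Y
# stays constant, so the corrected ratio reports a positive dorsal collapse.
print(round(dorsal_collapse_change(10.0, 8.0, 11.0, 8.0), 4))  # 0.125
print(round(radial_height_change(12.0, 8.0, 11.6, 8.0), 4))    # -0.05
```

Note that if both X and Y are rescaled by the same magnification factor between acquisitions, the ratio is unchanged, which is exactly the corrective effect the study relies on.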
The postoperative rehabilitation protocol was started based on an early active rehabilitation program [17], and a gentle active range of motion was started on postoperative days 3–5. Sutures were removed on postoperative days 10–14, and the wrist/forearm passive range of motion and strengthening exercises were started. Isotonic strengthening was started 4 weeks after surgery, and heavy putty strengthening was started at 6 weeks after surgery. Patients were evaluated at 3–5 days, 2 weeks, 1 month, 2 months, 3 months, and 6 months after surgery. Postoperative X-ray images were acquired immediately after surgery in the recovery room and at 1-month, 3-month, and 6-month follow-ups. To confirm the comparability of the two study arms, the Mann-Whitney U test was used for continuous variables (age) and the chi-squared statistic was used for nominal variables (sex, injured side, and fracture type). The Mann-Whitney U test was also used to determine differences in radiographic parameters between the non-augmentation and augmentation groups immediately after surgery and at the 6-month follow-up. Furthermore, the Wilcoxon signed-rank test was used to determine within-group differences in radiographic parameters between immediately after surgery and 6 months postoperatively. A value of p < 0.05 was considered statistically significant. All statistical analyses were performed using SPSS Statistics version 24 (SPSS Inc., Chicago, Illinois, USA). Table 1 summarizes the characteristics of patients in each group. Of the 105 patients, 60 were women (57.1%) and 45 were men (42.9%). The average age was 50.8 years (range: 19–83 years). Of the affected limbs, 46 were the right hand (43.8%) and 59 were the left hand (56.2%). 
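For readers unfamiliar with the rank-based between-group test used in the statistical analysis above, the following is a minimal, self-contained sketch of the Mann-Whitney U statistic (in practice one would use a statistics package, as this study did with SPSS); the function name and the toy data are illustrative only:

```python
def mann_whitney_u(a, b):
    """U statistic for sample `a` versus sample `b`, using midranks for ties.

    U = R1 - n1*(n1 + 1)/2, where R1 is the sum of the ranks of `a`
    within the pooled, sorted sample.
    """
    pooled = sorted((value, index) for index, value in enumerate(list(a) + list(b)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        # Find the block of tied values and give each the average (mid) rank.
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = midrank
        i = j + 1
    r1 = sum(ranks[: len(a)])
    return r1 - len(a) * (len(a) + 1) / 2

# Completely separated samples give the extreme values 0 and n1*n2.
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0
print(mann_whitney_u([4, 5, 6], [1, 2, 3]))  # 9.0
```

The statistic alone does not yield a p-value; a normal approximation or exact tables are then applied, which statistical software handles internally.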
According to the AO classification [15], 21 cases were classified as AO-type 23-A2, 30 as type A3, 1 as type B1, 2 as type B2, 6 as type B3, 21 as type C1, 16 as type C2, and 8 as type C3, of which 54 (51.4%) were considered comminuted types (A3, C2, C3). Table 1 Patient characteristics of the groups In all patients, internal fixation was performed using the DePuy Variable Angle LCP Two-Column Volar Distal Radius Plate 2.4 mm®. In 88 cases (83.8%), only the volar locking plate was used; these patients were designated as group 1 (GP1). In the other 17 cases (16.2%), a plate was used with bone augmentation; these patients were designated as group 2 (GP2). We compared the differences in background characteristics between GP1 and GP2 and found that the age (p = 0.011), proportion of women (p = 0.043), and frequency of comminuted fractures (p < 0.001) were significantly higher in GP2; there was no significant difference in the side of injury between the two groups (Table 2). There were no significant differences in radiographic outcomes between the two groups both immediately after surgery and at the 6-month follow-up (Table 3). Table 2 Comparison of characteristics among patients with comminuted fractures Table 3 Differences in radiographic parameters immediately after surgery and at 6-month follow-up From the comparison of the postsurgical X-ray images taken immediately after surgery with those of the 6-month follow-up, we observed a significant change in dorsal collapse (∆X/Y) and radial height shortening in both groups, whereas no significant change in volar tilting or radial inclination was observed (Table 3, Fig. 4). However, there was no difference in the degree of dorsal collapse and radial height shortening between the two groups at 6 months postoperatively (Table 4). After 6 months, nearly all patients achieved union (103 patients, 98.1%). 
Only two patients (one in each group) had not met the criteria for union, but both their RUSS scores were 5 points. Six-month postoperative findings in the non-augmentation (GP1) and augmentation groups (GP2). Significant dorsal collapse and radial height shortening were noted at the 6-month follow-up in both the non-augmentation and augmentation groups Table 4 Changes in radiographic parameters in the non-augmentation and augmentation groups The majority of the patients (94 patients, 89.5%) did not have associated complications. Complications were noted in nine patients in GP1 and two patients in GP2. In GP1, six patients presented symptoms of carpal tunnel syndrome (three of them required implant removal after bone union), another two patients had acute surgical site infection (both resolved after oral antibiotic treatment), and one patient had De Quervain syndrome, which resolved after local injection treatment. In GP2, an extraosseous bone substitute was present in ten patients (58.8%); two of them presented symptoms of carpal tunnel syndrome, and both required implant removal after bone union. The extraosseous material had disappeared in six of the ten patients by 6 months. The overall complication rate did not differ significantly between the two groups (p > 0.99) (Table 1). Furthermore, of the 105 patients, 54 were diagnosed with comminuted fracture types (AO-type 23-A3, C2, C3). These patients were also divided into a non-bone augmentation group (GP1) and a bone augmentation group (GP2) with 38 and 16 patients, respectively. There were no significant differences in background characteristics, sex, or side of injury between the two groups (Table 2). There were no significant differences in radiographic outcomes between the two groups immediately after surgery and at 6-month follow-up (Table 3). 
The comparison of X-ray images immediately after surgery and at the 6-month follow-up revealed a statistically significant change in dorsal collapse in both groups and a trend towards significance in radial height shortening (GP2); however, no significant change in volar tilting or radial inclination was detected (Table 3, Fig. 5). Furthermore, there was no significant difference in the degree of dorsal collapse and radial height shortening between groups (Table 4). Of the 54 patients, only one failed to meet the criteria for union; this patient had a C2 fracture and had received bone augmentation. Six-month postoperative findings in patients with comminuted-type fractures. Significant dorsal collapse and a trend toward significance in radial height shortening were noted in both the non-augmentation and augmentation groups According to our comparisons of the postoperative X-ray images obtained immediately and at the 6-month follow-up, we found no significant change in volar tilting or radial inclination when using the volar locking plate for fixation, regardless of whether bone augmentation was performed. Most patients achieved bone union within 6 months, and most of them (94/105 patients, 89.5%) did not have associated complications. Furthermore, a certain degree of dorsal collapse and radial height shortening was present, regardless of whether bone augmentation was performed; moreover, the degree of change in these parameters was not significantly improved by bone augmentation. In addition, we performed a statistical analysis of the subgroup of patients with comminuted fractures and reached similar conclusions. These data suggest that when the volar locking plate is used for fixation of distal radius fractures, combining it with intraoperative bone augmentation is not effective in preventing metaphyseal collapse. 
Although the complication rate was not significantly higher in the bone augmentation group in our study, the additional cost and surgical time make the technique less efficient. Instead, the fixation effect provided by the volar locking plate itself is adequate. Since its development in 2000, the volar locking plate has become prominent for internal fixation. The volar approach is considered to have greater physiological advantages, as it can reduce irritation and damage to the extensor tendons and better preserve the dorsal metaphyseal blood supply. In addition, anatomical reduction is achieved more easily because the volar side is usually less comminuted than the dorsal side. Furthermore, the physical characteristics of the locking compression plate design allow it to provide better fixation effects for osteoporotic bones and comminuted fractures. Its low-profile design can reduce irritation and damage to flexor tendons and the median nerve [6,7,8,9, 18, 19]. Although there are numerous advantages to the volar locking plate, complications can still arise. If possible, the plate should not be placed over the watershed line to avoid irritation and damage to the flexor tendons and the median nerve. Distal locking screws in an incorrect position or of an inappropriate length may also cause intra-articular screw penetration or extensor tendon irritation or rupture on the dorsal side [20, 21]. In fracture fixation, in addition to using the volar locking plate for internal fixation, the use of bone augmentation to fill metaphyseal bone defects has also been advocated, especially in osteoporotic and comminuted bone fractures, and it is believed that it can help prevent fracture collapse and promote bone healing. However, few studies have demonstrated the indication, timing of use, and actual benefits of using bone augmentation [10,11,12,13]. Hegde et al. 
[10] used K-pin fixation combined with synthetic hydroxyapatite augmentation to treat 31 elderly patients with unstable distal radius fractures. After an average of 16 weeks of follow-up, they found no metaphyseal defects or collapse and satisfactory clinical outcomes among these patients. In a cadaveric study, Högel et al. [13] used volar locking plates for fixation of AO-type 23-A3.3 fractures and divided the specimens into two groups: cement augmentation and non-augmentation groups. The results showed that cement augmentation significantly increases the cycles and load until failure and provides better biomechanical properties. However, a prospective randomized study [22] of 50 patients (24 patients underwent intraoperative combination with AlloMatrix bone graft and 26 did not) with unstable distal radius fractures treated with K-wire fixation found no difference in union rate and reduction parameters between the two groups after 1 year. In a prospective study by Garcés-Zarzalejo et al. [14], 60 patients with unstable distal radius fractures were treated with the volar locking plate. All operations were performed without bone augmentation, and all fractures achieved union without a significant loss of reduction after 1 year of follow-up. As mentioned before, there is still controversy in the 2009 AAOS guidelines regarding the use of supplemental bone grafts or substitutes when using locking plates for the treatment of distal radius fractures. The decision depends on the experience of the surgeon and the preference of the patient [4]. In our study, we found that the volar tilting and radial inclination angles did not significantly change from the time of surgery to the 6-month follow-up, regardless of whether bone augmentation was performed. From this, we inferred that the distal locking screws on the locking plate have functions similar to those of rafting screws, which support the subchondral bone, thereby maintaining volar tilting and radial inclination. 
Although the other parameters were slightly worse, this result is still generally consistent with the results of other studies. That is, there is little effect on clinical outcomes, and most patients can achieve bone healing and satisfactory functional outcomes [14, 23,24,25]. Additionally, we attempted to correct the length (radial height, dorsal collapse) measurements in the anteroposterior and lateral views of the images in our study to minimize possible errors caused by each X-ray image acquisition, including those arising because of changes in the angle and magnification. However, it was not possible to correct the error in the angle (radial inclination and volar tilt) measurements. As far as we know, few studies have corrected the measurements for this aspect of the images. Our study may be the first to attempt this correction during measurement. Our study has some limitations. First, it was a retrospective study with a non-randomized design. Regarding measurements, we only made corrections to the length; we did not make corrections to the angle. Because the position and angle of wrist placement may cause errors during X-ray image acquisition, there is no absolute fixed angle that can be used as a parameter for correction. In addition, some errors may exist in the measurement of the images. Furthermore, the functional scores of the patients were not recorded in this study to judge clinical outcomes; only the radiographic outcomes were evaluated, and they could not be directly linked to clinical results. Although previous studies have reported that patients undergoing volar locking plate fixation for a distal radius fracture have some degree of deterioration of radiographic parameters on postoperative follow-up, these patients still had good clinical outcomes [14, 22,23,24]. Among our included patients, the number of patients in the bone augmentation group was relatively small, which may also bias our statistical results. 
However, this is because of the retrospective nature of the study. Based on the current literature, there is still a lack of research that demonstrates the absolute benefits of using bone augmentation. Currently, bone augmentation is performed based on the personal preferences of the surgeon and the intraoperative conditions. Future prospective studies should be randomized and include a larger sample size and patients from multiple centers. The use of the volar locking plate is an effective and reliable treatment method for distal radius fractures. Even for comminuted fractures, no additional bone augmentation is needed if an anatomical reduction is achieved as much as possible and fixation is sufficient. Furthermore, nearly all patients in this study achieved union within 6 months of surgery with no loss of reduction. All data generated or analyzed during this study are available from the first author and corresponding author. LCP: Locking compression plate RH: Radial height AAOS: American Academy of Orthopaedic Surgeons RUSS: Radius union scoring system Nellans KW, Kowalski E, Chung KC. The epidemiology of distal radius fractures. Hand Clin. 2012;28:113–25. Huetteman HE, Zhong L, Chung KC. Cost of surgical treatment for distal radius fractures and the implications of episode-based bundled payments. J Hand Surg. 2018;43:720–30. Koval K, Haidukewych GJ, Zirgibel BJ. Controversies in the management of distal radius fractures. J Am Acad Orthop Surg. 2014;22:566–75. Lichtman DM, Bindra RR, Boyer MI, Putnam MD, Ring D, Slutsky DJ, et al. AAOS clinical practice guideline summary: treatment of distal radius fractures. J Am Acad Orthop Surg. 2010;18:180–9. Chung KC, Shauver MJ, Yin H, Kim HM, Baser O, Birkmeyer JD. Variations in the use of internal fixation for distal radial fracture in the United States Medicare population. J Bone Joint Surg Am. 2011;93:2154–62. Orbay JL, Fernandez DL. Volar fixed-angle plate fixation for unstable distal radius fractures in the elderly patient. J Hand Surg [Am]. 2004;29:96–102. 
Drobetz H, Bryant AL, Pokorny T, Spitaler R, Leixnering M, Jupiter JB. Volar fixed-angle plating of distal radius extension fractures: influence of plate position on secondary loss of reduction--a biomechanic study in a cadaveric model. J Hand Surg. 2006;31:615–22. Osada D, Kamei S, Masuzaki K, Takai M, Kameda M, Tamai K. Prospective study of distal radius fractures treated with a volar locking plate system. J Hand Surg. 2008;33:691–700. Fok MW, Klausmeyer MA, Fernandez DL, Orbay JL, Bergada AL. Volar plate fixation of intra-articular distal radius fractures: a retrospective study. J Wrist Surg. 2013;2:247–54. Hedge C, Shetty V, Wasnik S, Ahammed I, Shetty V. Use of bone graft substitute in the treatment for distal radius fractures in elderly. Eur J Orthop Surg Traumatol. 2013;23:651–6. Cassidy C, Jupiter JB, Cohen M, Delli-Santi M, Fennell C, Leinberry C, et al. Norian SRS cement compared with conventional fixation in distal radial fractures: a randomized study. J Bone Joint Surg Am. 2003;85:2127–37. Handoll HHG, Watts AC. Bone grafts and bone substitutes for treating distal radial fractures in adults. Cochrane Database Syst Rev. 2008(2):CD006836. https://doi.org/10.1002/14651858.CD006836.pub2. Hogel F, Mair S, Eberle S, Weninger P, von Oldenburg G, Augat P. Distal radius fracture fixation with volar locking plates and additional bone augmentation in osteoporotic bone: a biomechanical study in a cadaveric model. Arch Orthop Trauma Surg. 2013;133:51–7. Garcés-Zarzalejo C, Sánchez-Crespo MR, Peñas-Díaz F, Ayala-Gutierrez H, Gimenez-Rico JS, Alfonso-Fernandez A, et al. Distal radius fractures: Should we use supplemental bone grafts or substitutes in cases of severe osteoporotic or comminution? (Article in English, Spanish). Rev Esp Cir Ortop Traumatol. 2015;59:97–103. Marsh JL, Slongo TF, Agel J, Broderick JS, Creevey W, DeCoster TA, et al. 
\begin{document} \title{Estimation of Gaussian quantum states} \author{Dominik \v{S}afr\'{a}nek} \email{[email protected]} \affiliation{SCIPP and Department of Physics, University of California, Santa Cruz, CA 95064, USA} \date{\today} \begin{abstract} We derive several expressions for the quantum Fisher information matrix (QFIM) for the multi-parameter estimation of multi-mode Gaussian quantum states, the corresponding symmetric logarithmic derivatives, and conditions for saturability of the quantum Cram\'{e}r-Rao bound. This bound determines the ultimate precision with which parameters encoded into quantum states can be estimated. We include expressions for mixed states, for the case when the Williamson decomposition of the covariance matrix is known, expressions in terms of infinite series, and expressions for pure states. We also discuss problematic behavior when some modes are pure, and present a method that allows the use of expressions that are defined only for mixed states to compute the QFIM for states with any number of pure modes. \end{abstract} \maketitle With the growing need for precise and cost-effective detectors, quantum metrology paves the way in which new-era quantum sensors should be designed. This theory gives the ultimate bound on the precision with which parameters of a physical system can be measured by a quantum probe. This bound, called the quantum Cram\'er-Rao bound~\cite{Paris2009a}, might not be achievable in general, either because of practical limitations, or not even in principle~\cite{szczykulska2016multi}. But it still gives an estimate of whether a certain design of a new quantum detector is feasible, and whether it can yield a better result than the current technology~\cite{Giovannetti2004a,Giovannetti2006a,zwierz2010general,Giovannetti2011a,Demkowicz2012a}. To determine this bound, it is necessary to compute the quantum Fisher information matrix~(QFIM), denoted $H({\boldsymbol{\epsilon}})$. 
Introduced by Holevo~\cite{Holevo2011a}, Helstrom~\cite{helstrom1967minimum,helstrom1976quantum}, and Bures~\cite{Bures1969a}, and later popularized by Braunstein and Caves~\cite{BraunsteinCaves1994a}, the QFIM describes the limits of distinguishability of infinitesimally close quantum states $\hat{\rho}_{{\boldsymbol{\epsilon}}}$ and $\hat{\rho}_{{\boldsymbol{\epsilon}}+{\mathrm{d}\!\!\;\boldsymbol{\epsilon}}}$ that differ only by a small variation in the parameters that parametrize them. Generally, larger elements of the QFIM predict better distinguishability, and therefore a better precision in estimating the vector of parameters ${\boldsymbol{\epsilon}}$. This quantity has found a wide range of applications, such as optical interferometry used in the detection of gravitational waves~\cite{abbott2016observation} and lithography~\cite{boto2000quantum}, thermometry~\cite{Monras2010a,correa2015individual,hofer2017quantum,spedalieri2018thermal}, phase estimation~\cite{Ballester2004a,Monras2006a,Aspachs2008a,demkowicz2009quantum,chin2012quantum,humphreys2013quantum,Nusran2014a,Sparaciari2015a,pezze2017optimal}, estimation of space-time parameters~\cite{nation2009analogue,weinfurtner2011measurement,aspachs2010optimal,kish2017quantum,fink2017experimental}, magnetic fields~\cite{Wasilewski2010a,cai2013chemical,zhang2014fitting,nair2016far}, squeezing parameters~\cite{Milburn1994a,chiribella2006optimal,Gaiba2009a,benatti2011entanglement,safranek2016optimal}, time~\cite{zhang2013criterion,komar2014quantum}, and frequency~\cite{frowis2014optimal,boss2017quantum}. The QFIM has also been used in the description of criticality and quantum phase transitions under the name of `fidelity susceptibility', where it helps to describe a sudden change of a quantum state when an external parameter such as a magnetic field is varied~\cite{paraoanu1998bures,zanardi2007bures,venuti2007quantum,gu2010fidelity,banchi2014quantum,wu2016geometric,marzolino2017fisher}. 
Moreover, the QFIM measures speed limits on the evolution of quantum states~\cite{margolus1998maximum,taddei2013quantum,del2013quantum,pires2016generalized,deffner2017quantum}, speed limits of quantum computation~\cite{lloyd2000ultimate}, and speed limits in the charging of batteries~\cite{binder2015quantacell}; it quantifies coherence and entanglement~\cite{hauke2016measuring,girolami2017information,liu2017quantum}, and it provides bounds on irreversibility in open quantum systems~\cite{mancino2018geometrical}. Many recent experimental setups use Gaussian quantum states, because these states are easily created and manipulated. They have been used, for example, in the aforementioned detection of gravitational waves~\cite{aasi2013enhanced} and in Bose-Einstein condensates~\cite{dunningham1998quantum,gross2011atomic,wade2016manipulation}. Naturally, one often wants to calculate the QFIM for Gaussian states. That can be a daunting task, because these states appear very complicated in the density matrix formalism. Consequently, there has been a lot of effort to make these calculations easier by utilizing the phase-space formalism, which allows one to describe any Gaussian state elegantly by just its first and second moments. Numerous expressions have been derived. 
For single-parameter estimation, there are expressions for the QFI of a pure state~\cite{Pinel2012a}, a nearly pure state~\cite{Friis2015a}, a single-mode state~\cite{Pinel2013b}, a two-mode state~\cite{Safranek2015b}, a multi-mode state~\cite{Monras2013a}, a multi-mode state in terms of Williamson's decomposition of the covariance matrix~\cite{Safranek2015b}, and a multi-mode state in terms of infinite series (limit formula)~\cite{Safranek2015b}; for multi-parameter estimation, there are expressions for the QFIM of a single-mode state~\cite{Pinel2013b}, special cases of two-mode states~\cite{marian2016quantum}, a multi-mode state in terms of tensors~\cite{Gao2014a}, a multi-mode state in terms of inverses of super-operators~\cite{banchi2015quantum}, and a multi-mode state, together with a condition for saturability of the quantum Cram\'er-Rao bound, in terms of Williamson's decomposition of the covariance matrix~\cite{nichols2018multiparameter}, which differs in form from the single-parameter result of~\cite{Safranek2015b} and from the results published here. In this paper, we complete the story by deriving several missing expressions for the QFIM for the multi-parameter estimation of multi-mode Gaussian states, expressions for symmetric logarithmic derivatives (SLDs) that determine the optimal measurement, and expressions that determine if the quantum Cram\'er-Rao bound can be saturated. We also discuss and resolve problematic behavior when some of the modes are pure. We devise a regularization procedure, which allows us to use expressions for the QFIM that are valid only for mixed Gaussian states to calculate the QFIM for Gaussian states with any number of pure modes. This paper is structured as follows. 
We present expressions for the QFIM, the SLDs, and the expression determining saturability of the Cram\'er-Rao bound, successively for the cases when 1) all the modes of the Gaussian quantum state are mixed, 2) Williamson's decomposition of the covariance matrix is known, 3) the state is mixed and numerical solutions are enough (limit formulas), and 4) the Gaussian state is pure. In between, at appropriate places, we discuss the critical behavior of these expressions when some of the modes are pure, the associated discontinuous behavior, and the aforementioned regularization procedure. Finally, we illustrate the use of derived formulas on several examples. \section{Notation and preliminaries} Lower indices will denote different matrices, while upper indices will denote elements of a matrix. \emph{Bar} as in $\ov{\sigma}$ will denote the complex conjugate, upper index $T$ as in $\sigma^T$ will denote transpose, and $\dag$ as in $\sigma^\dag$ will denote conjugate transpose. $\partial_i\equiv\partial_{\epsilon_i}$ denotes partial derivative with respect to $i$'th element of the vector of estimated parameters ${\boldsymbol{\epsilon}}=(\epsilon_1,\epsilon_2,\dots)$, $\otimes$ denotes the Kronecker product, $[\cdot,\cdot]$ denotes the commutator, $\{\cdot,\cdot\}$ denotes the anti-commutator, $\mathrm{tr}[\cdot]$ denotes trace of a matrix, and $\vectorization{\cdot}$ denotes vectorization of a matrix, which is defined as a column vector constructed from columns of a matrix as \[\label{eq:vectorizationdef} A=\begin{pmatrix} a & b \\ c & d \\ \end{pmatrix}, \quad \vectorization{A}=\begin{pmatrix} a \\ c \\ b \\ d \\ \end{pmatrix}. \end{equation} We consider a Bosonic system with a set of annihilation and creation operators $\{\hat{a}_n,\hat{a}_n^\dag\}$. We collect them into a vector of operators ${\boldsymbol{\hat{A}}}:=(\hat{a}_1,\dots,\hat{a}_N,\hat{a}_1^\dag,\dots,\hat{a}_N^\dag)^T$, where $N$ denotes the number of modes. 
Now we can write the commutation relations in an elegant form, $[{\boldsymbol{\hat{A}}}^{m},{\boldsymbol{\hat{A}}}^{n\dag}]=K^{mn}\mathrm{id}$, where $\mathrm{id}$ denotes the identity operator and \[\label{def:K} K= \begin{bmatrix} I & 0 \\ 0 & -I \end{bmatrix} \end{equation} is a constant matrix called \emph{the symplectic form}, and $I$ denotes the identity matrix. In a Bosonic system\footnote{Although Gaussian states are also defined for Fermionic systems, see e.g. Ref.~\cite{carollo2018symmetric}.} one can define a special class of continuous-variable states called \emph{Gaussian states}~\cite{Weedbrook2012a}, $\hat{\rho}$, which are fully characterized by their first moments $\boldsymbol{d}^m=\mathrm{tr}\big[\hat{\rho}{\boldsymbol{\hat{A}}}^m\big]$ (\emph{the displacement vector}) and the second moments $\sigma^{mn}=\mathrm{tr}\big[\hat{\rho}\,\{{\Delta\boldsymbol{\hat{A}}}^m,{\Delta\boldsymbol{\hat{A}}}^{n\dag}\}\big]$ (\emph{the covariance matrix}), where ${\Delta\boldsymbol{\hat{A}}}:={\boldsymbol{\hat{A}}}-\boldsymbol{d}$. In this form, $\sigma^\dag=\sigma$, and the moments have the following structure, \[\label{def:first_and_second_moments} \boldsymbol{d}= \begin{bmatrix} \boldsymbol{\boldsymbol{\gamma}} \\ \overline{\boldsymbol{\boldsymbol{\gamma}}} \end{bmatrix},\quad \sigma\,=\,\begin{bmatrix} X & Y \\ \overline{Y} & \overline{X} \end{bmatrix}. \end{equation} These definitions are known as the complex form, and we will use this convention throughout this paper. This description is equivalent to the real form used by some authors~\cite{braunstein2005quantum,Weedbrook2012a,Adesso2014a}. We show how to switch between these descriptions in Appendix~\ref{app:mixed_state}. Both the real and the complex form phase-space representations of common Gaussian unitaries and Gaussian states are shown in Appendix~\ref{sec:The_phase_space_Gaussian_unitaries}. 
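As a concrete illustration of these complex-form conventions, the following minimal numpy sketch (our own illustration, not code from the paper) builds $(\boldsymbol{d},\sigma)$ for a single-mode displaced thermal state, for which the standard moments are $\boldsymbol{\gamma}=\alpha$, $X=2\bar{n}+1$, and $Y=0$, and which satisfies the defining structure above:

```python
import numpy as np

K = np.diag([1.0, -1.0])   # complex-form symplectic form for one mode

def displaced_thermal_state(alpha, nbar):
    """Complex-form moments (d, sigma) of a displaced thermal state,
    with d = (gamma, conj(gamma)) and sigma = [[X, Y], [conj(Y), conj(X)]].
    Here X = <{Delta a, Delta a^dag}> = 2*nbar + 1 and
    Y = <{Delta a, Delta a}> = 0 for a thermal state."""
    d = np.array([alpha, np.conj(alpha)], dtype=complex)
    X = 2 * nbar + 1
    sigma = np.array([[X, 0.0], [0.0, X]], dtype=complex)
    return d, sigma

d, sigma = displaced_thermal_state(1 + 2j, nbar=0.5)
```

One can check numerically that $\sigma^\dag=\sigma$ and that $\boldsymbol{d}$ has the $(\boldsymbol{\gamma},\overline{\boldsymbol{\gamma}})$ block structure required by the complex form.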
For more information on the real and the complex form see, for example,~\cite{Arvind1995a,Safranek2015b,safranek2016optimal,vsafranek2016gaussian}. The QFIM is defined as~\cite{Paris2009a} \[\label{eq:qfi} H^{ij}\equiv\tfrac{1}{2}\mathrm{tr}[\hat{\rho}\{\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j\}], \end{equation} where the \emph{symmetric logarithmic derivatives} (SLDs) $\hat{\mathscr{L}}_i$ are defined as operator solutions to the equations \[\label{eq:sld} \tfrac{1}{2}\{\hat{\rho},\hat{\mathscr{L}}_i\}=\partial_i\hat{\rho}. \end{equation} The quantum Cram\'er-Rao bound gives a lower bound on the covariance matrix of estimators, \[ \mathrm{Cov}(\hat{{\boldsymbol{\epsilon}}})\geq H^{-1}, \end{equation} meaning that the matrix $\mathrm{Cov}(\hat{{\boldsymbol{\epsilon}}})- H^{-1}$ is positive semi-definite. This bound can be saturated when~\cite{szczykulska2016multi,ragy2016compatibility} \[\label{eq:condition_for_CRbound} \mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j]]=0, \end{equation} and the optimal measurement basis is given by the eigenvectors of the SLDs. \section{Results} \textit{\textbf{Mixed states}} In tasks where we know the covariance matrix and displacement vector of a Gaussian state, it is possible to use the results derived in~\cite{Gao2014a} to calculate the QFIM and the symmetric logarithmic derivatives. 
These results can be expressed in an elegant matrix form: for a Gaussian state $(\boldsymbol{d},\sigma)$, the QFIM and symmetric logarithmic derivatives can be calculated as \begin{align} H^{ij}({\boldsymbol{\epsilon}})&=\frac{1}{2}\vectorization{\partial_i\sigma}^\dag\mathfrak{M}^{-1}\vectorization{\partial_j\sigma}+2\partial_i\boldsymbol{d}^\dag\sigma^{-1}\partial_j\boldsymbol{d},\label{eq:mixed_QFI}\\ \hat{\mathscr{L}}_i({\boldsymbol{\epsilon}})&={\Delta\boldsymbol{\hat{A}}}^\dag{\mathscr{A}}_i{\Delta\boldsymbol{\hat{A}}}-\frac{1}{2}\mathrm{tr}[\sigma{\mathscr{A}}_i]+2{\Delta\boldsymbol{\hat{A}}}^\dag\sigma^{-1}\partial_i{\boldsymbol{d}},\label{eq:SLD_mixed} \end{align} where \[ \mathfrak{M}=\ov{\sigma}\otimes\sigma-K\otimes K,\quad \vectorization{{\mathscr{A}}_i}=\mathfrak{M}^{-1}\vectorization{\partial_i\sigma}. \end{equation} These results represent the Gaussian version of the expressions published in Ref.~\cite{vsafranek2018simple}. The above formulas require differentiating the displacement vector and the covariance matrix, and inverting two matrices. However, they cannot be used when at least one of the modes is in a pure state (at least not without modification), because $\mathfrak{M}$ is not invertible in that case. We will discuss this issue later, and show how this can be resolved (see Eq.~\eqref{eq:any_state_QFIM}). A proof of how the results of~\cite{Gao2014a} transform into this elegant matrix form can be found in Appendix~\ref{app:mixed_state}. Further, we derive an expression that determines the saturability of the quantum Cram\'er-Rao bound, \[\label{eq:condition_QCR} \begin{split} \mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j]]&\!=\!\vectorization{\partial_i\sigma}^\dag\mathfrak{M}^{-1}(\ov{\sigma}\!\otimes\! K\!-\!K\!\otimes\!\sigma)\mathfrak{M}^{-1}\vectorization{\partial_j\sigma}\\ &+4\partial_i\boldsymbol{d}^\dag\sigma^{-1}K\sigma^{-1}\partial_j\boldsymbol{d}. 
\end{split} \end{equation} The derivation can be found in Appendix~\ref{app:saturability}. \textit{\textbf{When Williamson's decomposition of the covariance matrix is known}} There are situations when we know Williamson's decomposition of the covariance matrix. This is the case, for example, when we have control over the preparation of the initial state, and we are planning to estimate parameters encoded into this state by a Gaussian unitary channel. In such cases it is useful to simplify our calculations of the QFIM by making use of this decomposition, without actually having to compute it. In other cases, it might still be numerically more efficient to calculate this decomposition instead of inverting the large matrix $\mathfrak{M}$ needed for Eq.~\eqref{eq:mixed_QFI}. According to Williamson's theorem~\cite{Williamson1936a,deGosson2006a,simon1999congruences} any positive-definite matrix can be diagonalized by symplectic matrices, $\sigma=SDS^\dag$. We show how to do that explicitly in Appendix~\ref{app:diagonalization}. $D=\mathrm{diag}(\lambda_1,\dots,\lambda_N,\lambda_1,\dots,\lambda_N)$ is a diagonal matrix consisting of \emph{symplectic eigenvalues}, which are defined as the positive eigenvalues of the matrix $K\sigma$. It follows from the Heisenberg uncertainty relations that for all $k$, $\lambda_k\geq 1$. $S$ is a \emph{symplectic matrix} satisfying the defining property of the complex form of the real symplectic group $Sp(2N,\mathbb{R})$, \[\label{def:structure_of_S} S= \begin{bmatrix} \alpha & \beta \\ \ov{\beta} & \ov{\alpha} \end{bmatrix},\ \ SKS^\dag=K. \end{equation} We define matrices $P_i:=S^{-1}\partial_i{S}$, which are elements of the Lie algebra associated with the symplectic group, satisfying the defining properties of this algebra, \[\label{def:P_1} P_i= \begin{bmatrix} R_i & Q_i \\ \ov{Q}_i & \ov{R}_i \end{bmatrix},\ \ P_iK+KP_i^\dag=0. 
\end{equation} Common symplectic matrices in the complex form, representing for example a squeezing operation, phase-change, or a beam-splitter, can be found in Appendix~\ref{sec:The_phase_space_Gaussian_unitaries} or in more detail in Ref.~\cite{safranek2016optimal} and Section II of Ref.~\cite{vsafranek2016gaussian}. Rewriting Eq.~\eqref{eq:mixed_QFI} in terms of Williamson's decomposition of the covariance matrix, switching to element-wise notation, and simplifying using identities~\eqref{def:structure_of_S} and~\eqref{def:P_1}, we derive an analytical expression for the quantum Fisher information matrix of Gaussian states in terms of Williamson's decomposition of the covariance matrix, \begin{align} H^{ij}({\boldsymbol{\epsilon}})&\!\!=\!\!\!\!\sum_{k,l=1}^{N}\!\!\frac{(\lambda_k\!-\!\lambda_l)^2}{\lambda_k\lambda_l\!-\!1}\mathrm{Re}[\ov{R_{i}}^{kl}\!R_{j}^{kl}]+\frac{(\lambda_k\!+\!\lambda_l)^2}{\lambda_k\lambda_l\!+\!1}\mathrm{Re}[\ov{Q_{i}}^{kl}\!Q_{j}^{kl}]\nonumber\\ &\!+\sum_{k=1}^{N}\frac{\partial_i\lambda_k\partial_j\lambda_k}{\lambda_k^2-1}+2\partial_i\boldsymbol{d}^\dag\sigma^{-1}\partial_j\boldsymbol{d}.\label{eq:multimode_QFI} \end{align} $\mathrm{Re}$ denotes the real part, $R_i=\alpha^\dag\partial_i{\alpha}-\overline{\beta^\dag\partial_i{\beta}}$ is a skew-Hermitian and $Q_i=\alpha^\dag\partial_i{\beta}-\overline{\beta^\dag\partial_i{\alpha}}$ a (complex) symmetric matrix. We note that $\sigma^{-1}=KSD^{-1}S^\dag K$, which follows from Eq.~\eqref{def:structure_of_S} and properties of $K$. The above formula represents a multi-parameter generalization of the single-parameter result published in Ref.~\cite{Safranek2015b}. The above formula can be used even for states with some pure modes, defined by $\lambda_k=1$ for some mode $k$. We will show how shortly. 
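As a numerical cross-check (a minimal numpy sketch on a single-mode example of our own choosing, not code from the paper), Eq.~\eqref{eq:mixed_QFI} can be evaluated for thermal-photon-number estimation and compared with the symplectic-eigenvalue term of Eq.~\eqref{eq:multimode_QFI}; for a thermal state with $\lambda=2\bar{n}+1$ both reduce to $4/(\lambda^2-1)=1/\big(\bar{n}(\bar{n}+1)\big)$:

```python
import numpy as np

def vec(A):
    # column-stacking vectorization, Eq. (eq:vectorizationdef)
    return A.flatten(order='F')

def qfi_mixed(dd, sigma, dsigma, K):
    """Single-parameter QFI via Eq. (eq:mixed_QFI):
    H = (1/2) vec(ds)^dag M^{-1} vec(ds) + 2 dd^dag sigma^{-1} dd,
    with M = conj(sigma) (x) sigma - K (x) K (all modes mixed)."""
    M = np.kron(np.conj(sigma), sigma) - np.kron(K, K)
    v = vec(dsigma)
    term1 = 0.5 * np.conj(v) @ np.linalg.solve(M, v)
    term2 = 2 * np.conj(dd) @ np.linalg.solve(sigma, dd)
    return float(np.real(term1 + term2))

# single thermal mode, estimated parameter = mean photon number nbar
nbar = 1.0
lam = 2 * nbar + 1                      # symplectic eigenvalue
K = np.diag([1.0, -1.0])
sigma = lam * np.eye(2, dtype=complex)
dsigma = 2 * np.eye(2, dtype=complex)   # d(sigma)/d(nbar)
dd = np.zeros(2, dtype=complex)         # displacement does not depend on nbar

H_mixed = qfi_mixed(dd, sigma, dsigma, K)
# symplectic-eigenvalue term of Eq. (eq:multimode_QFI): (d lambda)^2/(lambda^2-1)
H_williamson = 4 / (lam**2 - 1)
```

Here $R_i$ and $Q_i$ vanish, so only the $\partial\lambda$ term of Eq.~\eqref{eq:multimode_QFI} survives, and both routes agree.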
When none of the modes are pure, i.e., all symplectic eigenvalues are larger than one, we can rewrite Eq.~\eqref{eq:multimode_QFI} in a very elegant way. Defining the Hermitian matrix $\widetilde{R}_i^{kl}:=\frac{\lambda_k-\lambda_l}{\sqrt{\lambda_k\lambda_l-1}}R_i^{kl}$, the symmetric matrix $\widetilde{Q}_i^{kl}:=\frac{\lambda_k+\lambda_l}{\sqrt{\lambda_k\lambda_l+1}}Q_i^{kl}$, and the diagonal matrix $L:=\mathrm{diag}(\lambda_1,\dots,\lambda_N)$, the QFIM can be written as \[\label{eq:exact_multimode_compact} \begin{split} H^{ij}({\boldsymbol{\epsilon}})&=\frac{1}{2}\mathrm{tr}\big[\widetilde{R}_i\widetilde{R}_j^\dag+\widetilde{R}_j\widetilde{R}_i^\dag+\widetilde{Q}_i\widetilde{Q}_j^\dag+\widetilde{Q}_j\widetilde{Q}_i^\dag\big]\\ &+\mathrm{tr}\big[(L^2-I)^{-1}\partial_iL\partial_jL\big]+2\partial_i\boldsymbol{d}^\dag\sigma^{-1}\partial_j\boldsymbol{d}. \end{split} \end{equation} Looking at the derivatives in each term, we can conclude that the QFIM consists of three qualitatively different terms: the first term is connected to the change of orientation and squeezing of the Gaussian state with small variations in ${\boldsymbol{\epsilon}}$, the second to the change of purity, and the third to the change of displacement. Similarly, we derive an expression for the symmetric logarithmic derivative, \[\label{eq:SLD_mixed_williamson} \hat{\mathscr{L}}_i({\boldsymbol{\epsilon}})={\Delta\boldsymbol{\hat{A}}}^\dag \!(S^{-1})^\dag W_i S^{-1}\!{\Delta\boldsymbol{\hat{A}}}-\sum_{k=1}^N\frac{\lambda_k\partial_i\lambda_k}{\lambda_k^2-1}+2{\Delta\boldsymbol{\hat{A}}}^\dag\!\sigma^{-1}\partial_i{\boldsymbol{d}}, \end{equation} where $W_i$ is a Hermitian matrix of the form \[ \begin{split} W_i&=\begin{bmatrix} W_{Xi} & W_{Yi} \\ \overline{W_{Yi}} & \overline{W_{Xi}} \end{bmatrix},\\ W_{Xi}^{kl} &=-\frac{\lambda_k-\lambda_l}{\lambda_k\lambda_l-1}R_i^{kl}+\frac{\partial_i\lambda_k}{\lambda_k^2-1}\delta^{kl},\\ W_{Yi}^{kl}&=\frac{\lambda_k+\lambda_l}{\lambda_k\lambda_l+1}Q_i^{kl}. 
\end{split} \end{equation} Further, we derive an expression that determines the saturability of the quantum Cram\'er-Rao bound, \[\label{eq:multimode_condition} \begin{split} &\mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j]]=4\partial_i\boldsymbol{d}^\dag\sigma^{-1}K\sigma^{-1}\partial_j\boldsymbol{d}\\ &+\!\!\sum_{k,l=1}^{N}\!\!\frac{2i(\lambda_k\!+\!\lambda_l)^3}{(\lambda_k\lambda_l\!+\!1)^2}\mathrm{Im}[\ov{Q_{i}}^{kl}\!Q_{j}^{kl}]-\frac{2i(\lambda_k\!-\!\lambda_l)^3}{(\lambda_k\lambda_l\!-\!1)^2}\mathrm{Im}[\ov{R_{i}}^{kl}\!R_{j}^{kl}].\\ \end{split} \end{equation} The derivation can be found in Appendix~\ref{app:williamson_QFI}. Alternative but equivalent forms of the above expressions are also published in Ref.~\cite{nichols2018multiparameter}. \textit{\textbf{When some of the modes are pure}} Eqs.~(\ref{eq:mixed_QFI},\ref{eq:SLD_mixed},\ref{eq:condition_QCR},\ref{eq:multimode_QFI},\ref{eq:SLD_mixed_williamson},\ref{eq:multimode_condition}) are not well defined for states that have at least one mode in a pure state, i.e., $\lambda_k=1$ for some mode $k$, which also results in the matrix $\mathfrak{M}$ not being invertible. It has been shown~\cite{vsafranek2017discontinuities}, and we explain it in detail in Appendix~\ref{app:pops}, that there are two unique ways of defining the QFIM at these problematic points, depending on the quantity we want to obtain. First, we will illustrate this with Eq.~\eqref{eq:multimode_QFI}. To obtain the QFIM, which is defined~\cite{Paris2009a} through the symmetric logarithmic derivatives $\hat{\mathscr{L}}_i$ as $H^{ij}\equiv\tfrac{1}{2}\mathrm{tr}[\hat{\rho}\{\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j\}]$, we define the problematic terms as \[\label{def:problematic_points} \frac{(\lambda_k\!-\!\lambda_l)^2}{\lambda_k\lambda_l\!-\!1}=0,\quad \frac{\partial_i\lambda_k\partial_j\lambda_k}{\lambda_k^2-1}=0, \end{equation} for ${\boldsymbol{\epsilon}}$ such that $\lambda_k({\boldsymbol{\epsilon}})=\lambda_l({\boldsymbol{\epsilon}})=1$. 
The continuous quantum Fisher information matrix (cQFIM) is defined~\cite{vsafranek2017discontinuities} as four times the Bures metric, $H_c^{ij}\equiv 4g^{ij}$, where the Bures metric is defined by $\sum_{i,j}g^{ij}({\boldsymbol{\epsilon}})\mathrm{d}\epsilon_i\mathrm{d}\epsilon_j\equiv 2\big(1-\sqrt{\mathcal{F}(\hat{\rho}_{\boldsymbol{\epsilon}},\hat{\rho}_{{\boldsymbol{\epsilon}}+{\boldsymbol{\mathrm{d}\epsilon}}})}\big)$, and ${\mathcal{F}({\hat{\rho}}_{1},{\hat{\rho}}_{2})=\big(\mathrm{tr}\sqrt{\sqrt{{\hat{\rho}}_{1}}\,{\hat{\rho}}_{2}\,\sqrt{{\hat{\rho}}_{1}}}\big)^{2}}$ denotes Uhlmann's fidelity~\cite{Uhlmann1976a}. To obtain the cQFIM, we define \[\label{def:problematic_points2} \frac{(\lambda_k\!-\!\lambda_l)^2}{\lambda_k\lambda_l\!-\!1}=0,\quad \frac{\partial_i\lambda_k\partial_j\lambda_k}{\lambda_k^2-1}=\partial_i\partial_j\lambda_k, \end{equation} for ${\boldsymbol{\epsilon}}$ such that $\lambda_k({\boldsymbol{\epsilon}})=\lambda_l({\boldsymbol{\epsilon}})=1$. The QFIM and the cQFIM are identical everywhere apart from those problematic points~\cite{vsafranek2017discontinuities}. At those points, the QFIM can be discontinuous, while the cQFIM is in some sense continuous~\cite{vsafranek2017discontinuities}. Defining the Hessian matrix $\mathcal{H}_k^{ij}:=\partial_i\partial_j\lambda_k$, we can use the above equations to write the relation \[\label{eq:QFIM_cQFIM_relation} H_c({\boldsymbol{\epsilon}})=H({\boldsymbol{\epsilon}})+\!\!\!\!\!\!\sum_{k:\lambda_k({\boldsymbol{\epsilon}})=1}\!\!\!\!\!\!\mathcal{H}_k({\boldsymbol{\epsilon}}). \end{equation} By writing $k:\lambda_k({\boldsymbol{\epsilon}})=1$ we mean that the sum goes only over values of $k$ for which $\lambda_k({\boldsymbol{\epsilon}})=1$. For any $k$ such that $\lambda_k({\boldsymbol{\epsilon}})=1$, $\mathcal{H}_k({\boldsymbol{\epsilon}})$ is positive semi-definite, and we can therefore write $H_c\geq H$. $H_c= H$ if and only if for all $k$ such that $\lambda_k=1$, $\mathcal{H}_k=0$. 
Similarly, in Eqs.~\eqref{eq:SLD_mixed_williamson} and~\eqref{eq:multimode_condition} we define $\frac{\lambda_k\!-\!\lambda_l}{\lambda_k\lambda_l\!-\!1}=\frac{(\lambda_k\!-\!\lambda_l)^3}{(\lambda_k\lambda_l\!-\!1)^2}=\frac{\partial_i\lambda_k}{\lambda_k^2-1}:=0$ for ${\boldsymbol{\epsilon}}$ such that $\lambda_k({\boldsymbol{\epsilon}})=\lambda_l({\boldsymbol{\epsilon}})=1$. \textit{\textbf{Regularization procedure}} Now we will show how to treat cases when some modes are pure in general, not limiting ourselves to the already resolved case of Eq.~\eqref{eq:multimode_QFI}. We can devise a regularization procedure that will allow us to use expressions that work only for states where all the modes are mixed, such as Eq.~\eqref{eq:mixed_QFI}, to compute the QFIM for any state. A similar method has already been used for regularizing the QFIM for non-Gaussian states~\cite{vsafranek2017discontinuities}. It goes as follows. First we multiply the covariance matrix by a regularization parameter $\nu>1$, and use some expression, such as Eq.~\eqref{eq:mixed_QFI}, to calculate the QFIM for the state $(\boldsymbol{d},\nu\sigma)$. Then we perform the limit $\nu\rightarrow 1$. The resulting value will represent the correct QFIM for the state $(\boldsymbol{d},\sigma)$. To prove that, however, we have to check that this limit leads to the proper definition of the problematic points, as given by Eq.~\eqref{def:problematic_points}. We take Eq.~\eqref{eq:multimode_QFI} as a study case, but because this formula is general, the result will be valid for any other expression for the QFIM. When the covariance matrix $\sigma$ has symplectic eigenvalues $\lambda_k$, the covariance matrix $\nu\sigma$ has symplectic eigenvalues $\nu\lambda_k$. The symplectic matrices from the decompositions of $\sigma$ and $\nu\sigma$ are identical. The parameter $\nu$ therefore appears only as a modification of the symplectic eigenvalues, which we will take advantage of. 
Assuming $\lambda_k({\boldsymbol{\epsilon}})=\lambda_l({\boldsymbol{\epsilon}})=1$, both problematic terms are set to zero by taking the limit, $\lim_{\nu\rightarrow1}\frac{(\nu\lambda_k-\nu\lambda_l)^2}{\nu\lambda_k\nu\lambda_l-1}=\lim_{\nu\rightarrow1}\frac{0}{\nu^2-1}=0$, $\lim_{\nu\rightarrow1}\frac{(\nu\partial_i{\lambda}_k)^2}{(\nu\lambda_k)^2-1}=\lim_{\nu\rightarrow1}\frac{0}{\nu^2-1}=0$, which is exactly the definition, Eq.~\eqref{def:problematic_points}, that we wanted. Here $\partial_i{\lambda}_k({\boldsymbol{\epsilon}})=0$, because ${\lambda}_k({\boldsymbol{\epsilon}})$ achieves a local minimum at the point ${\boldsymbol{\epsilon}}$ when $\lambda_k({\boldsymbol{\epsilon}})=1$. This method therefore leads to the correct value of the QFIM, and we can write the expression for the QFIM of any Gaussian quantum state as \[\label{eq:regularization_procedure} H({\boldsymbol{\epsilon}})\equiv H(\boldsymbol{d}({\boldsymbol{\epsilon}}),\sigma({\boldsymbol{\epsilon}}))=\lim_{\nu\rightarrow1}H\big(\boldsymbol{d}({\boldsymbol{\epsilon}}),\nu\sigma({\boldsymbol{\epsilon}})\big). \end{equation} Applying this result to Eq.~\eqref{eq:mixed_QFI}, the QFIM for any state can be computed as \[\label{eq:any_state_QFIM} \begin{split} H^{ij}({\boldsymbol{\epsilon}})&=\lim_{\nu\rightarrow1} \frac{1}{2}\vectorization{\partial_i\sigma}^\dag(\nu^2\ov{\sigma}\otimes\sigma-K\otimes K)^{-1}\vectorization{\partial_j\sigma}\\ &+2\partial_i\boldsymbol{d}^\dag\sigma^{-1}\partial_j\boldsymbol{d}. \end{split} \end{equation} An expression for the cQFIM can be obtained by combining Eq.~\eqref{eq:regularization_procedure} with Eq.~\eqref{eq:QFIM_cQFIM_relation}. 
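The regularization can also be checked numerically. The following minimal numpy sketch (our own illustration, not code from the paper) evaluates Eq.~\eqref{eq:any_state_QFIM} at a fixed $\nu$ slightly above $1$ for a single-mode squeezed vacuum, a pure state for which $\mathfrak{M}$ itself is singular; Eq.~\eqref{eq:multimode_QFI}, with $Q_r=-1$ and $R_r=0$, gives the value $2$ for this example, independent of $r$:

```python
import numpy as np

def vec(A):
    return A.flatten(order='F')

def qfi_regularized(sigma, dsigma, K, nu=1 + 1e-6):
    """Single-parameter QFI of a (possibly pure) Gaussian state with d = 0,
    via the regularization of Eq. (eq:any_state_QFIM): evaluate the
    mixed-state formula on (d, nu*sigma) for nu slightly above 1."""
    s, ds = nu * sigma, nu * dsigma
    M = np.kron(np.conj(s), s) - np.kron(K, K)
    v = vec(ds)
    return float(np.real(0.5 * np.conj(v) @ np.linalg.solve(M, v)))

# single-mode squeezed vacuum, estimating the squeezing parameter r
r = 0.3
ch, sh = np.cosh(2 * r), np.sinh(2 * r)
sigma = np.array([[ch, -sh], [-sh, ch]], dtype=complex)      # pure: lambda = 1
dsigma = 2 * np.array([[sh, -ch], [-ch, sh]], dtype=complex)  # d(sigma)/dr
K = np.diag([1.0, -1.0])

H = qfi_regularized(sigma, dsigma, K)
```

At $\nu=1$ the matrix $\mathfrak{M}$ is rank-deficient, so `np.linalg.solve` would fail on the unregularized problem; with $\nu-1\sim10^{-6}$ the result already matches the limit to far better than $10^{-3}$.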
Similarly, we have \[\label{eq:SLD_regularization} \hat{\mathscr{L}}_i({\boldsymbol{\epsilon}})=\lim_{\nu\rightarrow1}{\Delta\boldsymbol{\hat{A}}}^\dag{\mathscr{A}}_{i\nu}{\Delta\boldsymbol{\hat{A}}}-\frac{1}{2}\mathrm{tr}[\sigma{\mathscr{A}}_{i\nu}]+2{\Delta\boldsymbol{\hat{A}}}^\dag\sigma^{-1}\partial_i{\boldsymbol{d}}, \end{equation} and \[\label{eq:condition_QCRregularization_procedure2} \begin{split} &\mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j]]=4\partial_i\boldsymbol{d}^\dag\sigma^{-1}K\sigma^{-1}\partial_j\boldsymbol{d}\\ &+\lim_{\nu\rightarrow1}\vectorization{\partial_i\sigma}^\dag\mathfrak{M}_{\nu}^{-1}(\ov{\sigma}\!\otimes\! K\!-\!K\!\otimes\!\sigma)\mathfrak{M}_{\nu}^{-1}\vectorization{\partial_j\sigma}, \end{split} \end{equation} where $\vectorization{{\mathscr{A}}_{i\nu}}=\mathfrak{M}_{\nu}^{-1}\vectorization{\partial_i\sigma}$ and $\mathfrak{M}_{\nu}=\nu^2\ov{\sigma}\otimes\sigma-K\otimes K$. \textit{\textbf{Limit formula}} We presented exact analytical expressions for the QFIM; however, in some cases a numerical value that approximates the exact value to any desired precision is enough. Defining the matrix $A:=K\sigma$, and generalizing the procedure derived in Ref.~\cite{Safranek2015b} for single-parameter estimation, we derive the limit expression for the QFIM, \[\label{eq:limit_formula} H^{ij}({\boldsymbol{\epsilon}})=\frac{1}{2}\sum_{n=1}^\infty\mathrm{tr}\big[A^{-n}\partial_i{A}A^{-n}\partial_j{A}]+2\partial_i\boldsymbol{d}^\dag\sigma^{-1}\partial_j\boldsymbol{d}. \end{equation} This expression is proved in the next section, by showing its relation to Eq.~\eqref{eq:mixed_QFI}. Note that although the above infinite series converges even when some symplectic eigenvalues are equal to one, in such cases it is not absolutely convergent and it does not give the correct expression for the QFIM. 
This can be shown by careful analysis of the elements in the series given by Eq.~\eqref{eq:expression_in_terms_of_P}, and it was explained in detail in Ref.~\cite{Safranek2015b}. The correct expression for cases when some of the modes are pure can be obtained by combining the above equation with the regularization procedure, Eq.~\eqref{eq:regularization_procedure}. In applications, we would like to take just a few elements of the series and be confident that their sum approximates the QFIM well. To estimate the error in doing this, we define the remainder of the series as $R_M^{ij}:=\frac{1}{2}\sum_{n=M+1}^\infty\mathrm{tr}\big[A^{-n}\!\partial_i{A}A^{-n}\!\partial_j{A}]$. As shown in Appendix~\ref{app:Remainder}, this remainder is bounded, \[\label{eq:remainder} |R_M^{ij}|\leq\frac{\sqrt{\mathrm{tr}[(A\partial_iA)^2]}\sqrt{\mathrm{tr}[(A\partial_jA)^2]}}{2\lambda_{\mathrm{min}}^{2(M+1)}(\lambda_{\mathrm{min}}^2-1)}, \end{equation} where $\lambda_{\mathrm{min}}:=\min_{k}\{\lambda_k\}$ is the smallest symplectic eigenvalue of the covariance matrix $\sigma$. The right-hand side therefore represents the maximal error when calculating the QFIM by using the first $M$ elements of the series, Eq.~\eqref{eq:limit_formula}. We can derive similar limit expressions for the SLD and for $\mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j]]$ by using the expression \[ {\mathscr{A}}_i=\sum_{n=1}^\infty A^{-n}\partial_iA A^{-n}K, \end{equation} derived towards the end of Appendix~\ref{app:mixed_state}. \textit{\textbf{Relations between different expressions for QFIM}} Here we show how the limit expression, Eq.~\eqref{eq:limit_formula}, relates to Eqs.~\eqref{eq:mixed_QFI} and~\eqref{eq:multimode_QFI}. The relation between Eqs.~\eqref{eq:mixed_QFI} and~\eqref{eq:multimode_QFI} is shown in Appendix~\ref{app:williamson_QFI}. 
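A truncation of Eq.~\eqref{eq:limit_formula} together with the bound of Eq.~\eqref{eq:remainder} can be sketched as follows (a minimal numpy illustration of our own for the diagonal element $i=j$ with $\boldsymbol{d}$ independent of the parameter, not code from the paper):

```python
import numpy as np

def qfi_limit(sigma, dsigma, K, n_terms=30):
    """Truncation of the limit formula Eq. (eq:limit_formula) for the d = 0
    case: H ~ (1/2) sum_{n=1}^{n_terms} tr[A^{-n} dA A^{-n} dA], A = K sigma."""
    A, dA = K @ sigma, K @ dsigma
    Ainv = np.linalg.inv(A)
    B = np.eye(A.shape[0], dtype=complex)
    H = 0.0
    for _ in range(n_terms):
        B = B @ Ainv                      # B = A^{-n} at step n
        H += 0.5 * np.real(np.trace(B @ dA @ B @ dA))
    return float(H)

def remainder_bound(sigma, dsigma, K, n_terms):
    """Bound of Eq. (eq:remainder) on the truncation error (diagonal i = j,
    where the two square-root factors coincide)."""
    A, dA = K @ sigma, K @ dsigma
    lam_min = min(l.real for l in np.linalg.eigvals(A) if l.real > 0)
    t = float(np.real(np.trace(np.linalg.matrix_power(A @ dA, 2))))
    return t / (2 * lam_min**(2 * (n_terms + 1)) * (lam_min**2 - 1))
```

For a single thermal mode with $\lambda=3$ the full series sums to $4/(\lambda^2-1)=0.5$, and the error of a three-term truncation saturates the bound.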
To obtain Eq.~\eqref{eq:mixed_QFI}, we use $\sigma^\dag=\sigma$, properties of vectorization, $\mathrm{tr}[A^\dag B]=\vectorization{A}^\dag\vectorization{B}$, and properties of the Kronecker product, $(AB)\otimes(A'B')=(A\otimes A')(B\otimes B')$, $(C^T\otimes A)\vectorization{B}=\vectorization{ABC}$, to transform the infinite sum in Eq.~\eqref{eq:limit_formula} into a Neumann series that can be evaluated, \[ \begin{split} &\sum_{n=1}^\infty\mathrm{tr}\big[A^{-n}\partial_i{A}A^{-n}\partial_j{A}]\\ &=\vectorization{\partial_i\sigma}^\dag\bigg(\sum_{n=0}^\infty(\ov{A}\otimes A)^{-n}\bigg)\big(\ov{\sigma}\otimes\sigma\big)^{-1}\vectorization{\partial_j\sigma}\\ &=\vectorization{\partial_i\sigma}^\dag(I-\ov{A}^{-1}\otimes A^{-1})^{-1}\big(\ov{\sigma}\otimes\sigma\big)^{-1}\vectorization{\partial_j\sigma}\\ &=\vectorization{\partial_i\sigma}^\dag(\ov{\sigma}\otimes\sigma-K\otimes K)^{-1}\vectorization{\partial_j\sigma}, \end{split} \end{equation} which gives Eq.~\eqref{eq:mixed_QFI}. Using the identity \[\label{eq:expression_in_terms_of_P} \begin{split} &\mathrm{tr}[A^{-n}\partial_i{A}A^{-n}\partial_j{A}]=2\mathrm{tr}[D^{-n+1}\!K^{-n+1}\!P_{i}D^{-n+1}\!K^{-n+1}\!P_{j}]\\ &-\mathrm{tr}[D^{-n+2}\!K^n\!P_{i}D^{-n}\!K^n\!P_{j}] -\mathrm{tr}[D^{-n+2}\!K^n\!P_{j}D^{-n}\!K^n\!P_{i}]\\ &+\mathrm{tr}[D^{-n}\partial_i{D}D^{-n}\partial_j{D}] \end{split} \end{equation} and changing to element-wise notation, the infinite sum~\eqref{eq:limit_formula} turns out to be a geometric series in powers of the $\lambda_k$'s, which can be evaluated. Then, using $R_i^{kl}=-\ov{R}_i^{lk}$, $Q_i^{kl}=Q_i^{lk}$, which follow from Eq.~\eqref{def:P_1}, we prove that Eq.~\eqref{eq:limit_formula} simplifies to Eq.~\eqref{eq:multimode_QFI}. 
\textit{\textbf{Pure states}} Combining Eq.~\eqref{eq:limit_formula}, the regularization procedure~\eqref{eq:regularization_procedure}, and $A^2({\boldsymbol{\epsilon}})=I$ (which holds for pure states because for them $\lambda_k({\boldsymbol{\epsilon}})=1$ for all $k$), we obtain the well-known result for pure states~\cite{Pinel2012a},
\begin{equation}\label{eq:pure_non-elegant}
H^{ij}({\boldsymbol{\epsilon}})=\frac{1}{4}\mathrm{tr}[\sigma^{-1}\partial_i\sigma\sigma^{-1}\partial_j\sigma]+2\partial_i\boldsymbol{d}^\dag\sigma^{-1}\partial_j\boldsymbol{d}.
\end{equation}
It is important to stress that although this expression is defined for any state, it can be applied only to states that are pure at the point ${\boldsymbol{\epsilon}}$. If the state becomes mixed when ${\boldsymbol{\epsilon}}$ is slightly varied, i.e., when $\partial_i\partial_j\lambda_k({\boldsymbol{\epsilon}})\neq 0$ for some $k$ (see Appendix~\ref{app:pops} and Ref.~\cite{vsafranek2017discontinuities}), the QFIM at this varied parameter, $H({\boldsymbol{\epsilon}}+{\mathrm{d}\!\!\;\boldsymbol{\epsilon}})$, has to be calculated using some other formula (for example, Eqs.~\eqref{eq:mixed_QFI}, \eqref{eq:multimode_QFI}, or \eqref{eq:any_state_QFIM}), and one finds that in that case the function $H^{ij}$ is discontinuous at the point ${\boldsymbol{\epsilon}}$.
To obtain the cQFIM for states that are pure at the point ${\boldsymbol{\epsilon}}$, i.e., $A^2({\boldsymbol{\epsilon}})=I$, we can use the expression
\begin{equation}\label{eq:pure_elegant}
\begin{split}
H_c^{ij}({\boldsymbol{\epsilon}})&=\frac{1}{4}\big(2\mathrm{tr}[\sigma^{-1}\partial_i\partial_j\sigma]-\mathrm{tr}[\sigma^{-1}\partial_i\sigma\sigma^{-1}\partial_j\sigma]\big)\\
&+2\partial_i\boldsymbol{d}^\dag\sigma^{-1}\partial_j\boldsymbol{d}.
\end{split}
\end{equation}
To prove this expression, one needs to utilize the Williamson's decomposition of the covariance matrix, find~$\mathrm{tr}[\sigma^{-1}\partial_i\partial_j\sigma]$ and $\mathrm{tr}[\sigma^{-1}\partial_i\sigma\sigma^{-1}\partial_j\sigma]$ in terms of the matrices $K$, $P_i=S^{-1}\partial_i S$, and $P_{ij}:=S^{-1}\partial_i\partial_jS$ (which gives expressions similar to Eq.~\eqref{eq:expression_in_terms_of_P}), and use Eqs.~\eqref{def:structure_of_S},~\eqref{def:P_1}, and $P_{ij}K+P_iKP_j^\dag+P_jKP_i^\dag+KP_{ji}^\dag=0$. Applying this procedure to both Eq.~\eqref{eq:pure_non-elegant} and Eq.~\eqref{eq:pure_elegant}, we find that Eq.~\eqref{eq:pure_elegant} gives the same expression as Eq.~\eqref{eq:pure_non-elegant}, plus an additional term given by the second part of Eq.~\eqref{eq:QFIM_cQFIM_relation}. This proves that Eq.~\eqref{eq:pure_elegant} represents the cQFIM for pure states. Note that Eqs.~\eqref{eq:pure_non-elegant} and~\eqref{eq:pure_elegant} can be further simplified by using $\sigma^{-1}=K\sigma K$ and $\partial_i AA=-A\partial_i A$, which follows from $A^2({\boldsymbol{\epsilon}})=I$. Finally, we derive the symmetric logarithmic derivatives for pure states,
\begin{equation}\label{eq:sld_pure}
\hat{\mathscr{L}}_i({\boldsymbol{\epsilon}})=\frac{1}{2}{\Delta\boldsymbol{\hat{A}}}^\dag\!\sigma^{-1}\!
\partial_i\sigma \sigma^{-1}{\Delta\boldsymbol{\hat{A}}}+2{\Delta\boldsymbol{\hat{A}}}^\dag\!\sigma^{-1}\partial_i{\boldsymbol{d}},
\end{equation}
and
\begin{equation}\label{eq:pure_state_condition}
\mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j]]\!=\!\frac{1}{4}\mathrm{tr}[K\!\sigma[K\partial_i\sigma,K\partial_j\sigma]]
+4\partial_i\boldsymbol{d}^\dag\!\sigma^{-1}\!K\!\sigma^{-1}\!\partial_j\boldsymbol{d}.
\end{equation}
The derivation can be found in Appendix~\ref{app:pure_state_condition}.
\section{Examples}
Here we illustrate the derived formulas on several examples. As shown in~\cite{safranek2016optimal}, Gaussian unitary operations, which are generated via an exponential map with an exponent at most quadratic in the field operators~\cite{Weedbrook2012a}, can be parameterized by a matrix $W$ and a vector $\boldsymbol{\alpha}$ as
\begin{equation}\label{def:Gaussian_unitary}
\hat{U}=\exp\big(\tfrac{i}{2}{\boldsymbol{\hat{A}}}^\dag W {\boldsymbol{\hat{A}}}+{\boldsymbol{\hat{A}}}^\dag K \boldsymbol{\alpha}\big).
\end{equation}
In the case $W=0$, this operator corresponds to the Weyl displacement operator, while for $\boldsymbol{\alpha}=0$ we obtain purely quadratic transformations such as the phase-changing operator, one- and two-mode squeezing operators, or mode-mixing operators, depending on the particular structure of $W$. The first and the second moments of the transformed density matrix $\hat{\rho}'=\hat{U}\hat{\rho}\hat{U}^\dag$ are computed as
\begin{equation}\label{def:transformation}
\boldsymbol{d}'=S\boldsymbol{d}+\boldsymbol{b},\ \ \sigma'=S\sigma S^\dag,
\end{equation}
where the symplectic matrix and the displacement are given by
\begin{equation}\label{eq:S_and_b}
S=e^{iKW},\ \ \boldsymbol{b}=\Big(\!\int_0^1e^{iKWt}\mathrm{d}t\!\Big)\ \!\boldsymbol{\alpha}.
\end{equation}
The states in the following examples are generated using the above transformations, usually applied on a thermal state.
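As an illustration of the exponential map in Eq.~\eqref{eq:S_and_b}, the following NumPy sketch (the Hermitian generator $W$ below is chosen by hand so that $e^{iKW}$ reproduces the one-mode squeezer of Appendix~\ref{sec:The_phase_space_Gaussian_unitaries}; the symplectic condition $SKS^\dag=K$ is the standard complex-form one, not displayed above) verifies that the exponential map produces a symplectic matrix:

```python
import numpy as np

r = 0.7
K = np.diag([1.0, -1.0])
W = np.array([[0, 1j * r], [-1j * r, 0]])    # assumed Hermitian generator of a squeezer

G = (1j * K @ W).real                        # i K W = [[0, -r], [-r, 0]], real symmetric here
w, v = np.linalg.eigh(G)
S = v @ np.diag(np.exp(w)) @ v.T             # S = exp(i K W) via eigendecomposition

expected = np.array([[np.cosh(r), -np.sinh(r)],
                     [-np.sinh(r), np.cosh(r)]])
assert np.allclose(S, expected)              # the one-mode squeezer S(r), chi = 0
assert np.allclose(S @ K @ S.conj().T, K)    # complex-form symplectic condition S K S^dag = K
```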
Their form in the phase-space formalism is explicitly computed in Appendix~\ref{sec:The_phase_space_Gaussian_unitaries}, or in more detail in Ref.~\cite{safranek2016optimal} and in Chapter II~of Ref.~\cite{vsafranek2016gaussian}. Although every formula demonstrated here can be used to calculate the QFIM for Gaussian states of any number of modes, for simplicity we choose only single- and two-mode states. However, we point out that for single- and two-mode Gaussian states it is often better to use expressions tailored to these specific numbers of modes~\cite{Pinel2013b,Safranek2015b}. \textit{\textbf{Mixed states}} Let us consider the estimation of a squeezing parameter $r$ and an inverse temperature $\beta$ from a squeezed thermal state, $\hat{\rho}=\hat{S}(r)\hat{\rho}_{\mathrm{th}}(\beta)\hat{S}^\dag(r)$, where $\hat{S}(r)$ denotes the squeezing operator and $\hat{\rho}_{\mathrm{th}}(\beta)=\frac{1}{Z}\exp(-\beta\hat{n})$ is the thermal state. $Z$ denotes the partition function and $\hat{n}$ denotes the number operator. The final state can be expressed via the first and the second moment as
\begin{equation}\label{eq:first_example_moments}
\boldsymbol{d}=\begin{pmatrix}
0 \\
0 \\
\end{pmatrix}
, \quad
\sigma=\lambda\begin{pmatrix}
\cosh 2r & -\sinh 2r \\
-\sinh 2r & \cosh 2r
\end{pmatrix},
\end{equation}
where $\lambda=\coth\tfrac{\beta}{2}$.
We compute
\begin{widetext}
\begin{gather}
\mathfrak{M}^{-1}=\frac{\lambda^2}{2(\lambda^4-1)}\begin{pmatrix}
\cosh 4r+1+\tfrac{2}{\lambda^2} & \sinh 4r & \sinh 4r & \cosh 4r-1 \\
\sinh 4r & \cosh 4r+1-\tfrac{2}{\lambda^2} & \cosh 4r-1 & \sinh 4r \\
\sinh 4r & \cosh 4r-1& \cosh 4r+1-\tfrac{2}{\lambda^2} & \sinh 4r \\
\cosh 4r-1 & \sinh 4r & \sinh 4r & \cosh 4r+1+\tfrac{2}{\lambda^2} \\
\end{pmatrix},\\
\vectorization{\partial_\beta\sigma}=\frac{\lambda^2-1}{2}\begin{pmatrix}
-\cosh 2r \\
\sinh 2r \\
\sinh 2r \\
-\cosh 2r \\
\end{pmatrix}, \quad
\vectorization{\partial_r\sigma}=2\lambda\begin{pmatrix}
\sinh 2r \\
-\cosh 2r \\
-\cosh 2r \\
\sinh 2r \\
\end{pmatrix}.\nonumber
\end{gather}
The QFIM is calculated from Eq.~\eqref{eq:mixed_QFI},
\begin{equation}\label{eq:example1exact}
H(\beta,r)=\frac{1}{2}\begin{pmatrix}
\vectorization{\partial_\beta\sigma}^\dag\mathfrak{M}^{-1}\vectorization{\partial_\beta\sigma} &\vectorization{\partial_\beta\sigma}^\dag\mathfrak{M}^{-1}\vectorization{\partial_r\sigma} \\
\vectorization{\partial_r\sigma}^\dag\mathfrak{M}^{-1}\vectorization{\partial_\beta\sigma} & \vectorization{\partial_r\sigma}^\dag\mathfrak{M}^{-1}\vectorization{\partial_r\sigma} \\
\end{pmatrix}=
\begin{pmatrix}
\frac{\lambda^2-1}{4} & 0 \\
0 & \frac{4\lambda^2}{\lambda^2+1} \\
\end{pmatrix}.
\end{equation}
\end{widetext}
From Eq.~\eqref{eq:condition_QCR} we derive $\mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j]]=0$ for all $i,j=\beta,r$, which according to Eq.~\eqref{eq:condition_for_CRbound} means that the quantum Cram\'er-Rao bound is achievable.
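This closed form is easy to verify numerically; the sketch below (NumPy, not part of the original text; column-stacking vectorization is assumed, so that $(C^T\otimes A)\vectorization{B}=\vectorization{ABC}$ holds with the standard Kronecker product) evaluates Eq.~\eqref{eq:mixed_QFI} directly:

```python
import numpy as np

lam, r = 2.0, 1.0                      # lambda = coth(beta/2) and squeezing r
c, s = np.cosh(2 * r), np.sinh(2 * r)
sigma = lam * np.array([[c, -s], [-s, c]])
K = np.diag([1.0, -1.0])
dsig = {"beta": (lam**2 - 1) / 2 * np.array([[-c, s], [s, -c]]),
        "r":    2 * lam * np.array([[s, -c], [-c, s]])}

M = np.kron(sigma.conj(), sigma) - np.kron(K, K)
vec = lambda X: X.flatten(order="F")

# H^{ij} = (1/2) vec(d_i sigma)^dag M^{-1} vec(d_j sigma); the displacement term vanishes (d = 0)
H = np.array([[0.5 * vec(dsig[i]).conj() @ np.linalg.solve(M, vec(dsig[j]))
               for j in ("beta", "r")] for i in ("beta", "r")]).real

assert np.allclose(H, np.diag([(lam**2 - 1) / 4, 4 * lam**2 / (lam**2 + 1)]))
```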
\textit{\textbf{When the Williamson's decomposition of the covariance matrix is known}} Let the initial state be a coherent state $\ket{\alpha}$, which is given by
\begin{equation}
\boldsymbol{d}_0=\begin{pmatrix}
\alpha \\
\ov{\alpha} \\
\end{pmatrix}
, \quad
\sigma_0=\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix},
\end{equation}
and we use this state to probe a squeezing channel followed by a phase change, which transforms the state to $\hat{\rho}=\hat{R}(\theta)\hat{S}(r)\pro{\alpha}{\alpha}\hat{S}^\dag(r)\hat{R}^\dag(\theta)$. The action of this channel corresponds to the symplectic matrix
\begin{equation}
S(r,\theta)=R(\theta)S(r)=\begin{pmatrix}
e^{- i \theta}\cosh r & -e^{- i \theta}\sinh r \\
-e^{ i \theta}\sinh r & e^{ i \theta}\cosh r
\end{pmatrix},
\end{equation}
which leads to the moments of $\hat{\rho}$,
\begin{equation}
\boldsymbol{d}=S(r,\theta)\boldsymbol{d}_0 , \quad \sigma=S(r,\theta)\sigma_0S^\dag(r,\theta).
\end{equation}
Since $\sigma$ is already written in the form of its symplectic decomposition, we can immediately use Eq.~\eqref{eq:multimode_QFI} to compute the QFIM. We compute $P_r=S^{-1}(r,\theta)\partial_r{S}(r,\theta)$ and $P_\theta=S^{-1}(r,\theta)\partial_\theta{S}(r,\theta)$, from which we obtain
\begin{equation}
R_r=0,\ \, Q_r=-1,\ \, R_\theta=-i \cosh 2r,\ \, Q_\theta=i \sinh 2r.
\end{equation}
Moreover,
\begin{align}
\sigma^{-1}&=\begin{pmatrix}
\cosh 2r & e^{-2 i \theta}\sinh 2r \\
e^{2 i \theta}\sinh 2r & \cosh 2r
\end{pmatrix},\nonumber\\
\partial_r\boldsymbol{d}&=\begin{pmatrix}
e^{- i \theta}(-\ov{\alpha}\cosh r+\alpha \sinh r )\\
e^{ i \theta}(-\alpha\cosh r+\ov{\alpha} \sinh r )\\
\end{pmatrix},\\
\partial_\theta\boldsymbol{d}&=\begin{pmatrix}
-ie^{- i \theta}(\alpha\cosh r-\ov{\alpha} \sinh r )\\
ie^{ i \theta}(\ov{\alpha}\cosh r-\alpha \sinh r )\\
\end{pmatrix}.\nonumber
\end{align}
Since the symplectic eigenvalue $\lambda=1$, according to Eq.~\eqref{def:problematic_points} the first and the third terms in the sum, Eq.~\eqref{eq:multimode_QFI}, vanish, which yields
\begin{equation}
H^{ij}({\boldsymbol{\epsilon}})=2\mathrm{Re}[\ov{Q_{i}}Q_{j}]
+2\partial_i\boldsymbol{d}^\dag\sigma^{-1}\partial_j\boldsymbol{d},
\end{equation}
from which we compute the QFIM as
\begin{widetext}
\begin{equation}
H(r,\theta)=
\begin{pmatrix}
2+4\abs{\alpha}^2 & -4\mathrm{Im}[\alpha^2]\cosh 2r \\
-4\mathrm{Im}[\alpha^2]\cosh 2r & 2\sinh^2 2r\!+\!4e^{4r}\mathrm{Im}[\alpha]^2\!+\!4e^{-4r}\mathrm{Re}[\alpha]^2 \\
\end{pmatrix}.
\end{equation}
\end{widetext}
Further, applying $\lambda=1$ to Eq.~\eqref{eq:multimode_condition} we derive
\begin{equation}
\mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j]]=4i\mathrm{Im}[\ov{Q_{i}}Q_{j}]+4\partial_i\boldsymbol{d}^\dag\sigma^{-1}K\sigma^{-1}\partial_j\boldsymbol{d},
\end{equation}
which yields $\mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_r,\hat{\mathscr{L}}_r]]=\mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_\theta,\hat{\mathscr{L}}_\theta]]=0$, and
\begin{equation}
\mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_r,\hat{\mathscr{L}}_\theta]]=4i(-\sinh 2r+2e^{2r}\mathrm{Im}[\alpha]^2-2e^{-2r}\mathrm{Re}[\alpha]^2),
\end{equation}
which means that the quantum Cram\'er-Rao bound is not in general achievable for the simultaneous estimation of $r$ and $\theta$ encoded into a coherent state.
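The matrix above can be cross-checked against the pure-state formula, Eq.~\eqref{eq:pure_non-elegant}; the NumPy sketch below does so with central finite-difference derivatives (the test point $\alpha$, $r$, $\theta$ is arbitrary):

```python
import numpy as np

alpha, r0, t0 = 0.3 + 0.4j, 0.5, 0.7          # arbitrary test point (alpha, r, theta)

def S(r, t):                                   # S(r, theta) = R(theta) S(r)
    return np.array([[np.exp(-1j*t)*np.cosh(r), -np.exp(-1j*t)*np.sinh(r)],
                     [-np.exp(1j*t)*np.sinh(r),  np.exp(1j*t)*np.cosh(r)]])

def moments(r, t):                             # d = S d0, sigma = S sigma0 S^dag, sigma0 = I
    d0 = np.array([alpha, np.conj(alpha)])
    return S(r, t) @ d0, S(r, t) @ S(r, t).conj().T

def qfim(r, t, h=1e-6):                        # pure-state formula with central differences
    d, sig = moments(r, t)
    si = np.linalg.inv(sig)
    dd, ds = [], []
    for dr, dt in ((h, 0), (0, h)):
        dp, sp = moments(r + dr, t + dt)
        dm, sm = moments(r - dr, t - dt)
        dd.append((dp - dm) / (2 * h))
        ds.append((sp - sm) / (2 * h))
    return np.array([[0.25 * np.trace(si @ ds[i] @ si @ ds[j]) + 2 * dd[i].conj() @ si @ dd[j]
                      for j in (0, 1)] for i in (0, 1)]).real

c2, ar, ai = np.cosh(2 * r0), alpha.real, alpha.imag
exact = np.array([[2 + 4 * abs(alpha)**2,      -4 * (alpha**2).imag * c2],
                  [-4 * (alpha**2).imag * c2,
                   2 * np.sinh(2 * r0)**2 + 4 * np.exp(4 * r0) * ai**2 + 4 * np.exp(-4 * r0) * ar**2]])
assert np.allclose(qfim(r0, t0), exact, atol=1e-4)
```

The closed form is independent of $\theta$, which the finite-difference check confirms at any chosen test point.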
\textit{\textbf{Limit formula}} Here we use Eqs.~\eqref{eq:limit_formula} and~\eqref{eq:remainder} to numerically estimate the QFIM of the Gaussian state from the first example, and then compare it with the analytical result. From the first and the second moments, Eq.~\eqref{eq:first_example_moments}, we calculate
\begin{equation}
\begin{split}
A&=K\sigma=\lambda\begin{pmatrix}
\cosh 2r & -\sinh 2r \\
\sinh 2r & -\cosh 2r
\end{pmatrix},\\
A^{-1}&=\frac{1}{\lambda}\begin{pmatrix}
\cosh 2r & -\sinh 2r \\
\sinh 2r & -\cosh 2r
\end{pmatrix},\\
\partial_\beta A&=\frac{\lambda^2-1}{2}\begin{pmatrix}
-\cosh 2r & \sinh 2r \\
-\sinh 2r & \cosh 2r
\end{pmatrix},\\
\partial_r A&=2\lambda\begin{pmatrix}
\sinh 2r & -\cosh 2r \\
\cosh 2r & -\sinh 2r
\end{pmatrix}.
\end{split}
\end{equation}
In order to calculate the QFIM to $n$ decimal places, we require $M$ to be such that
\begin{equation}
|R_M^{\beta\beta}|< \frac{1}{10^n},\quad |R_M^{\beta r}|< \frac{1}{10^n},\quad|R_M^{rr}|< \frac{1}{10^n},
\end{equation}
which, using Eq.~\eqref{eq:remainder}, leads to
\begin{equation}
M>\frac{n+\log_{10}\frac{\max_{i\in\{\beta,r\}}\mathrm{tr}[(A\partial_iA)^2]}{2(\lambda_{\mathrm{min}}^2-1)}}{2\log_{10}\lambda_{\min}}-1.
\end{equation}
To calculate the QFIM for $\lambda=2$ and $r=1$ to two decimal places of precision, we insert $n=2$. In our example $\lambda_{\mathrm{min}}=\lambda$, which gives
\begin{equation}
M>4.529,
\end{equation}
meaning that we need $M=5$ terms in the sum~\eqref{eq:limit_formula}. Summing these terms, we obtain an estimate of the QFIM,
\begin{equation}
H(\beta,r)\approx
\begin{pmatrix}
0.749268 & 0 \\
0 & 3.20313 \\
\end{pmatrix}.
\end{equation}
Comparing this with the analytical result calculated by inserting the values $\lambda=2$ and $r=1$ into Eq.~\eqref{eq:example1exact},
\begin{equation}
H(\beta,r)=
\begin{pmatrix}
0.75 & 0 \\
0 & 3.2 \\
\end{pmatrix},
\end{equation}
shows that we are within the limit of two decimal places of precision.
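The same computation can be scripted directly (a NumPy sketch, not part of the original text; the matrices are those displayed above):

```python
import numpy as np

lam, r = 2.0, 1.0
c, s = np.cosh(2 * r), np.sinh(2 * r)
A    = lam * np.array([[c, -s], [s, -c]])
Ainv = (1 / lam) * np.array([[c, -s], [s, -c]])
dA = {"beta": (lam**2 - 1) / 2 * np.array([[-c, s], [-s, c]]),
      "r":    2 * lam * np.array([[s, -c], [c, -s]])}

# number of terms needed for n = 2 decimal places, from the bound on the remainder R_M
n = 2
top = max(np.trace((A @ dA[i]) @ (A @ dA[i])) for i in ("beta", "r"))
M = int(np.ceil((n + np.log10(top / (2 * (lam**2 - 1)))) / (2 * np.log10(lam)) - 1))
assert M == 5

# truncated series: H^{ij} ~ (1/2) sum_{k=1}^{M} tr[A^{-k} d_iA A^{-k} d_jA]
def H(i, j):
    return 0.5 * sum(np.trace(np.linalg.matrix_power(Ainv, k) @ dA[i]
                              @ np.linalg.matrix_power(Ainv, k) @ dA[j])
                     for k in range(1, M + 1))

approx = np.array([[H("beta", "beta"), H("beta", "r")],
                   [H("r", "beta"),    H("r", "r")]])
exact  = np.diag([(lam**2 - 1) / 4, 4 * lam**2 / (lam**2 + 1)])
assert np.abs(approx - exact).max() < 10.0**(-n)   # within two decimal places, as guaranteed
```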
\textit{\textbf{Pure states and discontinuity of QFIM}} Here we show the difference between the QFIM and the cQFIM, and how it is connected to the discontinuity of the quantum Fisher information. We consider the task of estimating the squeezing parameter from the two-mode squeezed vacuum, $\hat{\rho}=\hat{S}_T(r)\pro{0}{0}\hat{S}_T^\dag(r)$, given by the moments
\begin{equation}\label{eq:full_state}
\boldsymbol{d}=\begin{pmatrix}
0 \\
0 \\
0 \\
0 \\
\end{pmatrix}
, \quad
\sigma=\begin{pmatrix}
\cosh 2r & 0 & 0 & -\sinh 2r \\
0 & \cosh 2r & -\sinh 2r & 0 \\
0 & -\sinh 2r & \cosh 2r & 0 \\
-\sinh 2r & 0 & 0 & \cosh 2r
\end{pmatrix}.
\end{equation}
Since this state is pure, we can use Eq.~\eqref{eq:pure_non-elegant} to compute the QFIM, and Eq.~\eqref{eq:pure_elegant} to compute the cQFIM. Variation of the parameter $r$ does not change the purity of the state, i.e., both symplectic eigenvalues $\lambda_1=\lambda_2=1$ for all $r$; thus, according to Eq.~\eqref{eq:QFIM_cQFIM_relation}, we should find $H=H_c$. Indeed, we have
\begin{equation}
H(r)=H_c(r)=4.
\end{equation}
Now, let us say that the experimenter does not have access to the second mode, so they have to trace over it. In the phase-space formalism, this is done simply by taking out the rows and columns representing that mode~\cite{Weedbrook2012a} (in our notation, the second and the fourth row and column), resulting in the state
\begin{equation}\label{eq:reduced_state}
\boldsymbol{d}_1=\begin{pmatrix}
0 \\
0 \\
\end{pmatrix}
, \quad
\sigma_1=\begin{pmatrix}
\cosh 2r & 0 \\
0 & \cosh 2r
\end{pmatrix}.
\end{equation}
For $r=0$ the state is pure, so we can again use Eqs.~\eqref{eq:pure_non-elegant} and~\eqref{eq:pure_elegant} to compute the QFIM and the cQFIM. However, now we find
\begin{equation}\label{eq:H_and_Hc}
H(0)=0,\quad H_c(0)=4.
\end{equation}
For $r>0$ the state is mixed with $\lambda_1=\cosh 2r$; according to Eq.~\eqref{eq:QFIM_cQFIM_relation}, $H=H_c$, and we can use any formula for mixed states (such as Eq.~\eqref{eq:multimode_QFI}) to compute the QFIM and the cQFIM.
Combined with Eq.~\eqref{eq:H_and_Hc}, we find
\begin{equation}
H(r)=\begin{cases}
0, & r=0, \\
4, & \mathrm{otherwise},
\end{cases}\quad \quad H_c(r)=4.
\end{equation}
Clearly, the QFIM is discontinuous at the point $r=0$, as expected from the theory~\cite{vsafranek2017discontinuities}. Intuitively, this can be explained as follows: the QFIM measures how well the parameter $dr$ can be identified from the state $\hat{\rho}_{r+dr}$. As we can see from Eq.~\eqref{eq:reduced_state}, if $r=0$, then the states $\hat{\rho}_{+dr}$ and $\hat{\rho}_{-dr}$ correspond to the same density matrix, $\hat{\rho}_{+dr}=\hat{\rho}_{-dr}$. Therefore, $dr$ is not identifiable around the point $r=0$, because there is no physical experiment that the experimenter could perform on the system to distinguish the parameter $-dr$ from the parameter $+dr$. It is therefore reasonable to expect that the QFIM, which measures the ability to estimate $dr$, is zero at the point $r=0$. The experimenter does not have the same problem when they have access to the full state, Eq.~\eqref{eq:full_state}, because there $\hat{\rho}_{+dr}\neq\hat{\rho}_{-dr}$, so $dr$ is identifiable at the point $r=0$. On the other hand, the cQFIM, which is defined as four times the Bures metric, measures the infinitesimal distance between the states $\hat{\rho}_{r}$ and $\hat{\rho}_{r+dr}$. This distance is always positive, no matter what $r$ is. Identifiability does not play any role in this measure of distance. It is therefore not surprising that, in this case, the cQFIM is a continuous function of $r$.
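The discontinuity is simple to reproduce numerically; the NumPy sketch below (an illustration, not part of the original text) applies the mixed-state formula, Eq.~\eqref{eq:mixed_QFI}, to the reduced state for $r>0$, and the pure-state formulas, Eqs.~\eqref{eq:pure_non-elegant} and~\eqref{eq:pure_elegant}, at $r=0$:

```python
import numpy as np

K, I = np.diag([1.0, -1.0]), np.eye(2)
sig  = lambda r: np.cosh(2 * r) * I        # reduced state sigma_1
dsig = lambda r: 2 * np.sinh(2 * r) * I    # d(sigma_1)/dr
vec  = lambda X: X.flatten(order="F")

def H_mixed(r):                            # mixed-state formula, valid for r > 0
    M = np.kron(sig(r), sig(r)) - np.kron(K, K)
    return 0.5 * vec(dsig(r)) @ np.linalg.solve(M, vec(dsig(r)))

for r in (0.1, 0.5, 2.0):
    assert abs(H_mixed(r) - 4) < 1e-9      # H(r) = 4 for every r > 0

# at r = 0 the state is pure: QFIM and cQFIM from the pure-state formulas
si = np.linalg.inv(sig(0.0))
ds, dds = dsig(0.0), 4 * np.cosh(0.0) * I  # first and second derivatives at r = 0
H0  = 0.25 * np.trace(si @ ds @ si @ ds)
Hc0 = 0.25 * (2 * np.trace(si @ dds) - np.trace(si @ ds @ si @ ds))
assert H0 == 0 and Hc0 == 4                # H jumps at r = 0 while H_c stays continuous
```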
\section{Conclusion}
In this paper we derived several expressions for the quantum Fisher information matrix (Eqs.~(\ref{eq:mixed_QFI},\ref{eq:multimode_QFI},\ref{eq:limit_formula},\ref{eq:pure_non-elegant})) for the multi-parameter estimation of multi-mode Gaussian states, the associated symmetric logarithmic derivatives (Eqs.~(\ref{eq:SLD_mixed},\ref{eq:SLD_mixed_williamson},\ref{eq:sld_pure})), and expressions that determine the saturability of the quantum Cram\'er-Rao bound (Eqs.~(\ref{eq:condition_QCR},\ref{eq:multimode_condition},\ref{eq:pure_state_condition})). We then illustrated their use on several examples. As our main results, we consider the expression for the QFIM when the Williamson's decomposition of the covariance matrix is known, Eq.~\eqref{eq:multimode_QFI}, which can be used, for example, for finding optimal Gaussian probe states for Gaussian unitary channels; the limit formula together with the estimate of the remainder, Eqs.~\eqref{eq:limit_formula} and \eqref{eq:remainder}, which can be used for efficient numerical calculations to any given precision; and the expressions for the SLDs, Eqs.~(\ref{eq:SLD_mixed},\ref{eq:SLD_mixed_williamson},\ref{eq:sld_pure}), which can be studied to provide the optimal measurement schemes. In addition, we discussed and resolved the problematic behavior of the QFIM at the points of purity, and we devised a regularization procedure (Eqs.~(\ref{eq:any_state_QFIM},\ref{eq:SLD_regularization},\ref{eq:condition_QCRregularization_procedure2})) that allows one to use the expressions for mixed states to calculate quantities for Gaussian states with any number of pure modes. Altogether, we provided a useful set of tools for Gaussian quantum metrology.
\textit{\textbf{Acknowledgements}} I thank Tanja Fabsits and Karishma Hathlia for reading the first version of the manuscript and for useful feedback. This research was supported by the Foundational Questions Institute (FQXi.org).
\appendix
\section{Derivation of the formula for mixed states, the real form, and the symmetric logarithmic derivative}\label{app:mixed_state}
Here we use the general result of Ref.~\cite{Gao2014a} to derive Eq.~\eqref{eq:mixed_QFI}. According to Ref.~\cite{Gao2014a}, using Einstein's summation convention, the quantum Fisher information for an $N$-mode Gaussian state can be calculated as
\begin{equation}\label{eq:Gao_unpolished}
H^{ij}({\boldsymbol{\epsilon}})=\frac{1}{2}(\mathfrak{M}_G^{-1})^{\alpha\beta,\mu\nu}\partial_j\Sigma^{\alpha\beta}\partial_i\Sigma^{\mu\nu}+(\Sigma^{-1})^{\mu\nu}\partial_j\boldsymbol{\lambda}^\mu\partial_i\boldsymbol{\lambda}^\nu.
\end{equation}
The displacement vector and the covariance matrix are defined as $\boldsymbol{\lambda}^m=\mathrm{tr}\big[\hat{\rho}\boldsymbol{\hat{A}}_{G}^m\big]$ and $\Sigma^{mn}=\mathrm{tr}\big[\hat{\rho}\,\{(\boldsymbol{\hat{A}}_G-\boldsymbol{\lambda})^m,(\boldsymbol{\hat{A}}_G-\boldsymbol{\lambda})^n\}\big]$, with ${\boldsymbol{\hat{A}}}_G=(\hat{a}_1,\hat{a}_1^\dag,\dots,\hat{a}_N,\hat{a}_N^\dag)^T$, and the symplectic form is given by $[\boldsymbol{\hat{A}}_{G}^m,\boldsymbol{\hat{A}}_{G}^{n}]=:\Omega^{mn}\mathrm{id}$. The inverse of the tensor $\mathfrak{M}_G^{\alpha\beta,\mu\nu}=\Sigma^{\alpha\mu}\Sigma^{\beta\nu}+\frac{1}{4}\Omega^{\alpha\mu}\Omega^{\beta\nu}$ is defined via
\begin{equation}\label{eq:inverse_matrix}
(\mathfrak{M}_G^{-1})^{\tilde{\mu}\tilde{\nu},\alpha\beta}\mathfrak{M}_G^{\alpha\beta,\mu\nu}=\delta^{\tilde{\mu}\mu}\delta^{\tilde{\nu}\nu},
\end{equation}
where $\delta^{\tilde{\mu}\mu}$ denotes the Kronecker delta.
Considering the above definition, we can find a matrix form of Eq.~\eqref{eq:Gao_unpolished},
\begin{equation}\label{eq:Gao_polished}
H^{ij}({\boldsymbol{\epsilon}})=\frac{1}{2}\vectorization{\partial_i\Sigma}^T\mathfrak{M}_G^{-1}\vectorization{\partial_j\Sigma}+\partial_i\boldsymbol{\lambda}^T\Sigma^{-1}\partial_j\boldsymbol{\lambda},
\end{equation}
where $\mathfrak{M}_G=\Sigma\otimes\Sigma+\frac{1}{4}\Omega\otimes\Omega$, $\otimes$ denotes the Kronecker product, and $\vectorization{\cdot}$ is the vectorization of a matrix. To obtain the result in our notation, we need to consider the transformation relations
\begin{equation}
\begin{split}
\sigma&=2P\Sigma XP^T,\\
K&=P\Omega XP^T,\\
\boldsymbol{d}&=P\boldsymbol{\lambda},
\end{split}
\end{equation}
where $X=\bigoplus_{i=1}^N\sigma_x$ ($X$ is real and $X^2=I$) and $P$ is a certain permutation matrix ($P$ is real and $PP^T=I$). Using the properties
\begin{equation}
\begin{split}
X\Sigma X&=\ov{\Sigma},\\
X\Omega X&=-\Omega,
\end{split}
\end{equation}
the fact that $\Omega$ is real, and the identities
\begin{equation}\label{id:Kronecker_product_ids}
\begin{split}
(ABC)\otimes(A'B'C')&=(A\otimes A')(B\otimes B')(C\otimes C'),\\
(C^T\otimes A)\vectorization{B}&=\vectorization{ABC},
\end{split}
\end{equation}
we derive
\begin{equation}\label{eq:app_mixed_states}
H^{ij}({\boldsymbol{\epsilon}})=\frac{1}{2}\vectorization{\partial_i\sigma}^\dag\mathfrak{M}^{-1}\vectorization{\partial_j\sigma}+2\partial_i\boldsymbol{d}^\dag\sigma^{-1}\partial_j\boldsymbol{d},
\end{equation}
where $\mathfrak{M}=\ov{\sigma}\otimes\sigma-K\otimes K$.
Some authors also use the real form, which is defined with respect to the collection of quadrature operators $\boldsymbol{\hat{Q}}=\{\hat{x}_{1},\hat{x}_{2},\ldots,\hat{p}_{1},\hat{p}_{2},\ldots\}$ as $\boldsymbol{d}_{R}^m=\mathrm{tr}\big[\hat{\rho}\boldsymbol{\hat{Q}}^m\big]$, $\sigma_{R}^{mn}=\mathrm{tr}\big[\hat{\rho}\{\Delta\boldsymbol{\hat{Q}}^m,\Delta\boldsymbol{\hat{Q}}^n\}\big]$, $\Delta\boldsymbol{\hat{Q}}:=\boldsymbol{\hat{Q}}-\boldsymbol{d}_{R}$, $[\boldsymbol{\hat{Q}}^{m},\boldsymbol{\hat{Q}}^{n}]=i\,\Omega_R^{mn}\,\mathrm{id}$. Due to $\hat{a}_i=\frac{1}{\sqrt{2}}(\hat{x}_i+i\hat{p}_i)$ and $\hat{a}_i^\dag=\frac{1}{\sqrt{2}}(\hat{x}_i-i\hat{p}_i)$, the real form is connected to the complex form through the unitary matrix
\begin{equation}\label{eq:U}
U=\frac{1}{\sqrt{2}}\,\begin{bmatrix}I & +iI \\ I & -iI\end{bmatrix}
\end{equation}
as
\begin{equation}\label{eq:matrix_transformation}
{\boldsymbol{\hat{A}}}=U\boldsymbol{\hat{Q}},\ \boldsymbol{d}=U\boldsymbol{d}_{R},\ \sigma=U\sigma_RU^\dag,\ \mathrm{and}\ K=Ui\Omega_RU^\dag.
\end{equation}
Assuming the real, $\sigma_R=S_RD_RS_R^T$, and the complex, $\sigma=S D S^\dag$, symplectic decompositions, the diagonal matrix consisting of the symplectic eigenvalues remains the same, $D_R=UDU^\dag=U^\dag DU=D$, which for the symplectic matrices means that $\sigma=U\sigma_RU^\dag=US_RD_RS_R^T U^\dag=US_RU^\dag D U S_R^\dag U^\dag=US_RU^\dag D (US_RU^\dag)^\dag$ (where we have used that in the real form the symplectic matrices are real, $S_R^T=S_R^\dag$), therefore
\begin{equation}\label{eq:symplectic_transformation}
D=D_R,\ S=US_RU^\dag.
\end{equation}
Using these transformation relations, Eqs.~\eqref{id:Kronecker_product_ids} and~\eqref{eq:app_mixed_states}, we derive
\begin{equation}
H^{ij}({\boldsymbol{\epsilon}})=\frac{1}{2}\vectorization{\partial_i\sigma_R}^T\mathfrak{M}_R^{-1}\vectorization{\partial_j\sigma_R}+2\partial_i\boldsymbol{d}_R^T\sigma_R^{-1}\partial_j\boldsymbol{d}_R,
\end{equation}
where $\mathfrak{M}_R=\sigma_R\otimes\sigma_R-\Omega_R\otimes \Omega_R$.
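These transformation relations can be checked directly; the NumPy sketch below (an illustration, not part of the original text) uses a single mode, with the one-mode squeezer at $\chi=0$ from Appendix~\ref{sec:The_phase_space_Gaussian_unitaries} as the test case:

```python
import numpy as np

U  = np.array([[1, 1j], [1, -1j]]) / np.sqrt(2)
Om = np.array([[0.0, 1.0], [-1.0, 0.0]])      # real-form symplectic form, one mode
K  = U @ (1j * Om) @ U.conj().T
assert np.allclose(K, np.diag([1, -1]))       # K = U (i Omega_R) U^dag

r = 0.4
S_R = np.diag([np.exp(-r), np.exp(r)])        # real-form one-mode squeezer, chi = 0
S   = U @ S_R @ U.conj().T
expected = np.array([[np.cosh(r), -np.sinh(r)],
                     [-np.sinh(r), np.cosh(r)]])
assert np.allclose(S, expected)               # S = U S_R U^dag reproduces the complex form
```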
Using a similar approach, we can rewrite the expressions for the symmetric logarithmic derivatives originally published in Ref.~\cite{Gao2014a} in an elegant matrix form,
\begin{equation}\label{eq:L}
\hat{\mathscr{L}}_i={\Delta\boldsymbol{\hat{A}}}^\dag{\mathscr{A}}_i{\Delta\boldsymbol{\hat{A}}}+2{\Delta\boldsymbol{\hat{A}}}^\dag\sigma^{-1}\partial_i{\boldsymbol{d}}-\frac{1}{2}\mathrm{tr}[\sigma{\mathscr{A}}_i],
\end{equation}
where ${\Delta\boldsymbol{\hat{A}}}:={\boldsymbol{\hat{A}}}-\boldsymbol{d}$ and $\vectorization{{\mathscr{A}}_i}:=\mathfrak{M}^{-1}\vectorization{\partial_i\sigma}$. By ${\Delta\boldsymbol{\hat{A}}}^\dag{\mathscr{A}}_i{\Delta\boldsymbol{\hat{A}}}$ we mean $\sum_{m,n}\big({\Delta\boldsymbol{\hat{A}}}^m\big)^\dag{\mathscr{A}}_i^{mn}{\Delta\boldsymbol{\hat{A}}}^n$. ${\mathscr{A}}_i$ is Hermitian and is of the form
\begin{equation}\label{eq:structure_of_A}
{\mathscr{A}}_i=\begin{bmatrix}
{\mathscr{A}}_{Xi} & {\mathscr{A}}_{Yi} \\
\overline{{\mathscr{A}}_{Yi}} & \overline{{\mathscr{A}}_{Xi}}
\end{bmatrix}.
\end{equation}
To show this, from $\vectorization{{\mathscr{A}}_i}=\mathfrak{M}^{-1}\vectorization{\partial_i\sigma}$ we can derive the defining equation for ${\mathscr{A}}_i$,
\begin{equation}
\sigma{\mathscr{A}}_i\sigma-K{\mathscr{A}}_iK=\partial_i\sigma.
\end{equation}
Taking the conjugate transpose of this equation, we find that if ${\mathscr{A}}_i$ solves this equation, then ${\mathscr{A}}_i^\dag$ also solves it. But since ${\mathscr{A}}_i$ is uniquely defined when the matrix $\mathfrak{M}$ is invertible (as $\vectorization{{\mathscr{A}}_i}=\mathfrak{M}^{-1}\vectorization{\partial_i\sigma}$), it must be that ${\mathscr{A}}_i^\dag={\mathscr{A}}_i$, i.e., ${\mathscr{A}}_i$ is Hermitian. The solution of the above equation can also be written as
\begin{equation}\label{eq:cAlimit}
{\mathscr{A}}_i=\sum_{n=1}^\infty(K\sigma)^{-n}K\partial_i\sigma K(\sigma K)^{-n}.
\end{equation}
This shows that ${\mathscr{A}}_i$ is a combination of the matrices $K$, $\sigma^{-1}=KSD^{-1}S^\dag K$, and $\sigma$.
These matrices have the structure of Eq.~\eqref{eq:structure_of_A}, and because this structure is conserved when combining these matrices, also ${\mathscr{A}}_i$ must have the same structure, i.e., Eq.~\eqref{eq:structure_of_A} holds. \section{The phase-space representation of common Gaussian unitaries and Gaussian states}\label{sec:The_phase_space_Gaussian_unitaries} Here we provide a list of complex and real symplectic matrices that represent often-used Gaussian unitaries (denoted with hat), and common Gaussian states. We use definitions introduced above Eq.~\eqref{def:first_and_second_moments} and Eq.~\eqref{eq:U}. The real form is denoted by the lower index $R$. This section is based on Chapter II~in Ref.~\cite{vsafranek2016gaussian} which contains more detailed discussion. The derivation uses Eq.~\eqref{def:transformation} and transformation relations~\eqref{eq:matrix_transformation} and~\eqref{eq:symplectic_transformation}. One-mode operations acting on a multi-mode state (which leave the other modes invariant) are easily lifted into multi-mode operations by adding identities onto suitable places as illustrated on Eq.~\eqref{eq:phase_operator}.\\ \noindent \emph{Rotation/phase-change} $\hat{R}(\theta)=\exp(-i\theta\hat{a}^\dag \hat{a})$, $\hat{R}_1(\theta)=\exp(-i\theta\hat{a}_1^\dag \hat{a}_1)$, is represented by the complex and real symplectic matrices \begin{flalign}\label{eq:phase_operator} R(\theta)&=\begin{bmatrix} e^{-i\theta} & 0 \\ 0 & e^{i\theta} \end{bmatrix}, \quad R_R(\theta)=\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix},&&\\ \quad R_1(\theta)&\!=\!\begin{bmatrix} e^{-i\theta} & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & e^{i\theta} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \ R_{1R}(\theta)\!=\!\begin{bmatrix} \cos\!\theta & 0 & \sin\!\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\!\theta & 0 & \cos\!\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.&&\nonumber \end{flalign} \emph{One-mode squeezing} 
$\hat{S}(r,\chi)=\exp(-\frac{r}{2}(e^{i\chi}\hat{a}^{\dag2}-e^{-i\chi}\hat{a}^{2}))$, \begin{flalign}\label{eq:squeezing_operator} S(r,\chi)&=\begin{bmatrix} \cosh r & -e^{i\chi}\sinh r \\ -e^{-i\chi}\sinh r & \cosh r \end{bmatrix},\\ \ S_R(r,\chi)&=\begin{bmatrix} \cosh r-\cos\chi\sinh r & -\sin\chi\sinh r \\ -\sin\chi\sinh r & \cosh r+\cos\chi\sinh r \end{bmatrix}.&&\nonumber \end{flalign} \emph{Mode-mixing} $\hat{B}(\theta,\chi)=\exp(\theta(e^{i\chi}\hat{a}_1^\dag\hat{a}_2-e^{-i\chi}\hat{a}_2^\dag\hat{a}_1))$, \begin{flalign}\label{eq:mode_mixing_operator} B(\theta,\chi)&\!=\!\begin{bmatrix} \cos\theta & e^{i\chi}\sin\theta & 0 & 0 \\ -e^{-i \chi}\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & \cos\theta & e^{-i \chi}\sin\theta \\ 0 & 0 & -e^{i \chi}\sin\theta & \cos\theta \end{bmatrix},&&\nonumber\\ B_R(\theta,\chi)&\!=\!\begin{bmatrix} \cos\!\theta & \cos\!\chi\sin\!\theta & 0 & -\sin\!\chi\sin\!\theta \\ -\cos\chi\sin\!\theta & \cos\!\theta & -\sin\!\chi\sin\!\theta & 0 \\ 0 & \sin\!\chi\sin\!\theta & \cos\!\theta & \cos\!\chi\sin\!\theta\\ \sin\!\chi\sin\!\theta & 0 & -\cos\!\chi\sin\!\theta & \cos\!\theta \end{bmatrix}.&& \end{flalign} \emph{Two-mode squeezing} $\hat{S}_{T}(r,\chi)=\exp(-r(e^{i\chi}\hat{a}_1^\dag\hat{a}_2^\dag-e^{-i\chi}\hat{a}_1\hat{a}_2))$, \begin{flalign}\label{eq:twomode_squeezing_operator} &S_T(r,\chi)=\nonumber\\ &\begin{bmatrix} \cosh r & 0 & 0 & -e^{i\chi}\sinh r \\ 0 & \cosh r & -e^{i\chi}\sinh r & 0 \\ 0 & -e^{-i\chi}\sinh r & \cosh r & 0 \\ -e^{-i\chi}\sinh r & 0 & 0 & \cosh r \end{bmatrix},&&\nonumber\\ &S_{TR}(r,\chi)=\nonumber\\ &\begin{bmatrix} \cosh r & -\cos \chi\sinh r & 0 & -\sin \chi\sinh r \\ -\cos \chi\sinh r & \cosh r & -\sin \chi\sinh r & 0 \\ 0 & -\sin \chi\sinh r & \cosh r & \cos \chi\sinh r \\ -\sin \chi\sinh r & 0 & \cos \chi\sinh r & \cosh r \end{bmatrix}.&& \end{flalign} Sometimes in the literature, yet another notation is encountered, for example it is common to see the real form displacement vectors and covariance 
matrices generated by the ordering `$xpxp$' given by $\boldsymbol{\hat{Q}}:=(\hat{x}_1,\hat{p}_1,\hat{x}_2,\hat{p}_2)^T$, instead of the `$xxpp$' vector $\boldsymbol{\hat{Q}}:=(\hat{x}_1,\hat{x}_2,\hat{p}_1,\hat{p}_2)^T$ used here. It is easy to transform into the `$xpxp$' form by simply reordering the rows and columns,
\begin{equation}
\begin{split}
S_R&=\begin{bmatrix}
S_{x_1x_1} & S_{x_1x_2} & S_{x_1p_1} & S_{x_1p_2} \\
S_{x_2x_1} & S_{x_2x_2} & S_{x_2p_1} & S_{x_2p_2} \\
S_{p_1x_1} & S_{p_1x_2} & S_{p_1p_1} & S_{p_1p_2} \\
S_{p_2x_1} & S_{p_2x_2} & S_{p_2p_1} & S_{p_2p_2}
\end{bmatrix}\\
&\longrightarrow\\
S_{R,xpxp}&=\begin{bmatrix}
S_{x_1x_1} & S_{x_1p_1} & S_{x_1x_2} & S_{x_1p_2} \\
S_{p_1x_1} & S_{p_1p_1} & S_{p_1x_2} & S_{p_1p_2} \\
S_{x_2x_1} & S_{x_2p_1} & S_{x_2x_2} & S_{x_2p_2} \\
S_{p_2x_1} & S_{p_2p_1} & S_{p_2x_2} & S_{p_2p_2}
\end{bmatrix},
\end{split}
\end{equation}
which corresponds to a transformation with a permutation matrix,
\begin{equation}
S_{R,xpxp}=PS_{R}P^T,\ P=\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}.
\end{equation}
Now we introduce the most common Gaussian states. The characteristics of all other Gaussian states can be seen as combinations of the characteristics of these basic ones; in that sense the following list is complete. The \emph{thermal state} is the simplest Gaussian state. Assuming the single-particle Hilbert space is spanned by $N$ states (modes), each mode is characterized by the energy $E_i$ of the state $\ket{\psi_i}$. We assume that each mode is thermally populated, i.e., the number of particles in each mode is given by the thermal distribution, $\hat{\rho}_{{\mathrm{th}}i}=\frac{1}{Z}\mathrm{exp}(-\frac{E_i}{kT}\hat{n}_i)$, where $\hat{n}_i=\hat{a}_i^\dag\hat{a}_i$ denotes the number operator associated with mode $i$, $k$ is the Boltzmann constant, and $Z=\mathrm{tr}[e^{-\frac{E_i}{kT}\hat{n}_i}]$ defines the partition function.
The full thermal state is then a tensor product of the thermal states of each mode, $\hat{\rho}_{\mathrm{th}}=\hat{\rho}_{\mathrm{th}1}\otimes\cdots\otimes\hat{\rho}_{\mathrm{th}N}$. The displacement vector of the thermal state is equal to zero, and the covariance matrix in both the complex and the real form is a diagonal matrix,
\begin{align}
\boldsymbol{d}&=\boldsymbol{0}, \quad \sigma=\mathrm{diag}(\lambda_1,\dots,\lambda_N,\lambda_1,\dots,\lambda_N),\\
\boldsymbol{d}_R&=\boldsymbol{0}, \quad \sigma_{R}=\mathrm{diag}(\lambda_1,\dots,\lambda_N,\lambda_1,\dots,\lambda_N),
\end{align}
where $\lambda_i=\coth(\frac{E_i}{2kT})$ are the symplectic eigenvalues. They can also be expressed in terms of the mean number of thermal bosons, $\lambda_i=1+2n_{{\mathrm{th}}i}$, where $n_{{\mathrm{th}}i}:=\mathrm{tr}[\hat{n}_i\hat{\rho}_{\mathrm{th}}]$. Larger temperatures and smaller energies correspond to larger symplectic eigenvalues. For each $i$, $\lambda_i\geq1$, and $\lambda_i=1$ for $T=0$. The thermal state corresponding to $T=0$ is the lowest-energy state, called the \emph{vacuum state}, and is described by the identity matrix $\sigma=I$. A \emph{coherent state} is a Gaussian state which is characterized only by its displacement vector,
\begin{equation}
\ket{\alpha}=e^{-\frac{\abs{\alpha}^2}{2}}\sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}}\ket{n}.
\end{equation}
A coherent state is an eigenvector of the annihilation operator, $\hat{a}\ket{\alpha}=\alpha\ket{\alpha}$. Coherent states typically describe beams of light emitted by a laser~\cite{zhang1990coherent}. Mathematically, a coherent state can be created by the action of the Weyl displacement operator on the vacuum (thus an equivalent name would be a single-mode displaced vacuum), $\ket{\alpha}=\hat{D}(\alpha)\ket{0}$. The first and the second moments are
\begin{align}
\boldsymbol{d}&=(\alpha,\ov{\alpha})^T, \quad \sigma=I,\\
\boldsymbol{d}_R&=\sqrt{2}(\mathrm{Re}[\alpha],\mathrm{Im}[\alpha])^T, \quad \sigma_R=I.
\end{align}
A \emph{(single-mode) squeezed state} is created by the action of the squeezing operator~\eqref{eq:squeezing_operator} on the vacuum, $\ket{S(r,\chi)}=\hat{S}(r,\chi)\ket{0}$. For $\chi=0$ this state takes the form~\cite{kok2010introduction}
\[
\ket{S(r)}=\frac{1}{\sqrt{\cosh\abs{r}}}\sum_{n=0}^\infty\frac{\sqrt{(2n)!}}{n!}\left(\frac{-r}{2\abs{r}}\right)^n\tanh^n\abs{r}\ket{2n}.
\end{equation}
Such states can be created, for example, from laser light by passing it through an optical parametric oscillator~\cite{breitenbach1997measurement,Lvovsky2014squeezed}. The first and the second moments are
\begin{align}
\boldsymbol{d}&=\boldsymbol{0}, \quad \sigma=S(r,\chi)S^\dag(r,\chi)=S(2r,\chi),\\
\boldsymbol{d}_R&=\boldsymbol{0}, \quad \sigma_R=S_R(r,\chi)S_R^T(r,\chi)=S_R(2r,\chi).
\end{align}
A \emph{two-mode squeezed state} is an entangled two-mode state created by the action of the two-mode squeezing operator~\eqref{eq:twomode_squeezing_operator} on the vacuum, $\ket{S_T(r,\chi)}=\hat{S}_T(r,\chi)\ket{0}$. For $\chi=0$ this state takes the form~\cite{kok2010introduction}
\[
\ket{S_T(r)}=\frac{1}{\cosh\abs{r}}\sum_{n=0}^\infty\left(\frac{-r}{\abs{r}}\right)^n\tanh^n\abs{r}\ket{n,n}.
\end{equation}
Physically, two-mode squeezed states are prepared by sending a squeezed and an anti-squeezed state (squeezed with the negative squeezing parameter) through a beam-splitter (the mode-mixing operator \eqref{eq:mode_mixing_operator} with $\chi=0$). The first and the second moments are
\begin{align}
\boldsymbol{d}&=\boldsymbol{0}, \quad \sigma=S_T(r,\chi)S_T^\dag(r,\chi)=S_T(2r,\chi),\\
\boldsymbol{d}_R&=\boldsymbol{0}, \quad \sigma_R=S_{TR}(r,\chi)S_{TR}^T(r,\chi)=S_{TR}(2r,\chi).
\end{align}
\section{Saturability of the Cram\'er-Rao bound}\label{app:saturability}
Here we derive an expression for $\mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j]]$ in the phase-space formalism, which gives the condition for the saturability of the quantum Cram\'er-Rao bound, Eq.~\eqref{eq:condition_for_CRbound}.
Inserting Eq.~\eqref{eq:L}, we derive
\[\label{eq:condition_wide}
\begin{split}
\mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j]]&=\mathrm{tr}[\hat{\rho}[{\Delta\boldsymbol{\hat{A}}}^\dag{\mathscr{A}}_i{\Delta\boldsymbol{\hat{A}}},{\Delta\boldsymbol{\hat{A}}}^\dag{\mathscr{A}}_j{\Delta\boldsymbol{\hat{A}}}]]\\
&+\mathrm{tr}[\hat{\rho}[{\Delta\boldsymbol{\hat{A}}}^\dag{\mathscr{A}}_i{\Delta\boldsymbol{\hat{A}}},2{\Delta\boldsymbol{\hat{A}}}^\dag\sigma^{-1}\partial_j{\boldsymbol{d}}]]\\
&+\mathrm{tr}[\hat{\rho}[2{\Delta\boldsymbol{\hat{A}}}^\dag\sigma^{-1}\partial_i{\boldsymbol{d}},{\Delta\boldsymbol{\hat{A}}}^\dag{\mathscr{A}}_j{\Delta\boldsymbol{\hat{A}}}]]\\
&+\mathrm{tr}[\hat{\rho}[2{\Delta\boldsymbol{\hat{A}}}^\dag\sigma^{-1}\partial_i{\boldsymbol{d}},2{\Delta\boldsymbol{\hat{A}}}^\dag\sigma^{-1}\partial_j{\boldsymbol{d}}]]\\
&\equiv a_1+a_2+a_3+a_4.
\end{split}
\end{equation}
Terms that contain $\frac{1}{2}\mathrm{tr}[\sigma{\mathscr{A}}_i]$ vanish because, being numbers, they commute with every operator. In the following we use Einstein's summation convention.
We will use the commutation relations,
\begin{subequations}
\begin{align}
[{\Delta\boldsymbol{\hat{A}}}^m,{\Delta\boldsymbol{\hat{A}}}^{\dag n}]&=K^{mn}\mathrm{id},\\
[{\Delta\boldsymbol{\hat{A}}}^{m},{\Delta\boldsymbol{\hat{A}}}^{n}]&=(KT)^{mn}\mathrm{id},\\
[{\Delta\boldsymbol{\hat{A}}}^{\dag m},{\Delta\boldsymbol{\hat{A}}}^{\dag n}]&=(TK)^{mn}\mathrm{id}=-(KT)^{mn}\mathrm{id},
\end{align}
\end{subequations}
and the identities,
\begin{subequations}
\begin{align}
{\Delta\boldsymbol{\hat{A}}}^{\dag m}{\Delta\boldsymbol{\hat{A}}}^{n}&=\tfrac{1}{2}(\{{\Delta\boldsymbol{\hat{A}}}^{n},{\Delta\boldsymbol{\hat{A}}}^{\dag m}\}-K^{nm}\mathrm{id}),\\
{\Delta\boldsymbol{\hat{A}}}^{m}{\Delta\boldsymbol{\hat{A}}}^{n}&=\tfrac{1}{2}(\{{\Delta\boldsymbol{\hat{A}}}^{n},{\Delta\boldsymbol{\hat{A}}}^{\dag k}\}-K^{nk}\mathrm{id})T^{km},\\
{\Delta\boldsymbol{\hat{A}}}^{\dag m}{\Delta\boldsymbol{\hat{A}}}^{\dag n}&=\tfrac{1}{2}T^{nk}(\{{\Delta\boldsymbol{\hat{A}}}^{k},{\Delta\boldsymbol{\hat{A}}}^{\dag m}\}-K^{km}\mathrm{id}),
\end{align}
\end{subequations}
from which it follows that
\begin{subequations}
\begin{align}
\mathrm{tr}[\hat{\rho}{\Delta\boldsymbol{\hat{A}}}^{\dag m}{\Delta\boldsymbol{\hat{A}}}^{n}]&=\tfrac{1}{2}(\sigma-K)^{nm},\\
\mathrm{tr}[\hat{\rho}{\Delta\boldsymbol{\hat{A}}}^{m}{\Delta\boldsymbol{\hat{A}}}^{n}]&=\tfrac{1}{2}(\sigma-K)^{nk}T^{km},\\
\mathrm{tr}[\hat{\rho}{\Delta\boldsymbol{\hat{A}}}^{\dag m}{\Delta\boldsymbol{\hat{A}}}^{\dag n}]&=\tfrac{1}{2}T^{nk}(\sigma-K)^{km}.
\end{align}
\end{subequations}
We have used ${\Delta\boldsymbol{\hat{A}}}^\dag=T{\Delta\boldsymbol{\hat{A}}}$, where the matrix $T$ is defined as
\[\label{def:T}
T=
\begin{bmatrix}
0 & I \\
I & 0
\end{bmatrix}.
\end{equation}
Further, we use the properties of the commutator,
\begin{subequations}
\begin{align}
[AB,C]&=A[B,C]+[A,C]B,\\
[A,BC]&=B[A,C]+[A,B]C.
\end{align}
\end{subequations}
We look separately at each term $a_1,a_2,a_3,a_4$.
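These relations are plain matrix algebra and can be checked numerically. The following Python sketch (the number of modes and the random seed are chosen arbitrarily) verifies that $TK=-KT$ and that $TX^TT=X$ for a Hermitian matrix $X$ with the block structure of Eq.~\eqref{eq:structure_of_A}, a property invoked repeatedly in the derivations below:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
I = np.eye(N)
K = np.block([[I, 0 * I], [0 * I, -I]])   # K = diag(I, -I)
T = np.block([[0 * I, I], [I, 0 * I]])    # T swaps the two mode blocks

# TK = -KT, i.e., KT is antisymmetric
assert np.allclose(T @ K, -(K @ T))

# Hermitian matrix with block structure [[P, Q], [conj(Q), conj(P)]]:
# Hermiticity forces P Hermitian and Q symmetric.
P = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
P = (P + P.conj().T) / 2                  # Hermitian block
Q = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Q = (Q + Q.T) / 2                         # symmetric block
X = np.block([[P, Q], [Q.conj(), P.conj()]])

assert np.allclose(X, X.conj().T)         # X is Hermitian
assert np.allclose(T @ X.T @ T, X)        # T X^T T = X
print("structure identities hold")
```

The same check applies verbatim to $\sigma^{-1}$, which has this structure as well.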
\begin{widetext}
\[
\begin{split}
a_1&={\mathscr{A}}_i^{kl}{\mathscr{A}}_j^{mn}\mathrm{tr}[\hat{\rho}[{\Delta\boldsymbol{\hat{A}}}^{k\dag}{\Delta\boldsymbol{\hat{A}}}^l,{\Delta\boldsymbol{\hat{A}}}^{m\dag}{\Delta\boldsymbol{\hat{A}}}^n]]\\
&={\mathscr{A}}_i^{kl}{\mathscr{A}}_j^{mn}\mathrm{tr}[\hat{\rho}({\Delta\boldsymbol{\hat{A}}}^{k\dag}{\Delta\boldsymbol{\hat{A}}}^{m\dag}[{\Delta\boldsymbol{\hat{A}}}^l,{\Delta\boldsymbol{\hat{A}}}^n]+{\Delta\boldsymbol{\hat{A}}}^{k\dag}[{\Delta\boldsymbol{\hat{A}}}^l,{\Delta\boldsymbol{\hat{A}}}^{m\dag}]{\Delta\boldsymbol{\hat{A}}}^n\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +{\Delta\boldsymbol{\hat{A}}}^{m\dag}[{\Delta\boldsymbol{\hat{A}}}^{k\dag},{\Delta\boldsymbol{\hat{A}}}^n]{\Delta\boldsymbol{\hat{A}}}^l+[{\Delta\boldsymbol{\hat{A}}}^{k\dag},{\Delta\boldsymbol{\hat{A}}}^{m\dag}]{\Delta\boldsymbol{\hat{A}}}^n{\Delta\boldsymbol{\hat{A}}}^l)]\\
&={\mathscr{A}}_i^{kl}{\mathscr{A}}_j^{mn}\mathrm{tr}[\hat{\rho}({\Delta\boldsymbol{\hat{A}}}^{k\dag}{\Delta\boldsymbol{\hat{A}}}^{m\dag}(KT)^{ln}+{\Delta\boldsymbol{\hat{A}}}^{k\dag}K^{lm}{\Delta\boldsymbol{\hat{A}}}^n-{\Delta\boldsymbol{\hat{A}}}^{m\dag}K^{kn}{\Delta\boldsymbol{\hat{A}}}^l-(KT)^{mk}{\Delta\boldsymbol{\hat{A}}}^n{\Delta\boldsymbol{\hat{A}}}^l)]\\
&=\tfrac{1}{2}{\mathscr{A}}_i^{kl}{\mathscr{A}}_j^{mn}(T^{ms}(\sigma-K)^{sk}(KT)^{ln}+K^{lm}(\sigma-K)^{nk}-K^{kn}(\sigma-K)^{lm}-(KT)^{km}(\sigma-K)^{ls}T^{sn})\\
&=\tfrac{1}{2}\big(T^{ms}(\sigma-K)^{sk}{\mathscr{A}}_i^{kl}(KT)^{ln}({\mathscr{A}}_j^T)^{nm}+{\mathscr{A}}_i^{kl}K^{lm}{\mathscr{A}}_j^{mn}(\sigma-K)^{nk}\\
&\ \ \ -K^{nk}{\mathscr{A}}_i^{kl}(\sigma-K)^{lm}{\mathscr{A}}_j^{mn}-(TK)^{mk}{\mathscr{A}}_i^{kl}(\sigma-K)^{ls}T^{sn}({\mathscr{A}}_j^T)^{mn}\big)\\
&=\tfrac{1}{2}\mathrm{tr}[(K\sigma-I)({\mathscr{A}}_iK(T{\mathscr{A}}_j^TT)K+{\mathscr{A}}_iK{\mathscr{A}}_jK-{\mathscr{A}}_jK{\mathscr{A}}_iK-(T{\mathscr{A}}_j^TT)K{\mathscr{A}}_iK)]\\
&=\mathrm{tr}[(K\sigma-I)[{\mathscr{A}}_iK,{\mathscr{A}}_jK]]\\
&=\mathrm{tr}[K\sigma[{\mathscr{A}}_iK,{\mathscr{A}}_jK]],
\end{split}
\end{equation}
where we have used $T{\mathscr{A}}_j^TT={\mathscr{A}}_j$, which holds for any matrix of the form of Eq.~\eqref{eq:structure_of_A}, and the fact that the trace of a commutator is always zero. Using the properties of vectorization, Eq.~\eqref{id:Kronecker_product_ids}, the fact that ${\mathscr{A}}_i$ and $\mathfrak{M}$ are Hermitian, and $\vectorization{{\mathscr{A}}_i}=\mathfrak{M}^{-1}\vectorization{\partial_i\sigma}$, we further derive
\[\label{eq:first_part}
\begin{split}
a_1&=\mathrm{tr}[{\mathscr{A}}_i^\dag K{\mathscr{A}}_j\sigma-{\mathscr{A}}_i^\dag\sigma{\mathscr{A}}_jK]\\
&=\vectorization{{\mathscr{A}}_i}^\dag(\ov{\sigma}\!\otimes\! K\!-\!K\!\otimes\!\sigma)\vectorization{{\mathscr{A}}_j}\\
&=\vectorization{\partial_i\sigma}^\dag\mathfrak{M}^{-1}(\ov{\sigma}\!\otimes\! K\!-\!K\!\otimes\!\sigma)\mathfrak{M}^{-1}\vectorization{\partial_j\sigma}.
\end{split}
\end{equation}
Terms $a_2$ and $a_3$ in Eq.~\eqref{eq:condition_wide} are identically zero: each commutator of a quadratic and a linear expression reduces, via the commutation relations, to an expression linear in ${\Delta\boldsymbol{\hat{A}}}$, and the mean value of ${\Delta\boldsymbol{\hat{A}}}$ vanishes.
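The step from the trace form of $a_1$ to its vectorized form can be verified numerically with the standard column-stacking vectorization, for which $\mathrm{tr}[Y^\dag M]=\vectorization{Y}^\dag\vectorization{M}$ and $\vectorization{BXC}=(C^T\!\otimes\!B)\vectorization{X}$. A Python sketch (dimensions and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2
vec = lambda M: M.flatten(order="F")     # column-stacking vectorization

I = np.eye(N)
K = np.block([[I, 0 * I], [0 * I, -I]])
A_i = rng.normal(size=(2 * N, 2 * N)) + 1j * rng.normal(size=(2 * N, 2 * N))
A_j = rng.normal(size=(2 * N, 2 * N)) + 1j * rng.normal(size=(2 * N, 2 * N))
sigma = rng.normal(size=(2 * N, 2 * N)) + 1j * rng.normal(size=(2 * N, 2 * N))
sigma = sigma + sigma.conj().T           # Hermitian, so sigma^T = conj(sigma)

# trace form of a_1 ...
lhs = np.trace(A_i.conj().T @ K @ A_j @ sigma - A_i.conj().T @ sigma @ A_j @ K)
# ... equals the vectorized quadratic form
M = np.kron(sigma.conj(), K) - np.kron(K, sigma)
rhs = vec(A_i).conj() @ M @ vec(A_j)
print(np.isclose(lhs, rhs))              # True
```

The identity holds for arbitrary matrices ${\mathscr{A}}_i,{\mathscr{A}}_j$ as long as $\sigma$ is Hermitian, so the random matrices above need no further structure.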
Finally, we have \[\label{eq:second_part} \begin{split} a_4&=4(\sigma^{-1})^{kl}(\sigma^{-1})^{mn}\partial_j{\boldsymbol{d}}^n\partial_i{\boldsymbol{d}}^l\mathrm{tr}[\hat{\rho}[{\Delta\boldsymbol{\hat{A}}}^{k\dag},{\Delta\boldsymbol{\hat{A}}}^{m\dag}]]\\ &=4(\sigma^{-1})^{kl}(\sigma^{-1})^{mn}\partial_j{\boldsymbol{d}}^nT^{ls}\ov{\partial_i{\boldsymbol{d}}^s}\mathrm{tr}[\hat{\rho}(TK)^{km}]\\ &=4\ov{\partial_i{\boldsymbol{d}}^s}T^{sl}((\sigma^{-1})^T)^{lk}(TK)^{km}(\sigma^{-1})^{mn}\partial_j{\boldsymbol{d}}^n\\ &=4\partial_i{\boldsymbol{d}}^\dag T((\sigma^{-1})^T)TK\sigma^{-1}\partial_j{\boldsymbol{d}}\\ &=4\partial_i{\boldsymbol{d}}^\dag \sigma^{-1}K\sigma^{-1}\partial_j{\boldsymbol{d}}, \end{split} \end{equation} where we have used $T((\sigma^{-1})^T)T=\sigma^{-1}$, because $\sigma^{-1}$ has the same structure as Eq.~\eqref{eq:structure_of_A}. (This can be for example seen from the Williamson's decomposition $\sigma^{-1}=KSD^{-1}S^\dag K$, where each of the matrices $K,S,D^{-1},...$ has this structure, therefore also $\sigma^{-1}$ has that structure.) Combining Eq.~\eqref{eq:condition_wide}, \eqref{eq:first_part}, and \eqref{eq:second_part}, we derive \[ \mathrm{tr}[\hat{\rho}[\hat{\mathscr{L}}_i,\hat{\mathscr{L}}_j]]=\vectorization{\partial_i\sigma}^\dag\mathfrak{M}^{-1}(\ov{\sigma}\!\otimes\! K\!-\!K\!\otimes\!\sigma)\mathfrak{M}^{-1}\vectorization{\partial_j\sigma}+4\partial_i{\boldsymbol{d}}^\dag \sigma^{-1}K\sigma^{-1}\partial_j{\boldsymbol{d}}. \end{equation} \end{widetext} \section{Finding the Williamson's decomposition of the covariance matrix}\label{app:diagonalization} Here we show how to find the symplectic matrices that diagonalize the covariance matrix, $\sigma=SDS^\dag$, where $D=\mathrm{diag}(\lambda_1,\dots,\lambda_N,\lambda_1,\dots,\lambda_N)$ is a diagonal matrix consisting of \emph{symplectic eigenvalues}, which are defined as the positive eigenvalues of the matrix $K\sigma$. 
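As a numerical illustration of this definition, the following Python sketch builds a single-mode squeezed thermal state in the complex form, $\sigma=S\,\mathrm{diag}(\lambda,\lambda)\,S^\dag$ (the squeezing parameter and the symplectic eigenvalue are chosen arbitrarily, and the matrix $S$ below is the $\chi=0$ single-mode squeezer), and checks that the positive eigenvalue of $K\sigma$ is exactly $\lambda$:

```python
import numpy as np

r, lam = 0.7, 1.5                       # squeezing and symplectic eigenvalue (arbitrary)
c, s = np.cosh(r), np.sinh(r)
S = np.array([[c, -s], [-s, c]])        # single-mode squeezer in complex form, chi = 0
K = np.diag([1.0, -1.0])

assert np.allclose(S @ K @ S.conj().T, K)   # S is symplectic: S K S^dag = K

D = lam * np.eye(2)
sigma = S @ D @ S.conj().T              # squeezed thermal covariance matrix

eigs = np.linalg.eigvals(K @ sigma)
print(np.sort(eigs.real))               # eigenvalues come in a pair, approx [-1.5, 1.5]
```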
We note that the eigenvalues of $K\sigma$ always come in pairs~\cite{deGosson2006a,Adesso2014a}, i.e., when $\lambda_i$ is an eigenvalue of $K\sigma$, then $-\lambda_i$ is also an eigenvalue of $K\sigma$. As a result, $KD=\mathrm{diag}(\lambda_1,\dots,\lambda_N,-\lambda_1,\dots,-\lambda_N)$ is a diagonal matrix that consists of the entire spectrum of the matrix $K\sigma$.
Essentially, we will redo part of the proof of Williamson's theorem (see, e.g., Ref.~\cite{simon1999congruences}), but in the complex form of the covariance matrix, where the symplectic matrices are defined by the complex form of the real symplectic group, Eq.~\eqref{def:structure_of_S}. The reader is welcome to skip the derivation and go directly to the summarized result, Theorem~\ref{thm:Williamson}.
We would like to find symplectic matrices such that
\[\label{eq:decomposition_sigma}
\sigma=SDS^\dag.
\end{equation}
It can be easily checked that
\[\label{eq:tildeS}
S=\sigma^{\frac{1}{2}}UD^{-\frac{1}{2}},
\end{equation}
where $U$ is any unitary matrix, solves Eq.~\eqref{eq:decomposition_sigma}. $\sigma^{\frac{1}{2}}$ always exists, because $\sigma$ is a positive-definite matrix. For a generic $U$, this solution may not be a symplectic matrix, but we will manage to find a $U$ such that $S$ is indeed symplectic. To do that, we plug this solution into the defining relation for a symplectic matrix, Eq.~\eqref{def:structure_of_S}, and obtain
\[
\sigma^{\frac{1}{2}}UD^{-\frac{1}{2}}KD^{-\frac{1}{2}}U^\dag\sigma^{\frac{1}{2}}=K.
\end{equation}
We invert both sides and rewrite it as
\[\label{eq:KD}
KD=U^\dag\sigma^{\frac{1}{2}}K\sigma^{\frac{1}{2}}U.
\end{equation}
On the left-hand side we have a diagonal matrix consisting of the eigenvalues of $K\sigma$. According to this equation, the unitary matrix $U$ should be the matrix that diagonalizes $\sigma^{\frac{1}{2}}K\sigma^{\frac{1}{2}}$. But does such a $U$ exist?
Such a $U$ exists if and only if the eigenvalues of $K\sigma$ and $\sigma^{\frac{1}{2}}K\sigma^{\frac{1}{2}}$ are identical. However, as we can easily see from the characteristic polynomial,
\[
\begin{split}
|\sigma^{\frac{1}{2}}K\sigma^{\frac{1}{2}}-\lambda|&=|\sigma^{\frac{1}{2}}||K\sigma^{\frac{1}{2}}-\lambda\sigma^{-\frac{1}{2}}|\\
&=|K\sigma^{\frac{1}{2}}-\lambda\sigma^{-\frac{1}{2}}||\sigma^{\frac{1}{2}}|=|K\sigma-\lambda|,
\end{split}
\end{equation}
this is indeed the case. Therefore, we have found a choice of $U$ that ensures that $S$ is symplectic. We can summarize our findings in the following theorem:
\begin{theorem}\label{thm:Williamson}(Williamson)
Any covariance matrix can be decomposed using symplectic matrices as $\sigma=SDS^\dag$. The symplectic matrix $S$ is calculated as
\[
S=\sigma^{\frac{1}{2}}UD^{-\frac{1}{2}},
\end{equation}
where the unitary matrix $U$ consists of the eigenvectors of $\sigma^{\frac{1}{2}}K\sigma^{\frac{1}{2}}$, i.e., it solves
\[
U^\dag\sigma^{\frac{1}{2}}K\sigma^{\frac{1}{2}}U=KD,
\end{equation}
and $D=\mathrm{diag}(\lambda_1,\dots,\lambda_N,\lambda_1,\dots,\lambda_N)$, where $\lambda_i$ are the positive eigenvalues of the matrix $\sigma^{\frac{1}{2}}K\sigma^{\frac{1}{2}}$ (or, equivalently, of $K\sigma$).
\end{theorem}
\section{Derivation of formulas for the case when the Williamson decomposition is known}\label{app:williamson_QFI}
Here we use Eq.~\eqref{eq:mixed_QFI} to derive Eq.~\eqref{eq:multimode_QFI}. Using the Williamson decomposition $\sigma=SDS^\dag$ and the identities \eqref{def:structure_of_S} and \eqref{id:Kronecker_product_ids}, we derive
\[
\begin{split}
\mathfrak{M}^{-1}&=(\!(S^{-1}\!)^T\!\otimes\! (\!KSK\!)\!)(\!D\!\otimes\! D\!-\!K\!\otimes\! K)^{-1}(\!(\!KSK\!)^T\!\otimes\! S^{-1}\!),\\
\partial_i\sigma&=\partial_iS DKS^{-1}K+S\partial_iD KS^{-1}K-SDKS^{-1}\partial_iSS^{-1}K.
\end{split}
\end{equation}
The first part of Eq.~\eqref{eq:mixed_QFI} then reads
\begin{multline}\label{eq:P1_first_part}
\frac{1}{2}\vectorization{\partial_i\sigma}^\dag\mathfrak{M}^{-1}\vectorization{\partial_j\sigma}=\\
\vectorization{\!P_{i}D\!-\!DKP_{i}K\!+\!\partial_iD\!}^\dag\mathfrak{M}_{diag}^{-1}\vectorization{\!P_{j}D\!-\!DKP_{j}K\!+\!\partial_jD\!},
\end{multline}
where $\mathfrak{M}_{diag}=D\!\otimes\! D\!-\!K\!\otimes \!K$. We define the elements of a matrix $X$ in the basis $\ket{\mu}\ket{\nu}$ as
\[
X=\sum_{\mu,\nu,\tilde{\mu},\tilde{\nu}}X^{\tilde{\mu}\tilde{\nu},\mu\nu}\ket{\tilde{\mu}}\ket{\tilde{\nu}}\bra{\mu}\bra{\nu},
\end{equation}
and applying the definition of vectorization, Eq.~\eqref{eq:vectorizationdef}, to the matrices $Y=\sum_{\alpha,\beta}Y^{\alpha\beta}\ket{\alpha}\bra{\beta}$, $Z=\sum_{\alpha,\beta}Z^{\alpha\beta}\ket{\alpha}\bra{\beta}$, we derive
\begin{subequations}
\begin{align}
\vectorization{Z}&=\sum_{\alpha,\beta}Z^{\alpha\beta}\ket{\beta}\ket{\alpha},\\
\vectorization{Y}^\dag&=\sum_{\alpha,\beta}\ov{Y}^{\alpha\beta}\bra{\beta}\bra{\alpha}.
\end{align}
\end{subequations}
Then we have
\[\label{eq:vecyXz}
\begin{split}
&\vectorization{Y}^\dag X \vectorization{Z}\\
&=\!\!\!\sum_{\alpha,\beta,\tilde{\alpha},\tilde{\beta},\mu,\nu,\tilde{\mu},\tilde{\nu}}\!\!\!\ov{Y}^{\tilde{\alpha}\tilde{\beta}}\bra{\tilde{\beta}}\bra{\tilde{\alpha}} \bigg(X^{\tilde{\mu}\tilde{\nu},\mu\nu}\ket{\tilde{\mu}}\ket{\tilde{\nu}}\bra{\mu}\bra{\nu}\bigg) Z^{\alpha\beta}\ket{\beta}\ket{\alpha}\\
&=\!\!\!\sum_{\alpha,\beta,\tilde{\alpha},\tilde{\beta},\mu,\nu,\tilde{\mu},\tilde{\nu}}\!\!\!\ov{Y}^{\tilde{\alpha}\tilde{\beta}}X^{\tilde{\mu}\tilde{\nu},\mu\nu} Z^{\alpha\beta} \delta^{\tilde{\beta}\tilde{\mu}}\delta^{\tilde{\alpha}\tilde{\nu}}\delta^{{\beta}{\mu}}\delta^{{\alpha}{\nu}}\\
&=\!\!\!\sum_{\mu,\nu,\tilde{\mu},\tilde{\nu}}\!\!\!\ov{Y}^{\tilde{\nu}\tilde{\mu}}X^{\tilde{\mu}\tilde{\nu},\mu\nu} Z^{\nu\mu}.
\end{split}
\end{equation}
Similarly, defining the matrix $W$ such that
\[
\vectorization{W}=X \vectorization{Z}=\sum_{\mu,\nu,\tilde{\mu},\tilde{\nu}}X^{\tilde{\mu}\tilde{\nu},\mu\nu} Z^{\nu\mu}\ket{\tilde{\mu}}\ket{\tilde{\nu}},
\end{equation}
we find that this matrix has elements
\[\label{eq:vecW}
W^{\tilde{\nu}\tilde{\mu}}=\sum_{\mu,\nu}X^{\tilde{\mu}\tilde{\nu},\mu\nu} Z^{\nu\mu}.
\end{equation} In this formalism, since \[ \begin{split} \mathfrak{M}_{diag}&=(\sum_{\tilde{\mu},\mu}D^{\tilde{\mu}\mu}\ket{\tilde{\mu}}\bra{\mu})\otimes(\sum_{\tilde{\nu},\nu}D^{\tilde{\nu}\nu}\ket{\tilde{\nu}}\bra{\nu})\\ &-(\sum_{\tilde{\mu},\mu}K^{\tilde{\mu}\mu}\ket{\tilde{\mu}}\bra{\mu})\otimes(\sum_{\tilde{\nu},\nu}K^{\tilde{\nu}\nu}\ket{\tilde{\nu}}\bra{\nu})\\ &=\sum_{\mu,\nu,\tilde{\mu},\tilde{\nu}}(D^{\tilde{\mu}\mu}D^{\tilde{\nu}\nu}-K^{\tilde{\mu}\mu}K^{\tilde{\nu}\nu})\ket{\tilde{\mu}}\ket{\tilde{\nu}}\bra{\mu}\bra{\nu}, \end{split} \end{equation} and $D^{\tilde{\mu}\mu}=\delta^{\tilde{\mu}\mu} D^{\mu\mu}$, $K^{\tilde{\mu}\mu}=\delta^{\tilde{\mu}\mu} K^{\mu\mu}$, we find \[ (\mathfrak{M}_{diag}^{-1})^{\tilde{\mu}\tilde{\nu},\mu\nu}=\frac{\delta^{\tilde{\mu}\mu}\delta^{\tilde{\nu}\nu}}{D^{\mu\mu}D^{\nu\nu}-K^{\mu\mu}K^{\nu\nu}}. \end{equation} Changing to element-wise notation and using Einstein's summation convention ($\mu,\nu\in\{1,\dots,2N\}$, $k,l\in\{1,\dots,N\}$) we expand Eq.~\eqref{eq:P1_first_part} using Eq.~\eqref{eq:vecyXz}, \begin{widetext} \[ \begin{split} &\frac{1}{2}\vectorization{\partial_i\sigma}^\dag\mathfrak{M}^{-1}\vectorization{\partial_j\sigma}=\frac{1}{2}(\ov{P_{i}}D-DK\ov{P_{i}}K+\partial_iD)^{\tilde{\nu}\tilde{\mu}} \frac{\delta^{\tilde{\mu}\mu}\delta^{\tilde{\nu}\nu}}{D^{\mu\mu}D^{\nu\nu}-K^{\mu\mu}K^{\nu\nu}} (P_{j}D-DKP_{j}K+\partial_jD)^{\nu\mu}\\ &=\frac{1}{2} \frac{(\ov{P_{i}}^{\nu\mu}D^{\mu\mu}-D^{\nu\nu} K^{\nu\nu}\ov{P_{i}}^{\nu\mu}K^{\mu\mu}+\partial_iD^{\nu\nu}\delta^{\nu\mu})(P_{j}^{\nu\mu}D^{\mu\mu}-D^{\nu\nu} K^{\nu\nu} P_{j}^{\nu\mu}K^{\mu\mu}+\partial_jD^{\nu\nu}\delta^{\nu\mu})}{D^{\nu\nu}D^{\mu\mu}-K^{\nu\nu}K^{\mu\mu}}\\ &=\frac{1}{2}\bigg( \frac{1}{\lambda_k\lambda_l-1} (\ov{R_{i}}^{kl}\lambda_l-\lambda_k\ov{R_{i}}^{kl}+\partial_i\lambda_k\delta^{kl})(R_{j}^{kl}\lambda_l-\lambda_k R_{j}^{kl}+\partial_j\lambda_k\delta^{kl})\\ &+\frac{1}{\lambda_k\lambda_l-1} 
({R_{i}}^{kl}\lambda_l-\lambda_k{R_{i}}^{kl}+\partial_i\lambda_k\delta^{kl})(\ov{R_{j}}^{kl}\lambda_l-\lambda_k \ov{R_{j}}^{kl}+\partial_j\lambda_k\delta^{kl})\\ &+\frac{1}{\lambda_k\lambda_l+1}(\ov{Q_{i}}^{kl}\lambda_l+\lambda_k \ov{Q_{i}}^{kl})(Q_{j}^{kl}\lambda_l+\lambda_k Q_{j}^{kl}) +\frac{1}{\lambda_k\lambda_l+1}(Q_{i}^{kl}\lambda_l+\lambda_k Q_{i}^{kl})(\ov{Q_{j}}^{kl}\lambda_l+\lambda_k \ov{Q_{j}}^{kl})\bigg)\\ &=\frac{1}{2}\bigg( \frac{1}{\lambda_k\lambda_l-1} ((\lambda_l-\lambda_k)\ov{R_{i}}^{kl}+\partial_i\lambda_k\delta^{kl})((\lambda_l-\lambda_k)R_{j}^{kl}+\partial_j\lambda_k\delta^{kl})+\frac{1}{\lambda_k\lambda_l+1} (\lambda_l+\lambda_k)^2\ov{Q_{i}}^{kl}Q_{j}^{kl}\\ &+\frac{1}{\lambda_k\lambda_l-1} ((\lambda_l-\lambda_k){R_{i}}^{kl}+\partial_i\lambda_k\delta^{kl})((\lambda_l-\lambda_k)\ov{R_{j}}^{kl}+\partial_j\lambda_k\delta^{kl})+\frac{1}{\lambda_k\lambda_l+1} (\lambda_l+\lambda_k)^2Q_{i}^{kl}\ov{Q_{j}}^{kl}\bigg)\\ &=\frac{(\lambda_l-\lambda_k)^2}{\lambda_k\lambda_l-1}\mathrm{Re}[\ov{R_{i}}^{kl}R_{j}^{kl}]+\frac{(\lambda_l+\lambda_k)^2}{\lambda_k\lambda_l+1}\mathrm{Re}[\ov{Q_{i}}^{kl}Q_{j}^{kl}] +\frac{\partial_i\lambda_k\partial_j\lambda_k}{\lambda_k^2-1}, \end{split} \end{equation} which in combination with term $2\partial_i\boldsymbol{d}^\dag\sigma^{-1}\partial_j\boldsymbol{d}$ gives Eq.~\eqref{eq:multimode_QFI}. Similarly, we derive \[ \begin{split} &\vectorization{\partial_i\sigma}^\dag\mathfrak{M}^{-1}(\ov{\sigma}\!\otimes\! K\!-\!K\!\otimes\!\sigma)\mathfrak{M}^{-1}\vectorization{\partial_j\sigma}\\ &=\vectorization{\!P_{i}D\!-\!DKP_{i}K\!+\!\partial_iD\!}^\dag\mathfrak{M}_{diag}^{-1}(D\!\otimes\! 
K\!-\!K\!\otimes\!D)\mathfrak{M}_{diag}^{-1}\vectorization{\!P_{j}D\!-\!DKP_{j}K\!+\!\partial_jD\!},\\ &=(\ov{P_{i}}D-DK\ov{P_{i}}K+\partial_iD)^{\tilde{\nu}\tilde{\mu}} \frac{\delta^{\tilde{\mu}\mu}\delta^{\tilde{\nu}\nu}(D^{\mu\mu}K^{\nu\nu}-K^{\mu\mu}D^{\nu\nu})}{(D^{\mu\mu}D^{\nu\nu}-K^{\mu\mu}K^{\nu\nu})^2} (P_{j}D-DKP_{j}K+\partial_jD)^{\nu\mu}\\ &=-\frac{\lambda_k-\lambda_l}{(\lambda_k\lambda_l-1)^2} (\ov{R_{i}}^{kl}\lambda_l-\lambda_k\ov{R_{i}}^{kl}+\partial_i\lambda_k\delta^{kl})(R_{j}^{kl}\lambda_l-\lambda_k R_{j}^{kl}+\partial_j\lambda_k\delta^{kl})\\ &+\frac{\lambda_k-\lambda_l}{(\lambda_k\lambda_l-1)^2} ({R_{i}}^{kl}\lambda_l-\lambda_k{R_{i}}^{kl}+\partial_i\lambda_k\delta^{kl})(\ov{R_{j}}^{kl}\lambda_l-\lambda_k \ov{R_{j}}^{kl}+\partial_j\lambda_k\delta^{kl})\\ &+\frac{\lambda_k+\lambda_l}{(\lambda_k\lambda_l+1)^2}(\ov{Q_{i}}^{kl}\lambda_l+\lambda_k \ov{Q_{i}}^{kl})(Q_{j}^{kl}\lambda_l+\lambda_k Q_{j}^{kl}) -\frac{\lambda_k+\lambda_l}{(\lambda_k\lambda_l+1)^2}(Q_{i}^{kl}\lambda_l+\lambda_k Q_{i}^{kl})(\ov{Q_{j}}^{kl}\lambda_l+\lambda_k \ov{Q_{j}}^{kl})\\ &=-\frac{(\lambda_k-\lambda_l)^3}{(\lambda_k\lambda_l-1)^2}(\ov{R_{i}}^{kl}R_{j}^{kl}-\ov{\ov{R_{i}}^{kl}R_{j}^{kl}}) +\frac{(\lambda_k+\lambda_l)^3}{(\lambda_k\lambda_l+1)^2}(\ov{Q_{i}}^{kl}Q_{j}^{kl}-\ov{\ov{Q_{i}}^{kl}Q_{j}^{kl}})\\ &=-2i\frac{(\lambda_k-\lambda_l)^3}{(\lambda_k\lambda_l-1)^2}\mathrm{Im}[\ov{R_{i}}^{kl}R_{j}^{kl}]+2i\frac{(\lambda_k+\lambda_l)^3}{(\lambda_k\lambda_l+1)^2}\mathrm{Im}[\ov{Q_{i}}^{kl}Q_{j}^{kl}], \end{split} \end{equation} which in combination with term $4\partial_i\boldsymbol{d}^\dag\sigma^{-1}K\sigma^{-1}\partial_j\boldsymbol{d}$ gives Eq.~\eqref{eq:multimode_condition}. 
Finally, to obtain matrix elements of ${\mathscr{A}}_i$ for symmetric logarithmic derivative $\mathscr{L}_i$, we define $\vectorization{W_i}=\mathfrak{M}_{diag}^{-1}\vectorization{\!P_{i}D\!-\!DKP_{i}K\!+\!\partial_iD\!}$, and derive \[ \begin{split} \vectorization{{\mathscr{A}}_i}&=\mathfrak{M}^{-1}\vectorization{\partial_i\sigma}=((S^{-1})^T\otimes (S^{-1})^\dag)\mathfrak{M}_{diag}^{-1}\vectorization{\!P_{i}D\!-\!DKP_{i}K\!+\!\partial_iD\!}\\ &=((S^{-1})^T\otimes (S^{-1})^\dag)\vectorization{W_i}=\vectorization{(S^{-1})^\dag W_iS^{-1}}, \end{split} \end{equation} from which we have \[\label{eq:AvsS} {\mathscr{A}}_i=(S^{-1})^\dag W_iS^{-1}. \end{equation} Inserting this into the formula for the symmetric logarithmic derivative, Eq.~\eqref{eq:SLD_mixed}, we obtain \[\label{eq:SLD_mixed3} \hat{\mathscr{L}}_i({\boldsymbol{\epsilon}})={\Delta\boldsymbol{\hat{A}}}^\dag (S^{-1})^\dag W_i S^{-1}{\Delta\boldsymbol{\hat{A}}}-\frac{1}{2}\mathrm{tr}[DW_i]+2{\Delta\boldsymbol{\hat{A}}}^\dag\sigma^{-1}\partial_i{\boldsymbol{d}}. \end{equation} To obtain operator $W_i$ explicitly, using Eq.~\eqref{eq:vecW} we derive \[ \begin{split} &W_i^{\nu\mu}=\big(\mathfrak{M}_{diag}^{-1}\big)^{\mu\nu,\alpha\beta}(\!P_{i}D\!-\!DKP_{i}K\!+\!\partial_iD)^{\beta\alpha},\\ &=\frac{\delta^{\mu\alpha}\delta^{\nu\beta}}{D^{\mu\mu}D^{\nu\nu}-K^{\mu\mu}K^{\nu\nu}} (\!P_{i}D\!-\!DKP_{i}K\!+\!\partial_iD)^{\beta\alpha},\\ &=\frac{\!P_{i}^{\nu\mu}D^{\mu\mu}\!-\!D^{\nu\nu}K^{\nu\nu}P_{i}^{\nu\mu}K^{\mu\mu}\!+\!\partial_iD^{\nu\nu}\delta^{\nu\mu}}{D^{\nu\nu}D^{\mu\mu}-K^{\nu\nu}K^{\mu\mu}}. \end{split} \end{equation} We find that $W_i$ is Hermitian, and it has structure \[\label{eq:structure_of_A} W_i=\begin{bmatrix} W_{Xi} & W_{Yi} \\ \overline{W_{Yi}} & \overline{W_{Xi}} \end{bmatrix},\quad W_{Xi}^{kl} =-\frac{\lambda_k-\lambda_l}{\lambda_k\lambda_l-1}R_i^{kl}+\frac{\partial_i\lambda_k}{\lambda_k^2-1}\delta^{kl},\quad W_{Yi}^{kl}=\frac{\lambda_k+\lambda_l}{\lambda_k\lambda_l+1}Q_i^{kl}. 
\end{equation}
Finally, we have
\[
\frac{1}{2}\mathrm{tr}[DW_i]=\frac{1}{2}\sum_{\mu,\nu}D^{\nu\mu}W_i^{\mu\nu}=\sum_{k=1}^N\frac{\lambda_k\partial_i\lambda_k}{\lambda_k^2-1},
\end{equation}
which in combination with Eq.~\eqref{eq:SLD_mixed3} gives Eq.~\eqref{eq:SLD_mixed_williamson}.
\begin{comment}
Defining $\Delta \boldsymbol{a}=(a_1,\dots,a_N)^T$ and using $S^{-1}=KS^\dag K$,
\[
S^{-1}{\Delta\boldsymbol{\hat{A}}}=\begin{bmatrix}
\alpha^\dag & -\beta^T \\
-\beta^\dag & \alpha^T
\end{bmatrix}
\begin{bmatrix}
\Delta\boldsymbol{a} \\
\Delta\boldsymbol{a}^\dag
\end{bmatrix}
=\begin{bmatrix}
\alpha^\dag\Delta\boldsymbol{a}-\beta^T\Delta\boldsymbol{a}^\dag \\
\alpha^T\Delta\boldsymbol{a}^\dag-\beta^\dag\Delta\boldsymbol{a}
\end{bmatrix}.
\end{equation}
\end{comment}
\end{widetext}
\section{Problematic behavior at points of purity}\label{app:pops}
It follows from the Heisenberg uncertainty relations that $\lambda_k\geq1$, and $\lambda_k=1$ if and only if the mode is in a pure state. The multi-mode Gaussian state is pure when all symplectic eigenvalues are equal to one.
Here we show why the terms in Eq.~\eqref{eq:multimode_QFI} that are problematic for pure modes are defined by Eq.~\eqref{def:problematic_points} for the QFIM and by Eq.~\eqref{def:problematic_points2} for the cQFIM, respectively.
Let us first take a look at the QFIM. We ask how to define the problematic terms in Eq.~\eqref{eq:multimode_QFI}, which has been derived for $\lambda_k>1$, so that it remains valid also for $\lambda_k=1$. Rewriting the formula for the QFIM of pure Gaussian states, Eq.~\eqref{eq:pure_non-elegant}, in terms of the Williamson decomposition of the covariance matrix yields
\[\label{eq:pure_Williamson}
H^{ij}({\boldsymbol{\epsilon}})=2\sum_{k,l=1}^{N}\mathrm{Re}[\ov{Q_{i}}^{kl}Q_{j}^{kl}]+2\partial_i\boldsymbol{d}^\dag\sigma^{-1}\partial_j\boldsymbol{d}.
\end{equation}
Since for pure Gaussian states $\lambda_k({\boldsymbol{\epsilon}})=1$ for all $k$, we can easily see that Eq.~\eqref{eq:multimode_QFI} becomes identical to Eq.~\eqref{eq:pure_Williamson} when we define the problematic terms as
\[
\frac{(\lambda_k\!-\!\lambda_l)^2}{\lambda_k\lambda_l\!-\!1}({\boldsymbol{\epsilon}})=0,\quad \frac{\partial_i\lambda_k\partial_j\lambda_k}{\lambda_k^2-1}({\boldsymbol{\epsilon}})=0,
\end{equation}
for ${\boldsymbol{\epsilon}}$ such that $\lambda_k({\boldsymbol{\epsilon}})=\lambda_l({\boldsymbol{\epsilon}})=1$. We assume that defining the problematic terms this way is valid also when only some of the symplectic eigenvalues are equal to 1, rather than all of them as in the case of pure Gaussian states. Of course, this could also be derived precisely by carefully studying this intermediate case, for example by using the methods introduced in Ref.~\cite{Monras2013a} for single-parameter estimation: solving the Stein equation for the symmetric logarithmic derivatives, and utilizing the Moore-Penrose pseudoinverse for writing the solution for the QFIM.
Now, according to Ref.~\cite{vsafranek2017discontinuities}, the cQFIM can be calculated from the QFIM by performing a special limit:
\[\label{eq:limitHc}
H_c^{ij}({\boldsymbol{\epsilon}})=\lim_{d\epsilon\rightarrow 0}H^{ij}({\boldsymbol{\epsilon}}+d\epsilon\,\boldsymbol{e}_i)=\lim_{d\epsilon\rightarrow 0}H^{ij}({\boldsymbol{\epsilon}}+d\epsilon\,\boldsymbol{e}_j),
\end{equation}
where $\boldsymbol{e}_i=(0,\dots,0,1,0\dots,0)$ denotes the unit vector with $1$ at the $i$'th position. Let ${\boldsymbol{\epsilon}}$ be a point such that $\lambda_k({\boldsymbol{\epsilon}})=1$ for some $k$, and $\sigma({\boldsymbol{\epsilon}})\in C^{(2)}$. The function $\lambda_k$ must achieve a local minimum at the point ${\boldsymbol{\epsilon}}$, because $\lambda_k\geq 1$ everywhere.
The Taylor expansion must be of the form
\[
\lambda_k({\boldsymbol{\epsilon}}+{\mathrm{d}\!\!\;\boldsymbol{\epsilon}})=1+\frac{1}{2}{\mathrm{d}\!\!\;\boldsymbol{\epsilon}}^T\mathcal{H}_k{\mathrm{d}\!\!\;\boldsymbol{\epsilon}}+\cdots,
\end{equation}
where $\mathcal{H}_k^{ij}=\partial_i\partial_j\lambda_k$ is the Hessian matrix, which is positive semi-definite at this minimum. The Taylor expansion of the derivative at the same point is
\[
\partial_i\lambda_k({\boldsymbol{\epsilon}}+{\mathrm{d}\!\!\;\boldsymbol{\epsilon}})=\frac{1}{2}(\boldsymbol{e}_i^T\mathcal{H}_k{\mathrm{d}\!\!\;\boldsymbol{\epsilon}}+{\mathrm{d}\!\!\;\boldsymbol{\epsilon}}^T\mathcal{H}_k\boldsymbol{e}_i)+\cdots =\sum_j\mathcal{H}_k^{ij}{\mathrm{d}\!\!\;\boldsymbol{\epsilon}}^j+\cdots.
\end{equation}
Specifically, for ${\mathrm{d}\!\!\;\boldsymbol{\epsilon}}\equiv d\epsilon\boldsymbol{e}_i$ we have
\begin{subequations}
\begin{align}
\lambda_k({\boldsymbol{\epsilon}}+d\epsilon\boldsymbol{e}_i)&=1+\frac{1}{2}\mathcal{H}_k^{ii}d\epsilon^2+\cdots,\\
\partial_i\lambda_k({\boldsymbol{\epsilon}}+d\epsilon\boldsymbol{e}_i)&=\mathcal{H}_k^{ii}d\epsilon+\cdots,\\
\partial_j\lambda_k({\boldsymbol{\epsilon}}+d\epsilon\boldsymbol{e}_i)&=\mathcal{H}_k^{ij}d\epsilon+\cdots.
\end{align} \end{subequations} According to Eq.~\eqref{eq:limitHc}, and assuming ${\boldsymbol{\epsilon}}$ is such that $\lambda_k({\boldsymbol{\epsilon}})=\lambda_l({\boldsymbol{\epsilon}})=1$, to obtain the cQFIM $H_c$, the problematic terms in Eq.~\eqref{eq:multimode_QFI} must be defined as \begin{subequations} \begin{align} \frac{(\lambda_k\!-\!\lambda_l)^2}{\lambda_k\lambda_l\!-\!1}({\boldsymbol{\epsilon}})\!&:=\!\lim_{d\epsilon\rightarrow 0}\frac{(\lambda_k({\boldsymbol{\epsilon}}+d\epsilon\,\boldsymbol{e}_i)\!-\!\lambda_l({\boldsymbol{\epsilon}}+d\epsilon\,\boldsymbol{e}_i))^2}{\lambda_k({\boldsymbol{\epsilon}}+d\epsilon\,\boldsymbol{e}_i)\lambda_l({\boldsymbol{\epsilon}}+d\epsilon\,\boldsymbol{e}_i)\!-\!1}\nonumber\\ &=\lim_{d\epsilon\rightarrow 0}\frac{\frac{1}{4}(\mathcal{H}_k^{ii}-\mathcal{H}_l^{ii})^2d\epsilon^4+\cdots}{\frac{1}{2}(\mathcal{H}_k^{ii}+\mathcal{H}_l^{ii})d\epsilon^2+\cdots}=0,\\ \frac{\partial_i\lambda_k\partial_j\lambda_k}{\lambda_k^2-1}({\boldsymbol{\epsilon}})\!&:=\!\lim_{d\epsilon\rightarrow 0}\frac{\partial_i\lambda_k({\boldsymbol{\epsilon}}+d\epsilon\,\boldsymbol{e}_i)\partial_j\lambda_k({\boldsymbol{\epsilon}}+d\epsilon\,\boldsymbol{e}_i)}{\lambda_k^2({\boldsymbol{\epsilon}}+d\epsilon\,\boldsymbol{e}_i)-1}\nonumber\\ &=\lim_{d\epsilon\rightarrow 0}\frac{\mathcal{H}_k^{ii}\mathcal{H}_k^{ij}d\epsilon^2+\cdots}{\mathcal{H}_k^{ii}d\epsilon^2+\cdots}=\mathcal{H}_k^{ij}. \end{align} \end{subequations} \section{Estimation of the remainder in the limit formula}\label{app:Remainder} Here we prove the bound on the remainder of the limit formula, Eq.~\eqref{eq:remainder}. We consider the Williamson decomposition $\sigma=SDS^\dag$. An element of the infinite series, Eq.~\eqref{eq:limit_formula}, can be written as \[ a_n=\mathrm{tr}\big[A^{-n}\partial_i{A}A^{-n}\partial_j{A}\big]=\mathrm{tr}\big[{D^{-n}B_iD^{-n}B_j}\big], \end{equation} where $B_i=S^\dag\partial_i{A}(S^\dag)^{-1}K^{-n-1}$. 
We can derive the inequalities
\[\label{eq:Remainder1}
\begin{split}
\norm{a_n}&=\norm{\sum_{k,l}\frac{1}{\lambda_k^n\lambda_l^n}B_i^{kl}B_j^{lk}} \leq\norm{\sum_{k,l}\frac{1}{\lambda_k^{n}\lambda_l^{n}}\norm{B_i^{kl}}\norm{B_j^{lk}}}\\
&\leq\frac{1}{\lambda_{\mathrm{min}}^{2n}}\norm{\sum_{k,l}\norm{B_i^{kl}}\norm{B_j^{lk}}}\leq\frac{1}{\lambda_{\mathrm{min}}^{2n}}\sqrt{\sum_{k,l}\norm{B_i^{kl}}^2}\sqrt{\sum_{k,l}\norm{B_j^{kl}}^2}\\
&=\frac{1}{\lambda_{\mathrm{min}}^{2n}}\sqrt{\mathrm{tr}[B_i^\dag B_i]}\sqrt{\mathrm{tr}[B_j^\dag B_j]},
\end{split}
\end{equation}
where the last inequality is the Cauchy-Schwarz inequality applied to $B_i^{kl}$ and $B_j^{lk}$, considered as vectors with $2N\times2N$ entries, where $N$ is the number of modes, and $\lambda_{\mathrm{min}}:=\min_{i}\{\lambda_i\}$ denotes the smallest symplectic eigenvalue. Defining the Hermitian matrix $C_i:=S^\dag \partial_i{A}KS$, we have
\[\label{eq:Remainder2}
\begin{split}
\mathrm{tr}[(A\partial_i{A})^2]&=\mathrm{tr}[C_i^\dag D C_i D]=\sum_{k,l}\norm{C_i^{kl}}^2\lambda_k\lambda_l\\
&\geq\lambda_{\mathrm{min}}^2\mathrm{tr}[C_i^\dag C_i]=\lambda_{\mathrm{min}}^2\mathrm{tr}[B_i^\dag B_i].
\end{split}
\end{equation}
Combining \eqref{eq:Remainder1} and \eqref{eq:Remainder2} gives
\[
\norm{a_n}\leq\sqrt{\mathrm{tr}\big[(A\partial_i{A})^2\big]}\sqrt{\mathrm{tr}\big[(A\partial_j{A})^2\big]}\lambda_{\mathrm{min}}^{-2n-2}.
\end{equation}
For $\lambda_{\mathrm{min}}>1$ we can estimate the remainder,
\[
\begin{split}
\norm{R_M}&\leq \frac{\sqrt{\mathrm{tr}\big[(A\partial_i{A})^2\big]}\sqrt{\mathrm{tr}\big[(A\partial_j{A})^2\big]}}{2}\!\!\sum_{n=M+1}^\infty\!\!\!\!{\lambda_{\mathrm{min}}^{-2n-2}}\\
&=\frac{\sqrt{\mathrm{tr}\big[(A\partial_i{A})^2\big]}\sqrt{\mathrm{tr}\big[(A\partial_j{A})^2\big]}}{2\lambda_{\mathrm{min}}^{2M+2}(\lambda_{\mathrm{min}}^2-1)}.
\end{split}
\end{equation}
\section{Expressions for pure states}\label{app:pure_state_condition}
Here we derive Eqs.~\eqref{eq:sld_pure} and~\eqref{eq:pure_state_condition}.
We start by deriving \begin{equation} \begin{split} &\vectorization{{\mathscr{A}}_{i\nu}}=\mathfrak{M}_\nu^{-1}\vectorization{\partial_i\sigma}\\ &=(\nu^2\ov{\sigma}\otimes\sigma-K\otimes K)^{-1}\vectorization{\partial_i\sigma}\\ &=\tfrac{1}{\nu^2}(I-(\nu^2\ov{A}\otimes A)^{-1})^{-1}\big(\ov{\sigma}\otimes\sigma\big)^{-1}\vectorization{\partial_i\sigma}\\ &=\tfrac{1}{\nu^2}\sum_{n=0}^\infty(\nu^2\ov{A}\otimes A)^{-n}\big(\ov{\sigma}\otimes\sigma\big)^{-1}\vectorization{\partial_i\sigma}\\ &=\!\tfrac{1}{\nu^2}\bigg(\!\sum_{n=0}^\infty(\nu^2\ov{A}\otimes A)^{-2n}\!+\!(\nu^2\ov{A}\otimes A)^{-2n-1}\!\!\bigg)\big(\ov{\sigma}\otimes\sigma\big)^{-1}\!\!\!\vectorization{\partial_i\sigma}\\ &=\!\tfrac{1}{\nu^2}\!\!\sum_{n=0}^\infty\!\!(I+(\nu^2\ov{A}\otimes A)^{-1})(\nu^{-4}\ov{A}^{-2}\!\otimes\! A^{-2})^{n}\big(\ov{\sigma}\otimes\sigma\big)^{-1}\!\!\!\vectorization{\partial_i\sigma}\\ &=\tfrac{1}{\nu^2}(I+(\nu^2\ov{A}\otimes A)^{-1})\sum_{n=0}^\infty(\nu^{-4})^{n}\big(\ov{\sigma}\otimes\sigma\big)^{-1}\vectorization{\partial_i\sigma}\\ &=\tfrac{\nu^2}{\nu^4-1}(I+(\nu^2\ov{A}\otimes A)^{-1})\big(\ov{\sigma}\otimes\sigma\big)^{-1}\vectorization{\partial_i\sigma}\\ &=\tfrac{\nu^2}{\nu^4-1}(I+\tfrac{1}{\nu^2}(\ov{A}\otimes A)^{-1})\big(\ov{A}\otimes A\big)^{-1}\big(K\otimes K\big)\vectorization{\partial_i\sigma}\\ &=\tfrac{\nu^2}{\nu^4-1}(\ov{A}^{-1}\otimes A^{-1}+\tfrac{1}{\nu^2})\big(K\otimes K\big)\vectorization{\partial_i\sigma}\\ &=\tfrac{\nu^2}{\nu^4-1}(\ov{A}^{-1}\otimes A^{-1}+\tfrac{1}{\nu^2})\vectorization{\partial_iA K}\\ &=\tfrac{\nu^2}{\nu^4-1}(\vectorization{A^{-1}\partial_iA K (A^{-1})^\dag}+\tfrac{1}{\nu^2}\vectorization{\partial_iA K})\\ &=\tfrac{\nu^2}{\nu^4-1}(-\vectorization{\partial_iA A^{-1} K (A^{-1})^\dag}+\tfrac{1}{\nu^2}\vectorization{\partial_iA K})\\ &=\tfrac{\nu^2}{\nu^4-1}(-\vectorization{\partial_iA A^{-2}K}+\tfrac{1}{\nu^2}\vectorization{\partial_iA K})\\ &=\tfrac{\nu^2}{\nu^4-1}(-1+\tfrac{1}{\nu^2})\vectorization{\partial_iA K}\\
&=-\tfrac{1}{1+\nu^2}\vectorization{\partial_iA K}, \end{split} \end{equation} where we have used $A^2=1$ and $A^{-1}\partial_iA =-\partial_iA A^{-1}$, which holds for pure states. Taking the limit $\nu\rightarrow 1$ we obtain the expression for pure states \begin{equation} {\mathscr{A}}_i=-\tfrac{1}{2}\partial_iA K, \end{equation} which after inserting into Eq.~\eqref{eq:SLD_mixed} gives \begin{equation} \hat{\mathscr{L}}_i({\boldsymbol{\epsilon}})=-\tfrac{1}{2}{\Delta\boldsymbol{\hat{A}}}^\dag \partial_iA K{\Delta\boldsymbol{\hat{A}}}+\tfrac{1}{4}\mathrm{tr}[A \partial_iA]+2{\Delta\boldsymbol{\hat{A}}}^\dag\sigma^{-1}\partial_i{\boldsymbol{d}}. \end{equation} The second term is zero, because $\mathrm{tr}[A \partial_iA]=\frac{1}{2}\partial_i\mathrm{tr}[A^2]=\frac{1}{2}\partial_i\mathrm{tr}[I]=0$. Alternatively, we can use \begin{equation} \partial_iA =\partial_iAA^{-2}=-A^{-1}\partial_iAA^{-1}=-\sigma^{-1}\partial_i\sigma \sigma^{-1} K \end{equation} to derive \begin{equation} \hat{\mathscr{L}}_i({\boldsymbol{\epsilon}})=\tfrac{1}{2}{\Delta\boldsymbol{\hat{A}}}^\dag\!\sigma^{-1}\! \partial_i\sigma \sigma^{-1}\!{\Delta\boldsymbol{\hat{A}}}+2{\Delta\boldsymbol{\hat{A}}}^\dag\!\sigma^{-1}\partial_i{\boldsymbol{d}}, \end{equation} which proves Eq.~\eqref{eq:sld_pure}. Further, \begin{equation} \begin{split} &\vectorization{\partial_i\sigma}^\dag\mathfrak{M}_{\nu}^{-1}(\ov{\sigma}\!\otimes\! K\!-\!K\!\otimes\!\sigma)\mathfrak{M}_{\nu}^{-1}\vectorization{\partial_j\sigma}\\ &=\tfrac{1}{(1+\nu^2)^2}\vectorization{\partial_iA K}^\dag(\ov{\sigma}\!\otimes\!
K\!-\!K\!\otimes\!\sigma)\vectorization{\partial_jA K}\\ &=\tfrac{1}{(1+\nu^2)^2}\vectorization{\partial_iA K}^\dag(\vectorization{K\partial_jA A}-\vectorization{\sigma\partial_jA})\\ &=\tfrac{1}{(1+\nu^2)^2}\mathrm{tr}[(\partial_iA K)^\dag({K\partial_jA A}-{\sigma\partial_jA})]\\ &=\tfrac{1}{(1+\nu^2)^2}\mathrm{tr}[\partial_iA K({K\partial_jA A}-{\sigma\partial_jA})]\\ &=\tfrac{1}{(1+\nu^2)^2}\mathrm{tr}[A[\partial_iA,\partial_jA]]\\ &=\tfrac{1}{(1+\nu^2)^2}\mathrm{tr}[K\sigma[K\partial_i\sigma,K\partial_j\sigma]], \end{split} \end{equation} which in combination with the regularization procedure, Eq.~\eqref{eq:condition_QCRregularization_procedure2}, proves Eq.~\eqref{eq:pure_state_condition}. \end{document}
Hermite number In mathematics, Hermite numbers are values of Hermite polynomials at zero argument. Typically they are defined for physicists' Hermite polynomials. Formal definition The numbers Hn = Hn(0), where Hn(x) is a Hermite polynomial of order n, may be called Hermite numbers.[1] The first Hermite numbers are: $H_{0}=1\,$ $H_{1}=0\,$ $H_{2}=-2\,$ $H_{3}=0\,$ $H_{4}=+12\,$ $H_{5}=0\,$ $H_{6}=-120\,$ $H_{7}=0\,$ $H_{8}=+1680\,$ $H_{9}=0\,$ $H_{10}=-30240\,$ Recursion relations These are obtained from the recursion relations of the Hermite polynomials at x = 0: $H_{n}=-2(n-1)H_{n-2}.\,\!$ Since H0 = 1 and H1 = 0 one can construct a closed formula for Hn: $H_{n}={\begin{cases}0,&{\mbox{if }}n{\mbox{ is odd}}\\(-1)^{n/2}2^{n/2}(n-1)!!,&{\mbox{if }}n{\mbox{ is even}}\end{cases}}$ where (n - 1)!! = 1 × 3 × ... × (n - 1). Usage From the generating function of the Hermite polynomials it follows that $\exp(-t^{2}+2tx)=\sum _{n=0}^{\infty }H_{n}(x){\frac {t^{n}}{n!}}\,\!$ Reference [1] gives a formal power series: $H_{n}(x)=(H+2x)^{n}\,\!$ where formally the n-th power of H, Hn, is the n-th Hermite number, Hn. (See Umbral calculus.) Notes 1. Weisstein, Eric W. "Hermite Number." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/HermiteNumber.html
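The recursion and the closed formula above are easy to check against each other numerically. A minimal sketch in Python (the function names are illustrative, not from any library):

```python
def hermite_numbers(count):
    """Hermite numbers H_n = H_n(0) via the recursion H_n = -2*(n-1)*H_{n-2}."""
    h = [1, 0]  # H_0 = 1, H_1 = 0
    for n in range(2, count):
        h.append(-2 * (n - 1) * h[n - 2])
    return h[:count]

def hermite_number_closed(n):
    """Closed form: 0 for odd n, (-1)^(n/2) * 2^(n/2) * (n-1)!! for even n."""
    if n % 2 == 1:
        return 0
    double_fact = 1
    for k in range(1, n, 2):  # (n-1)!! = 1 * 3 * ... * (n-1); empty product = 1
        double_fact *= k
    return (-1) ** (n // 2) * 2 ** (n // 2) * double_fact

print(hermite_numbers(11))
# -> [1, 0, -2, 0, 12, 0, -120, 0, 1680, 0, -30240], matching the table above
```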
How to deal with a huge action space, where, at every step, there is a variable number of legal actions? I am working on creating an RL-based AI for a certain board game. Just as a general overview of the game so that you understand what it's all about: It's a discrete turn-based game with a board of size $n \times n$ ($n$ depending on the number of players). Each player gets $m$ pieces, which the player must place on the board. In the end, the one who has the least number of pieces wins. There are of course rules as to how the pieces can be placed, so that not all placements are legal at every move. I have the game working in an OpenAI Gym environment (i.e. control by step function), have the board representation as the observation, and I have defined the reward function. The thing I am struggling with right now is to meaningfully represent the action space. I looked into how AlphaZero tackles chess. The action space there is $8*8*73 = 4672$: for every possible tile on the board, there are 73 movement-related modalities. So, for every move, the algorithm comes up with 4672 values, the illegal ones are set to zero, and the non-zero ones are re-normalized. Now, I am not sure if such an approach would be feasible for my use-case, as my calculations show that I have a theoretical cap of ~30k possible actions ($n * n * m$) if using the same way of calculation. I am not sure if this would still work, especially considering that I don't have the DeepMind computing resources at hand. Therefore, my question: Is there any other way of doing it apart from selecting the legal actions from all theoretically possible ones? The legal actions would be just a fraction of the ~30k possible ones. However, at every step, the legal actions would change because every new piece determines the new placement possibilities (also, the already placed pieces are not available anymore, i.e. the action space generally gets smaller with every step).
I am thinking of games, like Starcraft 2, where the action space must be larger still and they demonstrate good results, not only by DeepMind but also by private enthusiasts with, for example, DQN. I would appreciate any ideas, hints, or readings! reinforcement-learning dqn game-ai action-spaces board-games edited Jan 11 at 23:57 ytolochko A paper by Google for you: Deep Reinforcement Learning in Large Discrete Action Spaces – Leyou Yang Jun 13 '19 at 1:46 One way I can think of is to redefine "actions" in a game to make them more fragmented, in such a way that a player has multiple actions per turn. In chess, for example, we can define an action as choosing a tile from which to move, or choosing the motion from the chosen tile, as 2 separate actions. As an example, a turn might consist of the following two actions: Choose E4 Move forward 2 spaces That way there are 64 + 73, rather than 64 * 73, possible actions. The transition model would indicate that it's still the same player's turn after a "tile selection" action is done. Of course, this would require increasing the state space in such a way that you can determine which action is legal. So there's a difference between a board state where nothing is "selected" and the same board state where one tile is selected by one player. In the chess example, this would require 2 more boolean CNN layers, one for each player, indicating which tile (if any) is "selected". I never tried this myself, and I imagine that this might make the learning slower and more difficult, since it requires a deeper tree in the MCTS for the same set of actions. Bridgeburners
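For the AlphaZero-style approach mentioned in the question (score all theoretically possible actions, zero out the illegal ones, renormalize), the masking step itself is cheap even for ~30k actions. A minimal pure-Python sketch (array sizes and the 50-legal-moves figure are illustrative only):

```python
import math
import random

def masked_policy(logits, legal_mask):
    """Softmax over legal actions only; illegal actions get probability 0.0.
    Assumes at least one legal action."""
    legal_scores = [s for s, ok in zip(logits, legal_mask) if ok]
    m = max(legal_scores)  # subtract the max to stabilize the exponentials
    exps = [math.exp(s - m) if ok else 0.0 for s, ok in zip(logits, legal_mask)]
    z = sum(exps)
    return [e / z for e in exps]

random.seed(0)
n_actions = 30_000  # theoretical cap from the question (n * n * m)
logits = [random.gauss(0, 1) for _ in range(n_actions)]
legal = set(random.sample(range(n_actions), 50))  # pretend ~50 moves are legal this turn
legal_mask = [i in legal for i in range(n_actions)]

probs = masked_policy(logits, legal_mask)
```

The same idea is usually implemented in a deep-learning framework by setting illegal logits to a large negative value before the softmax, which keeps the operation differentiable for policy-gradient training.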
A Gentle Introduction to the Jacobian By Stefania Cristina on August 2, 2021 in Calculus In the literature, the term Jacobian is often used interchangeably to refer to either the Jacobian matrix or its determinant. Both the matrix and the determinant have useful and important applications: in machine learning, the Jacobian matrix aggregates the partial derivatives that are necessary for backpropagation; the determinant is useful in the process of changing between variables. In this tutorial, you will review a gentle introduction to the Jacobian. The Jacobian matrix collects all first-order partial derivatives of a multivariate function that can be used for backpropagation. The Jacobian determinant is useful in changing between variables, where it acts as a scaling factor between one coordinate space and another. Photo by Simon Berger, some rights reserved. This tutorial is divided into three parts; they are: Partial Derivatives in Machine Learning; The Jacobian Matrix; Other Uses of the Jacobian. We have thus far mentioned gradients and partial derivatives as being important for an optimization algorithm to update, say, the model weights of a neural network to reach an optimal set of weights. The use of partial derivatives permits each weight to be updated independently of the others, by calculating the gradient of the error curve with respect to each weight in turn. Many of the functions that we usually work with in machine learning are multivariate, vector-valued functions, which means that they map multiple real inputs, n, to multiple real outputs, m: For example, consider a neural network that classifies grayscale images into several classes. The function being implemented by such a classifier would map the n pixel values of each single-channel input image, to m output probabilities of belonging to each of the different classes.
In training a neural network, the backpropagation algorithm is responsible for sharing back the error calculated at the output layer, among the neurons comprising the different hidden layers of the neural network, until it reaches the input. The fundamental principle of the backpropagation algorithm in adjusting the weights in a network is that each weight in a network should be updated in proportion to the sensitivity of the overall error of the network to changes in that weight. – Page 222, Deep Learning, 2019. This sensitivity of the overall error of the network to changes in any one particular weight is measured in terms of the rate of change, which, in turn, is calculated by taking the partial derivative of the error with respect to the same weight. For simplicity, suppose that one of the hidden layers of some particular network consists of just a single neuron, k. We can represent this in terms of a simple computational graph: A Neuron with a Single Input and a Single Output Again, for simplicity, let's suppose that a weight, wk, is applied to an input of this neuron to produce an output, zk, according to the function that this neuron implements (including the nonlinearity). Then, the weight of this neuron can be connected to the error at the output of the network as follows (the following formula is formally known as the chain rule of calculus, but more on this later in a separate tutorial): Here, the derivative, dzk / dwk, first connects the weight, wk, to the output, zk, while the derivative, derror / dzk, subsequently connects the output, zk, to the network error. It is more often the case that we'd have many connected neurons populating the network, each attributed a different weight. 
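The single-neuron chain rule above can be made concrete in a few lines of code. Here the neuron is assumed, purely for illustration, to compute zk = tanh(wk * x), and the network error is taken to be a squared error on zk; the analytic chain-rule product derror/dwk = (derror/dzk) * (dzk/dwk) is checked against a numerical derivative:

```python
import math

def neuron_error_gradient(w_k, x, target):
    """Chain rule for one neuron: d(error)/d(w_k) = d(error)/d(z_k) * d(z_k)/d(w_k)."""
    z_k = math.tanh(w_k * x)          # neuron output (tanh nonlinearity, assumed)
    derror_dz = 2.0 * (z_k - target)  # from error = (z_k - target)^2
    dz_dw = (1.0 - z_k ** 2) * x      # d tanh(u)/du = 1 - tanh(u)^2, with u = w_k * x
    return derror_dz * dz_dw

def numerical_gradient(w_k, x, target, h=1e-6):
    """Central finite-difference check of the same derivative."""
    f = lambda w: (math.tanh(w * x) - target) ** 2
    return (f(w_k + h) - f(w_k - h)) / (2 * h)

analytic = neuron_error_gradient(0.7, 1.5, 0.2)
numeric = numerical_gradient(0.7, 1.5, 0.2)
```

The two values agree to many decimal places, which is exactly the consistency that backpropagation relies on when it chains such local derivatives through every layer.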
Since we are more interested in such a scenario, we can generalise beyond the scalar case to consider multiple inputs and multiple outputs: This sum of terms can be represented more compactly as follows: Or, equivalently, in vector notation using the del operator, ∇, to represent the gradient of the error with respect to either the weights, wk, or the outputs, zk: The back-propagation algorithm consists of performing such a Jacobian-gradient product for each operation in the graph. This means that the backpropagation algorithm can relate the sensitivity of the network error to changes in the weights, through a multiplication by the Jacobian matrix, (∂zk / ∂wk)T. Hence, what does this Jacobian matrix contain? The Jacobian matrix collects all first-order partial derivatives of a multivariate function. Specifically, consider first a function that maps u real inputs, to a single real output: Then, for an input vector, x, of length, u, the Jacobian vector of size, u × 1, can be defined as follows: Now, consider another function that maps u real inputs, to v real outputs: Then, for the same input vector, x, of length, u, the Jacobian is now a v × u matrix, J ∈ ℝv×u, that is defined as follows: Reframing the Jacobian matrix into the machine learning problem considered earlier, while retaining the same number of u real inputs and v real outputs, we find that this matrix would contain the following partial derivatives: An important technique when working with integrals involves the change of variables (also referred to as integration by substitution or u-substitution), where an integral is simplified into another integral that is easier to compute. In the single variable case, substituting some variable, x, with another variable, u, can transform the original function into a simpler one for which it is easier to find an antiderivative.
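The displayed formulas in the original post are images that did not survive extraction; the standard definitions being described can be reconstructed as follows (this is the textbook form, not the author's original rendering):

```latex
% Jacobian (gradient) vector for f : R^u -> R,
% and Jacobian matrix for f = (f_1, ..., f_v) : R^u -> R^v:
\mathbf{J} = \begin{bmatrix}
\frac{\partial f}{\partial x_1} & \cdots & \frac{\partial f}{\partial x_u}
\end{bmatrix}^{\mathsf{T}},
\qquad
\mathbf{J} = \begin{bmatrix}
\frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_u} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_v}{\partial x_1} & \cdots & \frac{\partial f_v}{\partial x_u}
\end{bmatrix} \in \mathbb{R}^{v \times u}.
```

Row i of the matrix is the gradient of the single output f_i, which recovers the u × 1 vector case when v = 1.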
In the two variable case, an additional reason might be that we would also wish to transform the region over which we are integrating into a different shape. In the single variable case, there's typically just one reason to want to change the variable: to make the function "nicer" so that we can find an antiderivative. In the two variable case, there is a second potential reason: the two-dimensional region over which we need to integrate is somehow unpleasant, and we want the region in terms of u and v to be nicer—to be a rectangle, for example. – Page 412, Single and Multivariable Calculus, 2020. When performing a substitution between two (or possibly more) variables, the process starts with a definition of the variables between which the substitution is to occur. For example, x = f(u, v) and y = g(u, v). This is then followed by a conversion of the integral limits depending on how the functions, f and g, will transform the u–v plane into the x–y plane. Finally, the absolute value of the Jacobian determinant is computed and included, to act as a scaling factor between one coordinate space and another. Deep Learning, 2017. Mathematics for Machine Learning, 2020. Single and Multivariable Calculus, 2020. Jacobian matrix and determinant, Wikipedia. Integration by substitution, Wikipedia. In this tutorial, you discovered a gentle introduction to the Jacobian. About Stefania Cristina Stefania Cristina, PhD is a Lecturer with the Department of Systems and Control Engineering, at the University of Malta.
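To round off the change-of-variables discussion with a standard worked example (polar coordinates; this example is not part of the original tutorial):

```latex
x = r\cos\theta,\quad y = r\sin\theta, \qquad
\mathbf{J} = \begin{bmatrix}
\partial x/\partial r & \partial x/\partial\theta \\
\partial y/\partial r & \partial y/\partial\theta
\end{bmatrix}
= \begin{bmatrix}
\cos\theta & -r\sin\theta \\
\sin\theta & \phantom{-}r\cos\theta
\end{bmatrix},
\qquad
|\det \mathbf{J}| = r\cos^2\theta + r\sin^2\theta = r,
```

so that the double integral transforms as ∬ f(x, y) dx dy = ∬ f(r cos θ, r sin θ) r dr dθ, with the Jacobian determinant r acting as the scaling factor between the two coordinate spaces.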
Sandy Green (mathematician) James Alexander "Sandy" Green FRS (26 February 1926 – 7 April 2014) was a mathematician and Professor at the Mathematics Institute at the University of Warwick, who worked in the field of representation theory. Sandy Green Born: 26 February 1926, Rochester, New York, US. Died: 7 April 2014 (aged 88). Citizenship: British, American. Alma mater: University of St Andrews; St John's College, Cambridge. Known for: work on group representation theory, Green's relations. Awards: Senior Berwick Prize (1984); de Morgan Medal (2001). Scientific career Fields: Mathematics. Institutions: Bletchley Park; University of Manchester; University of Sussex; University of Warwick. Doctoral advisors: Philip Hall, David Rees. Doctoral students: Perdita Stevens. Early life Sandy Green was born in February 1926 in Rochester, New York, but moved to Toronto with his emigrant Scottish parents later that year. The family returned to Britain in May 1935 when his father, Frederick C. Green, took up the Drapers Professorship of French at the University of Cambridge. Education Green was educated at the Perse School, Cambridge. He won a scholarship to the University of St Andrews and matriculated aged 16 in 1942. He took an ordinary BSc in 1944, and then, after scientific service in the war, was awarded a BSc Honours in 1947. He gained his PhD at St John's College, Cambridge in 1951, under the supervision of Philip Hall and David Rees.[1][2][3] Career World War II In the summer of 1944, he was conscripted for national scientific service at the age of eighteen and was assigned to work at Bletchley Park, where he acted as a human "computer" carrying out calculations in Hut F, the "Newmanry", a department led by Max Newman, which used special-purpose Colossus computers to assist in breaking the German "Tunny" teleprinter cipher.[4] Academic work His first lecturing post (1950) was at the University of Manchester, where Newman was his Head of department.
In 1964 he became a Reader at the University of Sussex, and then in 1965 was appointed as a professor at the newly formed Mathematics Institute at Warwick University, where he led the algebra group. He spent several periods as a visiting academic in the United States, beginning with a year at the Institute for Advanced Study in Princeton, New Jersey in 1960–61, as well as similar visits to universities in France, Germany and Portugal. After retiring from Warwick he became a member of the faculty and Professor Emeritus at the Mathematics Institute of the University of Oxford, in whose meetings he participated actively. His final publication was produced at the age of eighty. Work in mathematics Green found all the characters of general linear groups over finite fields (Green 1955) and invented the Green correspondence in modular representation theory. Both Green functions in the representation theory of groups of Lie type and Green's relations in the area of semigroups are named after him. His final publication (2007) was a revised and augmented edition of his 1980 work, Polynomial Representations of GL(n). Personal life Green met his wife, Margaret Lord, at Bletchley Park, where she worked as a Colossus operator, also in the Newmanry section (Hut F). The couple married in August 1950, and have two daughters and a son. Up to his death, he lived in Oxford. Honours He was elected to the Royal Society of Edinburgh in 1968 and the Royal Society in 1987[5] and was awarded two London Mathematical Society prizes: the Senior Berwick Prize in 1984[6] and the de Morgan Medal in 2001.[5][7][8] Bibliography • (1955) The characters of the finite general linear group, Trans. A. M. S. 80 402–447. • (2007) Polynomial Representations of GL_n, Lecture Notes in Mathematics, Springer, Vol. 830. 2nd edition with an Appendix on Schensted Correspondence and Littelmann Paths, K. Erdmann, J. A. Green and M. Shocker References 1. J. A. 
Green (1951) Abstract Algebra and Semigroups, PhD thesis, University of Cambridge 2. Green, James A. (1951). "On the structure of semigroups". Ann. Math. 2. 54 (1): 163–172. doi:10.2307/1969317. hdl:10338.dmlcz/100067. JSTOR 1969317. Zbl 0043.25601. 3. Green, J. A.; Roseblade, J. E.; Thompson, John G. (1984), "Obituary: Philip Hall", The Bulletin of the London Mathematical Society, 16 (6): 603–626, doi:10.1112/blms/16.6.603, ISSN 0024-6093, MR 0758133 4. O'Connor, John J.; Robertson, Edmund F., "James Alexander Green", MacTutor History of Mathematics archive, University of St Andrews 5. "Obituaries: James Alexander (Sandy) Green". London Mathematical Society. Archived from the original on 28 December 2012. Retrieved 5 July 2014. 6. "Berwick prizes". The MacTutor History of Mathematics archive. 7. "Citation for James Alexander Green". London Mathematical Society. Retrieved 5 July 2014. 8. Donkin, Stephen; Erdmann, Karin (30 December 2019). "James Alexander Green. 26 February 1926 – 7 April 2014". Biographical Memoirs of Fellows of the Royal Society. 67: 173–190. doi:10.1098/rsbm.2019.0012. External links • Sandy Green at the Mathematics Genealogy Project • O'Connor, John J.; Robertson, Edmund F., "Sandy Green (mathematician)", MacTutor History of Mathematics Archive, University of St Andrews • Profile at Warwick University
\begin{document} \title[Quantum Time]{Time in Quantum Mechanics} \author{Curt A. Moyer} \affiliation{Department of Physics and Physical Oceanography, UNC Wilmington} \email{[email protected]} \begin{abstract} The failure of conventional quantum theory to recognize time as an observable and to admit time operators is addressed. Instead of focusing on the existence of a time operator for a given Hamiltonian, we emphasize the role of the Hamiltonian as the generator of translations in time to construct time states. Taken together, these states constitute what we call a timeline, or quantum history, that is adequate for the representation of any physical state of the system. Such timelines appear to exist even for the semi-bounded and discrete Hamiltonian systems ruled out by Pauli's theorem. However, the step from a timeline to a valid time operator requires additional assumptions that are not always met. Still, this approach illuminates the crucial issue surrounding the construction of time operators, and establishes quantum histories as legitimate alternatives to the familiar coordinate and momentum bases of standard quantum theory. \end{abstract} \pacs{03.65.Ca, 03.65.Ta} \maketitle \section{Introduction} The treatment of time in quantum mechanics is one of the challenging open questions in the foundations of quantum theory. On the one hand, time is the parameter entering Schr\"{o}dinger's equation and measured by an external laboratory clock. But time also plays the role of an \textit{observable} in questions involving the occurrence of an event (e.g. when a nucleon decays, or when a particle emerges from a potential barrier), and -- like every observable -- should be represented in the theory by an operator whose properties are predictors of the outcome of [event] time measurements made on physical systems. Yet no time operators occur in ordinary quantum mechanics. At its core, this is the \textit{quantum time problem}. 
As further testimony to this conundrum, the uncertainty principle for time/energy is known to have a different character than does the uncertainty principle for space/momentum. An important landmark in the historical development of the subject is an early `theorem' due to Pauli \cite{Pauli}. Pauli's argument essentially precludes the existence of a self-adjoint time operator for systems where the spectrum of the Hamiltonian is bounded, semi-bounded, or discrete, i.e., for most systems of physical interest. Pauli concluded that \textquotedblleft\ldots the introduction of an operator $\widehat{T}$ [time operator] must fundamentally be abandoned\ldots\textquotedblright. While counterexamples to Pauli's theorem are known, his assertion remains largely unquestioned and continues to be a major influence shaping much of the present work in this area. For a comprehensive, up-to-date review of this and related topics, see \cite{TQM},\cite{TQM2}. In this paper we advocate a different approach, one that emphasizes the statistical distribution of event times -- not the time operator -- as the primary construct. Essentially, we follow the program that regards time as a POVM (positive, operator-valued measure) observable \cite{Srinivas}. Event time distributions are calculated in the usual way from wave functions in the time basis. We show that this time basis exists even for the semi-bounded and discrete Hamiltonian systems ruled out by Pauli's theorem, and is adequate for the representation of any physical state. However, the step from a time basis to a valid time operator requires additional assumptions that are not always met. Still, this approach illuminates the crucial issue surrounding the construction of time operators and, at the same time (no pun intended), establishes the time basis as a legitimate alternative to the familiar coordinate and momentum bases of standard quantum theory. The plan of the paper is as follows: In Sec. 
II we introduce the time basis and establish the essential properties of its elements (the time states) for virtually any physical system. Sec. III explores the relationship between time states and time operators, and establishes existence criteria for the latter. In Secs. IV and V we show how some familiar results for specific systems can be recovered from the general theory advanced here, and in Sec. VI we obtain new results applicable to the free particle in a three-dimensional space. Results and conclusions are reported in Sec. VII. Throughout we adopt natural units in which $\hbar=1$, a choice that leads to improved transparency by simplifying numerous expressions. \section{Quantum Histories: A Novel Basis Set} We introduce basis states in the Hilbert space $|\,\tau\,\rangle$ labeled by a real variable $\tau$ that we will call \textit{system time}. For a quantum system described by the Hamiltonian operator $\widehat{H}$, these states are defined by the requirement that $\widehat{H}$ be the generator of translations among them. In particular, for any system time $\tau$ and every real number $\alpha$\ we require \begin{equation} |\,\tau+\alpha\rangle=\exp\left( -i\widehat{H}\alpha\right) |\,\tau \,\rangle\label{basis property} \end{equation} Since $\widehat{H}$ is assumed to be Hermitian, the transformation from $|\,\tau\,\rangle$ to $|\,\tau+\alpha\rangle$ will be unitary (and therefore norm-preserving). Eq.(\ref{basis property}) shows that if $|\,\tau\,\rangle$ is a member of this set then so too is $|\,\tau+\alpha\,\rangle$, implying that system time $\tau$ extends continuously from the remote `past' ($\tau=-\infty$) to the distant `future' ($\tau=+\infty$). We refer to the set $\left\{ |\,\tau\,\rangle:-\infty\leq\tau\leq\infty\right\} $ as a \textit{timeline} or \textit{quantum history}.
Eq.(\ref{basis property}) is reminiscent of the propagation of quantum states, which evolve according to $|\,\psi(t)\,\rangle=\exp\left( -i\widehat{H} t\right) |\,\psi(0)\,\rangle$. It follows that the dynamical wave function in the time basis, $\langle\,\tau\,|\,\psi(t)\,\rangle$, obeys \begin{equation} \langle\,\tau\,|\,\psi(t)\,\rangle\,=\langle\,\tau-t\,|\,\psi(0)\,\rangle, \label{covariance} \end{equation} a property known in the quantum-time literature as \textit{covariance}. To appreciate its significance, recall that $\left\vert \langle\,\tau \,|\,\psi(t)\,\rangle\right\vert ^{2}$\ is essentially the probability that a given [system] time $\tau$ will be associated with some measurement after a [laboratory] time $t$ has passed; covariance ensures that the same probability will be obtained for the initial state at the earlier [system] time $\tau-t$. This is time-translation invariance, widely recognized as an essential feature that must be reproduced by any statistical distribution of time observables \cite{Kijowski}. We advance the conjecture that \textit{a quantum history exists for every system, with elements (the time states) sufficiently numerous to span the Hilbert space of physical states}. The completeness of this basis is expressed in the abstract by the following resolution of the identity: \begin{equation} 1= {\displaystyle\int_{-\infty}^{\infty}} |\,\tau\,\rangle\langle\,\tau\,|\,d\tau\label{closure rule} \end{equation} Specifically, we insist that for any two normalizable states $|\,\psi \,\rangle$\ and $|\,\varphi\,\rangle$, we must be able to write \begin{equation} \langle\varphi\,|\,\psi\,\rangle= {\displaystyle\int\limits_{-\infty}^{\infty}} \,\langle\varphi\,|\,\tau\,\rangle\langle\,\tau\,|\,\psi\,\rangle \,d\tau\label{closure-L2} \end{equation} The time states also are orthogonal, at least in a `weak' sense consistent with closure. 
More precisely, if $|\,\varphi\,\rangle$ in Eq.(\ref{closure-L2} ) can be replaced by a time state, we get \begin{equation} \langle\,\tau\,|\,\psi\,\rangle= {\displaystyle\int_{-\infty}^{\infty}} \langle\,\tau\,|\,\tau^{\prime}\,\rangle\langle\,\tau^{\prime}\,|\,\psi \,\rangle\,d\tau^{\prime} \label{weak-orthogonality} \end{equation} Eq.(\ref{weak-orthogonality}) expresses quantitatively the notion of `weak' orthogonality; it differs from standard usage (`strong' orthogonality) by uniquely specifying the domain of integration and restricting $|\,\psi \,\rangle$ to be a normalizable state. But owing to the continuous nature of time, the time states $|\,\tau\rangle$ are \textit{not} normalizable, so Eq.(\ref{closure-L2}) can be satisfied even when Eq.(\ref{weak-orthogonality}) is not. It follows that `weak' orthogonality is a stronger condition than closure, and subject to separate verification. Quantum histories are intimately related to spectral structure, and derivable from it. Indeed, we regard as axiomatic the premise that the eigenstates of $\widehat{H}$, say $|\,E\,\rangle$, also span the space of physically realizable states to form the \textit{spectral basis} $\left\{ |\,E\,\rangle :\forall E\right\} $. Since $\widehat{H}$ is by assumption Hermitian, the elements of the spectral basis can always be made mutually orthogonal (in the `strong' sense), and normalized so as to satisfy a closure rule akin to Eq.(\ref{closure rule}). Now Eq.(\ref{basis property}) dictates that the timeline--spectral transformation is characterized by functions $\langle \,\tau\,|\,E\,\rangle$ such that $\langle\,\tau+\alpha\,|\,E\,\rangle =\exp\left( iE\alpha\right) \langle\,\tau\,|\,E\,\rangle$; equivalently (with $\alpha\rightarrow\alpha-\tau$), \begin{equation} \exp\left( -iE\tau\right) \langle\,\tau\,|\,E\,\rangle=c_{E} \label{timeline-spectral link} \end{equation} This form is imposed by covariance. 
The constant on the right, while unspecified, is manifestly independent of $\tau$. This enables us to write \begin{equation} \langle\,\tau\,|\,\psi\,\rangle\equiv\sum\limits_{E}c_{E}\exp\left( iE\tau\right) \langle\,E\,|\,\psi\,\rangle\, \label{history-general} \end{equation} The sum in this equation is symbolic, translating into an ordinary sum over any discrete levels together with an integral over the continuum. \textit{A timeline exists if for every physical state }$|\,\psi\,\rangle$\textit{ the transform of Eq.(\ref{history-general}) maps the set }$\left\{ \langle \,E\,|\,\psi\,\rangle:\forall E\right\} $\textit{ into a square-integrable function }$\langle\,\tau\,|\,\psi\,\rangle$ on $-\infty<\tau<\infty$. The remainder of this section is devoted to confirming the existence of these timelines and verifying their essential properties (covariance, completeness, `weak' orthogonality) for a wide variety of spectra. More generally, we contend that timelines \textit{always} can be constructed with the properties expressed by Eqs.(\ref{basis property}) and (\ref{closure rule}) above. In this sense the Hamiltonian $\widehat{H}$ can be said to define its own time -- an intrinsic [system] time $\tau$ -- that is quite distinct from the laboratory time $t$ appearing in the Schr\"{o}dinger equation. \subsection{Accessible States Model: Non-degenerate Spectra with a Recurrence Time} We consider here spectra consisting exclusively of discrete, non-degenerate levels. The spacing between adjacent levels might be arbitrarily small (approximating a continuum), but all level separations are assumed to be non-zero. The levels are therefore countable; we ascribe to them energies $E_{j}:j=1,2,\ldots$ ordered by increasing energy, and label as $|\,E_{j} \,\rangle$ the state with energy $E_{j}$. Consistent with a Hamiltonian operator that is Hermitian, we will take these states to be mutually orthogonal and normalized (to unity). 
Generally, there will be an infinite number of such states that together span the Hilbert space of physical states. In the \textit{accessible states model} the first $N$ of these states are deemed sufficient to represent a given physical system subject to the available interactions. By increasing their number, we eventually include all states that are accessible from one another by some series of physical interactions. In this way we are left at every stage with a \textit{finite} [$N$-dimensional] Hilbert space spanned by a spectral basis whose elements are orthonormal. Admittedly, $N$ can be very large and is somewhat ill-defined; in consequence, results -- to be useful -- will have to be reported in a way that makes clear how we transition to the $N\rightarrow\infty$ limit. Lastly, for every $N$ we assume the existence of a \textit{recurrence time}, a reference to the smallest duration over which an arbitrary initial state regains its initial form. Recurrence has implications for spectral structure, without being unduly restrictive. Indeed, we believe the model just outlined can serve as a basic template for any realistic spectrum. Our approach is essentially that taken by Pegg \cite{Pegg} in his search for an operator conjugate to the Hamiltonian of periodic systems. What follows amounts to a restatement of those basic results and their extension to aperiodic systems, where the recurrence time becomes arbitrarily large. 
In keeping with Eq.(\ref{timeline-spectral link}), the spectral-time transformation in the accessible states model is specified by the functions \[ \langle\,\tau\,|\,E_{j}\,\rangle\equiv c_{j}\exp\left( iE_{j}\tau\right) \qquad j=1,2,\ldots,N \] The coefficients $c_{j}$\ are chosen to ensure closure of the resulting time states; in turn, this requires for all $k\neq j$ \[ 0= {\displaystyle\int} \langle\,E_{k}\,|\,\tau\,\rangle\langle\,\tau\,|\,E_{j}\,\rangle\,d\tau =c_{k}^{\ast}c_{j} {\displaystyle\int} \exp\left( iE_{j}\tau-iE_{k}\tau\right) \,d\tau, \] i.e., the functions $\left\{ \exp\left( iE_{j}\tau\right) :\forall j\right\} $ must constitute an orthogonal set over the domain of the time label $\tau$. This domain derives from the recurrence time (also known as the \textit{revival time}) denoted here by $\tau_{rev}$, a reference to the smallest duration over which an arbitrary initial state $|\,\psi\,\rangle$ regains its initial form, up to an overall [physically insignificant] phase. Since state evolution is governed by the system Hamiltonian, recurrence has consequences for the energy spectrum: specifically, a revival time implies the existence of a smallest integer $n_{j}$ for every energy level $E_{j}$ such that \begin{equation} E_{j}\tau_{rev}=2\pi n_{j}+\theta\label{revivals} \end{equation} To take a familiar example, let's assume the level distribution is uniform with spacing $\Delta\epsilon$, so that $E_{j}=E_{0}+j\,\Delta\epsilon,\quad j=0,1,2,\ldots$ (uniform level spacing is the hallmark of the harmonic oscillator). For this case $n_{j}=j$, $\theta=E_{0}\tau_{rev}$, and the recurrence time is just $\tau_{rev}=2\pi/\Delta\epsilon$. Another well-known example is the infinite square well, for which $E_{j}=j^{2}\,E_{1},\quad j=1,2,\ldots$ and $E_{1}$ is the energy of the ground state. In this instance we have $n_{j}=j^{2}$, $\theta=0$, and a recurrence time $\tau_{rev} =2\pi/E_{1}$. 
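The revival condition of Eq.(\ref{revivals}) is easy to probe numerically. The following Python sketch (illustrative only, and not part of the formal development; the level spacings, offsets, and the random state are arbitrary choices) confirms that an arbitrary superposition returns to itself, up to a global phase, after one recurrence time, both for the uniformly spaced spectrum and for the infinite square well:

```python
import numpy as np

def revival_overlap(energies, tau_rev, coeffs):
    """|<psi(0)|psi(tau_rev)>| for a state evolving under a discrete spectrum."""
    phases = np.exp(-1j * energies * tau_rev)
    return abs(np.vdot(coeffs, phases * coeffs))

rng = np.random.default_rng(0)
j = np.arange(40)
c = rng.normal(size=40) + 1j * rng.normal(size=40)
c /= np.linalg.norm(c)                      # random normalized superposition

# Uniformly spaced levels E_j = E0 + j*de (harmonic-oscillator-like): tau_rev = 2*pi/de
de, E0 = 0.37, 1.1
E_osc = E0 + j * de
print(revival_overlap(E_osc, 2 * np.pi / de, c))    # ~ 1.0: full revival

# Infinite square well E_j = j^2 * E1 (j = 1, 2, ...): tau_rev = 2*pi/E1
E1 = 0.5
E_well = (j + 1) ** 2 * E1
print(revival_overlap(E_well, 2 * np.pi / E1, c))   # ~ 1.0
```

The global phase $\theta$ of Eq.(\ref{revivals}) drops out of the absolute value, so the overlap equals unity in both cases.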
One consequence of Eq.(\ref{revivals}) is that the energy spectrum is commensurate, meaning that the ratio of any two levels -- after adjusting for a possible global offset -- is a rational number. That commensurability is a necessary but not sufficient condition for recurrence is illustrated by the discrete spectrum of hydrogen: $E_{j}=-\,E_{1}/j^{2},\quad j=1,2,\ldots$ with $-E_{1}$ the ground state energy. Here commensurability is evident (with zero offset), but there is no [finite] revival time unless the spectrum is truncated, say at $j=N$. This recurrence time clearly grows with increasing $N$ (the largest period is $2\pi/\left\vert E_{N}\right\vert $), and becomes infinite when all levels are included. With a revival time we should be able to limit the domain of $\tau$ to a single recurrence cycle; indeed, a straightforward calculation with the help of Eq.(\ref{revivals}) reveals that the functions $\left\{ \exp\left( iE_{j}\tau\right) :j=0,1,2,\ldots\right\} $ are in fact orthogonal over any one cycle, thereby rendering the spectral--time transformation essentially a Fourier series. In fact, with the understanding that $\langle\,E_{n} \,|\,\psi\,\rangle$ is zero unless $n$ is a member of the set $\left\{ n_{j}\right\} $ defined by Eq.(\ref{revivals}), Eq.(\ref{history-general}) can be recast as \[ \langle\,\tau\,|\,\psi\,\rangle=\exp\left( i\theta\tau/\tau_{rev}\right) {\displaystyle\sum_{n}} c_{n}\exp\left( i2\pi n\tau/\tau_{rev}\right) \,\langle\,E_{n} \,|\,\psi\,\rangle \] But for the multiplicative phase factor, this \textit{is} a Fourier series for functions that are periodic with period $\tau_{rev}$. At this point we could simply invoke a general result of Fourier analysis\ to argue that the above series converges [in the norm] to define a viable timeline function $\langle\,\tau\,|\,\psi\,\rangle$ for any normalizable state $|\,\psi\,\rangle$. However, it is far more illuminating to investigate in some detail how this actually comes about. 
To that end, we begin with an important observation reinforced by our earlier example of the discrete hydrogen spectrum: \textit{for a system with a finite number of energy levels -- no matter how large -- a revival time always exists in practice}. To see this, consider the $N-1$ `gaps' $\Delta E_{j}\equiv E_{j+1}-E_{j}$ separating adjacent discrete levels, and denote the smallest of these by $\Delta E_{\min }$. If the gap ratio $\Delta E_{j}/\Delta E_{\min}$ is a rational number, say $p_{j}/q_{j}$, then a revival time exists and is given by $\tau_{rev} =2\pi\cdot\left( \Pi_{j}q_{j}\right) /\Delta E_{\min}$. Since any irrational number can be approximated to arbitrary precision by a rational one, a revival time `almost' always exists (though it may far exceed the natural period $2\pi/\left\vert E_{j}\right\vert $ associated with any one energy). With $N$ orthogonal stationary states there can be no more than an identical number of linearly independent time states, i.e., the time basis $\left\{ |\,\tau\,\rangle:\tau_{0}\leq\tau<\tau_{0}+\tau_{rev}\right\} $ must be vastly \textit{overcomplete}. This suggests that closure might be achieved with just $N$ discrete time states, properly chosen. To verify this, we select from the time domain $\left[ \tau_{0},\tau_{0}+\tau_{rev}\right] $ a uniform mesh of $N$ points $\left\{ \tau_{p}:p=1,2,\ldots N\right\} $ and evaluate \[ {\displaystyle\sum_{p=1}^{N}} \langle E_{k}\,|\,\tau_{p}\,\rangle\langle\,\tau_{p}\,|\,E_{j}\,\rangle =c_{k}^{\ast}c_{j} {\displaystyle\sum_{p=1}^{N}} \exp\left[ i\left( E_{j}-E_{k}\right) p\Delta\tau\right] \] Each term in the sum on the right can be represented by a unit-amplitude phasor. The phasor sum is obtained by adding successive phasors tail-to-tip, resulting in an $N$-sided regular convex polygon with each exterior angle (angle between successive phasors) equal to $\left( E_{j}-E_{k}\right) \Delta\tau$. 
But since $N\Delta\tau=\tau_{rev}$, the cumulative exterior angle is always an integer multiple of $2\pi$ ($\left( E_{j}-E_{k}\right) \tau_{rev}=2\pi\left( n_{j}-n_{k}\right) $ from Eq.(\ref{revivals})), implying that said polygon invariably is \textit{closed}. Thus the sum vanishes for all $E_{k}\neq E_{j}$. (For $E_{k}=E_{j}$ the value is simply $N$, the number of terms in the sum.) Since $|\,E_{j}\,\rangle$ and $|\,E_{k}\,\rangle$ refer to any pair of stationary states (and these are complete by hypothesis), the preceding results imply the operator equivalence \[ \frac{1}{N} {\displaystyle\sum_{p=1}^{N}} \,|\,\tau_{p}\,\rangle\langle\,\tau_{p}\,|\,= {\displaystyle\sum_{j=1}^{N}} \,\left\vert c_{j}\right\vert ^{2}|\,E_{j}\,\rangle\langle\,E_{j}\,| \] It follows that these $N$ consecutive time states by themselves span this $N$-dimensional space provided $\left\vert c_{j}\right\vert >0$ for every $j$ (the only vector orthogonal to all is the null element). Closure equivalence with the spectral basis further requires all coefficients $c_{j}$ to have unit magnitude; since any phase can be absorbed into the stationary state $|\,E_{j}\,\rangle$, we will simply take $c_{j}=1$. The time basis $\left\{ |\,\tau_{p}\,\rangle:p=1,2,\ldots N\right\} $ has discrete elements, yet $\tau$ supposedly is a continuous variable. We resolve this apparent contradiction by noting that the argument leading to this discrete time basis is unaffected if $N$ is scaled by any integer and $\Delta\tau$ is reduced by the same factor (leaving $N\Delta\tau=\tau_{rev}$ unchanged). Thus, every set of time states \begin{equation} \left\{ |\,\tau_{p}^{\left( n\right) }\,\rangle:p=1,2,\ldots nN:\Delta \tau^{(n)}=\tau_{rev}/nN,\;n=1,2,\ldots\right\} \label{basis-discrete} \end{equation} is complete in this $N$-dimensional space, but all except the primitive one ($n=1$) are \textit{over}complete. 
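The closure of the phasor polygons can be checked directly. A short numerical sketch (assuming the uniformly spaced spectrum $E_{j}=E_{0}+j\,\Delta\epsilon$ of our earlier example; all numerical values are arbitrary) verifies that $\sum_{p}\langle E_{k}\,|\,\tau_{p}\,\rangle\langle\,\tau_{p}\,|\,E_{j}\rangle=N\delta_{jk}$ over the uniform mesh of $N$ time points:

```python
import numpy as np

# Uniformly spaced spectrum E_j = E0 + j*de, j = 0..N-1, with tau_rev = 2*pi/de
N, de, E0 = 12, 0.73, 0.4
E = E0 + np.arange(N) * de
tau_rev = 2 * np.pi / de
tau_p = np.arange(1, N + 1) * (tau_rev / N)   # uniform mesh of N time points

# M[p, j] = <tau_p|E_j> = exp(i E_j tau_p), taking c_j = 1
M = np.exp(1j * np.outer(tau_p, E))

# S[k, j] = sum_p <E_k|tau_p><tau_p|E_j>: each closed polygon sums to zero for k != j
S = M.conj().T @ M
print(np.allclose(S, N * np.eye(N)))          # -> True
```

The off-diagonal sums vanish to machine precision, while each diagonal entry equals $N$, as the polygon argument predicts.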
Nonetheless, this observation allows us to write \begin{equation} {\displaystyle\sum_{j=1}^{N}} \,|\,E_{j}\,\rangle\langle\,E_{j}\,|\,=\frac{1}{\tau_{rev}} {\displaystyle\sum_{\tau=\tau_{0}}^{\tau_{0}+\tau_{rev}}} \,\,|\,\tau_{p}^{(n)}\,\rangle\langle\,\tau_{p}^{(n)}\,|\,\Delta\tau ^{(n)}\longrightarrow\frac{1}{\tau_{rev}}\int_{\tau_{0}}^{\tau_{0}+\tau_{rev} }|\,\tau\,\rangle\langle\,\tau\,|\,d\tau\label{closure-discrete} \end{equation} (Passage to the continuum limit presumes that $\tau$ has no granularity, even on the smallest scale.) Scaling the time states by $\sqrt{1/\tau_{rev}}$ then results in an indenumerable time basis that enjoys closure equivalence with the spectral basis in this $N$-dimensional space. Explicitly, the squared norm of any element $|\,\psi\,\rangle$ can be expressed as \begin{equation} \langle\,\psi\,|\,\psi\,\rangle= {\displaystyle\sum_{j=1}^{N}} \,\left\vert \langle\,E_{j}\,|\,\psi\,\rangle\right\vert ^{2}=\int_{\tau_{0} }^{\tau_{0}+\tau_{rev}}\left\vert \langle\,\tau\,|\,\psi\,\rangle\right\vert ^{2}\,d\tau\label{closure equivalence} \end{equation} with \begin{equation} \langle\,\tau\,|\,\psi\,\rangle=\frac{1}{\sqrt{\tau_{rev}}} {\displaystyle\sum_{j=1}^{N}} \exp\left( iE_{j}\tau\right) \,\langle\,E_{j}\,|\,\psi\,\rangle \label{history-discrete} \end{equation} While it could be argued that $N$ always has an upper bound in practice, it is typically quite large and known only imprecisely. Convergence of these results as $N\rightarrow\infty$ therefore is crucial to a viable theory of timelines. Now for every $N$ no matter how large, Eq.(\ref{closure-discrete}) establishes closure equivalence of the time basis with the spectral basis, and leads directly to the \textit{Plancherel identity} \cite{Olver} expressed by Eq.(\ref{closure equivalence}). 
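Closure equivalence, Eq.(\ref{closure equivalence}), can likewise be verified numerically. The sketch below (uniform spectrum and a random state; the parameter values are arbitrary) builds $\langle\,\tau\,|\,\psi\,\rangle$ from Eq.(\ref{history-discrete}) and recovers the squared norm by quadrature over one recurrence cycle:

```python
import numpy as np

rng = np.random.default_rng(1)
N, de, E0 = 12, 0.73, 0.4
E = E0 + np.arange(N) * de                   # uniformly spaced spectrum
tau_rev = 2 * np.pi / de

c = rng.normal(size=N) + 1j * rng.normal(size=N)
c /= np.linalg.norm(c)                       # <psi|psi> = 1 in the spectral basis

K = 4096                                     # quadrature mesh over one cycle
tau = np.arange(K) * tau_rev / K
# Timeline wave function <tau|psi> = (1/sqrt(tau_rev)) * sum_j exp(i E_j tau) <E_j|psi>
psi_tau = np.exp(1j * np.outer(tau, E)) @ c / np.sqrt(tau_rev)

norm_tau = np.sum(np.abs(psi_tau) ** 2) * tau_rev / K
print(norm_tau)                              # ~ 1.0: the Plancherel identity
```

Because the integrand is a trigonometric polynomial sampled on a sufficiently fine uniform mesh, the quadrature reproduces the spectral norm to machine precision.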
But by definition, the norm of a normalizable state remains finite even in the limit $N\rightarrow\infty$, so the Plancherel identity ensures the existence of the square-integrable function $\langle\,\tau\,|\,\psi\,\rangle$ on $\left[ \tau_{0},\tau_{0}+\tau _{rev}\right] $ in this same limit. We conclude that Eq.(\ref{history-discrete}) maps the spectral components $\langle \,E_{j}\,|\,\psi\,\rangle$ into square-integrable functions $\langle \,\tau\,|\,\psi\,\rangle$ in the time basis, as required for a timeline.\ This establishes convergence [in the norm] for the timeline wave function $\langle\,\tau\,|\,\psi\,\rangle$. Our final task is to write the transformation law of Eq.(\ref{history-discrete}) in a form that is useful for calculation in the large $N$ limit. The way we proceed depends on how the recurrence time varies with $N$. As more states are included in the model, all levels might remain isolated no matter how numerous they become ($\tau_{rev}$ saturates at a finite value); alternatively, some levels may cluster to form a quasi-continuum ($\tau_{rev}\rightarrow\infty$), merging into a true continuum as $N\rightarrow\infty$. 
Anticipating a mix of the two, we write the quasi-continuum contributions to $\langle\,\tau\,|\,\psi\,\rangle$ in terms of the characteristic energy $\Delta E\equiv2\pi/\tau_{rev}$ associated with the recurrence time $\tau_{rev}$: \[ \frac{1}{\sqrt{\tau_{rev}}} {\displaystyle\sum_{\text{continuum}}} \left( \ldots\right) =\frac{1}{\sqrt{2\pi}} {\displaystyle\sum_{\text{continuum}}} \exp\left( iE_{j}\tau\right) \,\left[ \frac{\langle\,E_{j}\,|\,\psi \,\rangle}{\sqrt{\Delta E}}\right] \Delta E \] The significance of the bracketed term can be appreciated by comparing discrete and continuum contributions to the squared norm of the state $|\,\psi\,\rangle$: \[ \langle\,\psi\,|\,\psi\,\rangle= {\displaystyle\sum_{\text{discrete}}} \,\left\vert \langle\,E_{j}\,|\,\psi\,\rangle\right\vert ^{2}+ {\displaystyle\sum_{\text{continuum}}} \,\left\vert \frac{\langle\,E_{j}\,|\,\psi\,\rangle}{\sqrt{\Delta E} }\right\vert ^{2}\Delta E \] The replacement \[ \frac{\langle\,E_{j}\,|\,\psi\,\rangle}{\sqrt{\Delta E}}\rightarrow \langle\,E_{j}\,|\,\psi\,\rangle \] amounts to a renormalization of the quasi-continuum wave function -- known as \textit{energy normalization} \cite{Merzbacher} -- such that the [energy-] integrated density of the new function carries the same weight as does an isolated state. Replacing sums over the quasi-continuum with integrals becomes exact in the large $N$ limit (with the inclusion of additional levels, $\tau_{rev}\rightarrow\infty$ and $\Delta E\rightarrow0$). 
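The effect of energy normalization can be illustrated with a toy quasi-continuum. In the sketch below (a hypothetical Gaussian spectral amplitude, chosen purely for illustration), the energy-normalized sum over quasi-continuum levels converges to the corresponding continuum integral as the level spacing $\Delta E$ shrinks:

```python
import numpy as np

# Hypothetical energy-normalized spectral amplitude <E|psi>: a normalized Gaussian
def g_amp(E):
    return (2 / np.pi) ** 0.25 * np.exp(-(E - 5.0) ** 2)

# Quasi-continuum sums sum_j |<E_j|psi>|^2 * dE approach the integral (= 1) as dE -> 0
for dE in (0.5, 0.1, 0.02):
    E = np.arange(0.0, 10.0, dE)
    print(dE, np.sum(np.abs(g_amp(E)) ** 2) * dE)   # each ~ 1.0
```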
In this way, we arrive at a form of the transformation law that lends itself to computation as $N\rightarrow\infty$: \begin{equation} \langle\,\tau\,|\,\psi\,\rangle=\frac{1}{\sqrt{\tau_{rev}}} {\displaystyle\sum_{\text{discrete}}} \exp\left( iE_{j}\tau\right) \,\langle\,E_{j}\,|\,\psi\,\rangle+\frac {1}{\sqrt{2\pi}}\int_{E_{\min}}^{E_{\max}}\exp\left( iE\tau\right) \langle\,E\,|\,\psi\,\rangle\,dE \label{history-mixed} \end{equation} Eq.(\ref{history-mixed})\ assumes that all discrete stationary states are normalized to unity ($\langle\,E_{j}\,|\,E_{k}\,\rangle=\delta_{jk}$), and all [quasi-]continuum elements are energy-normalized ($\langle\,E\,|\,E^{\prime }\,\rangle=\delta\left( E\,-\,E^{\prime}\right) $). Notice that the continuum contribution references $N$ only indirectly through the spectral bounds $E_{\min}$, $E_{\max}$ \cite{eMax}. Furthermore, whenever a continuum is present (aperiodic systems, for which $\tau_{rev}\rightarrow\infty$), it makes the dominant contribution to $\langle\,\tau\,|\,\psi\,\rangle$ for any fixed value of $\tau$. To be fair, there is one limitation: $\tau$ cannot be so large that $\exp\left( iE\tau\right) $ varies appreciably over the step size $\Delta E=2\pi/\tau_{rev}$; this means the integral approximation to the quasi-continuum fails for $\tau\simeq\tau_{rev}$. In a similar vein, the discrete terms can never simply be dropped from Eq.(\ref{history-mixed}), as they make the largest contribution to $\langle\,\tau\,|\,\psi\,\rangle$ in the asymptotic regime. Although complete, the time basis typically includes non-orthogonal elements. In the absence of a continuum, and with uniform level spacing for the discrete terms, it is not difficult to show that the minimal time basis is composed of $N$ \textit{mutually orthogonal} states. But otherwise the existence of even one orthogonal pair is not guaranteed. By contrast, `weak' orthogonality is the rule in this $N$-dimensional space. 
This can be verified rigorously by using closure of the [discrete] time basis in Eq.(\ref{basis-discrete}) to write \[ \langle\,\tau_{q}^{(n)}\,|\,\psi\,\rangle=\frac{1}{\tau_{rev}} {\displaystyle\sum_{\tau=\tau_{0}}^{\tau_{0}+\tau_{rev}}} \,\,\langle\,\tau_{q}^{(n)}\,|\,\tau_{p}^{(n)}\,\rangle\langle\,\tau_{p} ^{(n)}\,|\,\psi\,\rangle\,\Delta\tau^{(n)} \] Scaling the time states by $\sqrt{1/\tau_{rev}}$ and passing to the continuum limit then gives \begin{equation} \langle\,\tau\,|\,\psi\,\rangle=\int_{\tau_{0}}^{\tau_{0}+\tau_{rev}} \langle\,\tau\,|\,\tau^{\prime}\,\rangle\langle\,\tau^{\prime}\,|\,\psi \,\rangle\,d\tau^{\prime}, \label{weak-ortho-discrete} \end{equation} with $\langle\,\tau\,|\,\psi\,\rangle$\ again calculated from Eq.(\ref{history-mixed}). Since $N$ is not referenced explicitly here, we conclude from Eq.(\ref{weak-ortho-discrete}) that `weak' orthogonality persists in the limit as $N\rightarrow\infty$ at every value $\tau$ where the timeline wave function $\langle\,\tau\,|\,\psi\,\rangle$ converges. \subsection{The Treatment of Degeneracy} Degenerate states require labels in addition to the energy to distinguish them. These extra labels derive from the underlying symmetry that is the root of all degeneracy. Thus, a central potential gives rise to a rotationally-invariant Hamiltonian and this, in turn, implies that the Hamiltonian operator commutes with angular momentum (the generator of rotations). In such cases, the energy label is supplemented with orbital and magnetic quantum numbers specifying the particle angular momentum. The larger point is simply this: with the additional labels comes again an unambiguous identification of the spectral states, and we write $|\,E_{j}\,\rangle \rightarrow|\,E_{j}\,,\sigma\,\rangle$, where $\sigma$ is a collective label that symbolizes \textit{all} additional quantum numbers needed to identify the spectral basis element with a given energy. 
With this simple modification, the arguments of the preceding section remain intact. Quantum histories can be constructed as before, but now are indexed by the same `good' quantum numbers that characterize the spectrum. That is, in the face of degeneracy we have not one -- but multiple -- timelines, and we write $|\,\tau\,\rangle \rightarrow|\,\tau,\sigma\,\rangle$. Notice that any two time states belonging to \textit{distinct} timelines will be orthogonal (in the `strong' sense) \begin{equation} \langle\,\tau,\sigma\,|\,\tau^{\prime},\sigma^{\prime}\,\rangle\propto \delta_{\sigma,\sigma^{\prime}}, \end{equation} and -- of course -- \textit{all} timelines must be included to span the entire Hilbert space, so that Eq.(\ref{closure rule}) becomes \begin{equation} 1=\sum_{\sigma} {\displaystyle\int_{-\infty}^{\infty}} |\,\tau,\sigma\,\rangle\langle\tau,\sigma\,|\,d\tau \end{equation} Oftentimes there is more than one way to select good quantum numbers. Following along with our earlier example, states having different magnetic quantum numbers may be combined to describe orbitals with highly directional characteristics that are important in chemical bonding. The crucial point here is that the new states are related to the old by a \textit{unitary} transformation (a `rotation') in the subspace spanned by the degenerate states. In turn, the `rotated' stationary states give rise to new time states, related to the old as \begin{equation} |\,\tau^{\left( r\right) }\,\rangle=\sum_{\sigma}U_{r\sigma}|\,\tau ,\sigma\,\rangle, \label{subspace-rotation} \end{equation} where $U_{r\sigma}$ are elements of the \textit{same} unitary matrix that characterizes the subspace `rotation' (cf. Eq.(\ref{history-general})). 
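The `rotation' of Eq.(\ref{subspace-rotation}) is easily illustrated numerically. The Python sketch below (arbitrary dimensions, with a random unitary generated by QR factorization; illustrative only) builds rotated time states and confirms that they span the same degenerate subspace, i.e., the projector onto the subspace is unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)
d, g = 6, 3   # Hilbert-space dimension, degeneracy of the subspace

# Columns of V: g orthonormal states |tau,sigma> spanning a degenerate subspace
V, _ = np.linalg.qr(rng.normal(size=(d, g)) + 1j * rng.normal(size=(d, g)))

# Random unitary 'rotation' U among the degenerate labels
U, _ = np.linalg.qr(rng.normal(size=(g, g)) + 1j * rng.normal(size=(g, g)))

# |tau^(r)> = sum_sigma U_{r sigma} |tau,sigma>  ->  columns of W
W = V @ U.T

P_old = V @ V.conj().T
P_new = W @ W.conj().T
print(np.allclose(P_old, P_new))   # -> True: same subspace in either labeling
```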
Furthermore, unitarity ensures that the projector onto every degenerate subspace is representation-independent: \begin{align} \sum_{r}|\,\tau^{\left( r\right) }\,\rangle\langle\,\tau^{(r)}\,|\, & =\sum_{\sigma,\sigma^{\prime}}\left( \sum_{r}U_{r\sigma}U_{r\sigma^{\prime} }^{\ast}\right) |\,\tau,\sigma\,\rangle\langle\,\tau,\sigma^{\prime }\,|\nonumber\\ \, & =\sum_{\sigma,\sigma^{\prime}}\left( U^{\dag}U\right) _{\sigma ^{\prime}\sigma}|\,\tau,\sigma\,\rangle\langle\,\tau,\sigma^{\prime} \,|\,=\sum_{\sigma}|\,\tau,\sigma\,\rangle\langle\,\tau,\sigma\,| \label{subspace-projector} \end{align} As we shall soon see, Eq.(\ref{subspace-projector}) has important ramifications for the statistics of [event] time observables whenever degeneracy is present. One final observation: degenerate or not, the timelines of Eq.(\ref{history-mixed}) are \textit{not} unique, inasmuch as they can be altered by an (energy-dependent) phase adjustment to the eigenstates of $\widehat{H}$. Consider the replacement $|\,E\,\rangle\rightarrow\exp\left( i\theta_{E}\right) |\,E\,\rangle$. If $\theta_{E}$ is proportional to $E$, say $\theta_{E}=\tau_{0}E$, then $|\,\tau\,\rangle\rightarrow|\,\tau-\tau _{0}\,\rangle$, i.e., a linear (with energy) phase adjustment to the stationary states shifts the origin of system time, underscoring the notion that only \textit{durations} in system time can have measurable consequences. Other, more complicated phase adjustment schemes can be contemplated, with the time states always related by a suitable unitary transformation. Some of these have clear physical significance, as later examples will show. \section{Interpreting Time States, and an Operator for Local Time} The time states of Sec. 
II can be used to formally construct a time operator; for non-degenerate spectra, \begin{equation} \widehat{T}= {\displaystyle\int\limits_{\tau_{0}}^{\tau_{0}+\tau_{rev}}} |\,\tau\,\rangle\tau\langle\,\tau\,|\,d\tau\label{time operator} \end{equation} (Degeneracy requires the replacement $|\,\tau\,\rangle\langle\,\tau \,|\,\rightarrow$\ $\sum_{\sigma}|\,\tau,\sigma\,\rangle\langle\,\tau ,\sigma\,|$ in this and subsequent expressions.) At a minimum, the existence of $\widehat{T}$ demands that matrix elements of Eq.(\ref{time operator}) taken between any two normalizable states $|\,\varphi\,\rangle$ and $|\,\psi\,\rangle$ be well-defined, i.e., $\left\vert \langle\varphi \,|\widehat{T\,}|\,\psi\,\rangle\right\vert <\infty$. Since the arguments of Sec. II show that the timeline functions $\langle\,\tau\,|\,\psi\,\rangle$ are square-integrable, this criterion clearly is met for all finite values of $\tau_{0}$ and $\tau_{rev}$. The issue assumes greater importance if $\tau_{rev}$ becomes arbitrarily large (aperiodic systems); we will return to this point later. Because time states are generally non-orthogonal and overcomplete, the significance of this time operator is not clear. Indeed, the states $|\,\tau\,\rangle$ cannot be eigenstates of the operator $\widehat{T}$, which is Hermitian (as evidenced by its matrix elements in the spectral basis). Nonetheless, the time basis constitutes a resolution of the identity, so the timeline wave function may admit a probability interpretation along conventional lines. For any state $|\,\psi\,\rangle$ (cf. 
Eq.(\ref{closure equivalence})) \[ \langle\psi\,|\,\psi\,\rangle= {\displaystyle\int\limits_{\tau_{0}}^{\tau_{0}+\tau_{rev}}} \left\vert \langle\,\tau\,|\,\psi\,\rangle\right\vert ^{2}\,d\tau, \] suggesting that $\left\vert \langle\,\tau\,|\,\psi\,\rangle\right\vert ^{2}d\tau$ -- besides being positive definite -- is additive for disjoint sets and sums to unity for any properly normalized state, all attributes of a bona-fide probability distribution. In turn, this raises the question: if $\left\vert \langle\,\tau\,|\,\psi\,\rangle\right\vert ^{2}\,d\tau$ is a probability, to what does this probability refer? The answer is found in the notion of probability-operator measures, which asserts that the states $|\,\tau\,\rangle$ provide a \textit{positive operator-valued measure (POVM)} for the system time $\tau$; specifically, $\left\vert \langle\,\tau \,|\,\psi\,\rangle\right\vert ^{2}\,d\tau$ represents the probability that a suitable measuring instrument will return a result for system time between $\tau$ and $\tau+d\tau$ \cite{Helstrom}. How one actually performs such a measurement is an interesting question in its own right, and will not be examined here (but see Ref.\cite{Hegerfeldt}). The utility of the time operator defined by Eq.(\ref{time operator}) is that its expectation value in any normalized state $|\,\psi\,\rangle$ furnishes the average system time, or the `first moment' of this POVM: \begin{equation} \tau_{avg}= {\displaystyle\int\limits_{\tau_{0}}^{\tau_{0}+\tau_{rev}}} \tau\left\vert \langle\,\tau\,|\,\psi\,\rangle\right\vert ^{2}\,d\tau =\langle\,\psi\,|\widehat{T}|\,\psi\,\rangle \end{equation} For any finite value of $\tau_{rev}$ the `second moment' of this POVM also exists, and implies that the function $\langle\,\tau\,|\,\varphi \,\rangle\equiv\tau\langle\,\tau\,|\,\psi\,\rangle$ is normalizable on $\left[ \tau_{0},\tau_{0}+\tau_{rev}\right] $. Applying `weak' orthogonality to $\langle\,\tau\,|\,\varphi\,\rangle$ gives (cf. 
Eq.(\ref{weak-ortho-discrete})) \begin{equation} \tau\langle\,\tau\,|\,\psi\,\rangle= {\displaystyle\int\limits_{\tau_{0}}^{\tau_{0}+\tau_{rev}}} \langle\,\tau\,|\,\tau^{\prime}\,\rangle\tau^{\prime}\langle\,\tau^{\prime }\,|\,\psi\,\rangle\,d\tau^{\prime}=\langle\,\tau\,|\widehat{T}|\,\psi \,\rangle\label{weak-eigenstate} \end{equation} Since Eq.(\ref{weak-eigenstate}) holds for every normalizable state $|\,\psi\,\rangle$, $|\,\tau\,\rangle$ is sometimes said to be a `weak' eigenstate of $\widehat{T}$ \cite{Giannitrapani}. Being a `weak' eigenstate has important consequences; for one, it allows us to write the `second moment' of this POVM as \[ {\displaystyle\int\limits_{\tau_{0}}^{\tau_{0}+\tau_{rev}}} \tau^{2}\left\vert \langle\,\tau\,|\,\psi\,\rangle\right\vert ^{2}\,d\tau= {\displaystyle\int\limits_{\tau_{0}}^{\tau_{0}+\tau_{rev}}} \langle\,\psi\,|\widehat{T}|\,\tau\,\rangle\langle\,\tau\,|\widehat{T} |\,\psi\,\rangle\,d\tau=\langle\,\psi\,|\widehat{T}^{2}|\,\psi\,\rangle \] In turn, the variance of the time distribution can be expressed in the form \begin{equation} \left( \Delta\tau\right) ^{2}\equiv {\displaystyle\int\limits_{\tau_{0}}^{\tau_{0}+\tau_{rev}}} \left( \tau-\tau_{avg}\right) ^{2}\left\vert \langle\,\tau\,|\,\psi \,\rangle\right\vert ^{2}\,d\tau=\langle\,\psi\,|\widehat{T}^{2} |\,\psi\,\rangle-\langle\,\psi\,|\widehat{T}|\,\psi\,\rangle^{2}, \end{equation} which leads in the usual way to an uncertainty principle for [system] time and energy: \begin{equation} \Delta\tau\cdot\Delta E\geq\frac{1}{2}\left\vert \langle\,\psi\,|\left[ \widehat{T},\widehat{H}\right] |\,\psi\,\rangle\right\vert \label{uncertainty product} \end{equation} For a \textit{canonical time operator}, $\left[ \widehat{T},\widehat{H} \right] =i$, and we recover the familiar uncertainty relation $\Delta\tau \cdot\Delta E\geq1/2$, but as will soon become clear, canonical time operators are not to be expected in periodic systems. 
On a related note, Eq.(\ref{uncertainty product}) does \textit{not} extend to aperiodic systems unless the variance $\left( \Delta\tau\right) ^{2}$ can be shown to exist in the aperiodic limit $\tau_{rev}\rightarrow\infty$. Eq.(\ref{time operator}) actually defines a family of related time operators $\widehat{T}(\tau_{0})$ whose members are parametrically dependent on the continuous variable $\tau_{0}$. The relationship between family members is readily demonstrated. On the one hand, we appeal to Eq.(\ref{basis property}) to write for any real number $\alpha$ \begin{align*} \widehat{T}\left( \tau_{0}+\alpha\right) & = {\displaystyle\int\limits_{\tau_{0}+\alpha}^{\tau_{0}+\alpha+\tau_{rev}}} |\,\tau\,\rangle\tau\langle\,\tau\,|\,d\tau= {\displaystyle\int\limits_{\tau_{0}}^{\tau_{0}+\tau_{rev}}} |\,\tau+\alpha\,\rangle\left( \tau+\alpha\right) \langle\,\tau +\alpha\,|\,d\tau\\ & =\exp\left( -i\widehat{H}\alpha\right) \left\{ \widehat{T}\left( \tau_{0}\right) +\alpha\widehat{I}\right\} \exp\left( i\widehat{H} \alpha\right) , \end{align*} where $\widehat{I}$ is the identity operation. 
But the same operator can be expressed in another way: \[ \widehat{T}\left( \tau_{0}+\alpha\right) =\left\{ {\displaystyle\int\limits_{\tau_{0}+\alpha}^{\tau_{0}}} + {\displaystyle\int\limits_{\tau_{0}}^{\tau_{0}+\tau_{rev}}} + {\displaystyle\int\limits_{\tau_{0}+\tau_{rev}}^{\tau_{0}+\tau_{rev}+\alpha}} \right\} \,|\,\tau\,\rangle\tau\langle\,\tau\,|\,d\tau \] Exploiting the periodicity of the time states, we write the last integral on the right as \[ {\displaystyle\int\limits_{\tau_{0}+\tau_{rev}}^{\tau_{0}+\tau_{rev}+\alpha}} |\,\tau\,\rangle\tau\langle\,\tau\,|\,d\tau= {\displaystyle\int\limits_{\tau_{0}}^{\tau_{0}+\alpha}} |\,\tau\,\rangle\left( \tau+\tau_{rev}\right) \langle\,\tau\,|\,d\tau, \] leaving \[ \widehat{T}\left( \tau_{0}+\alpha\right) =\widehat{T}\left( \tau _{0}\right) +\tau_{rev} {\displaystyle\int\limits_{\tau_{0}}^{\tau_{0}+\alpha}} |\,\tau\,\rangle\langle\,\tau\,|\,d\tau \] Equating the two alternative forms for $\widehat{T}\left( \tau_{0} +\alpha\right) $ gives \begin{equation} \exp\left( -i\widehat{H}\alpha\right) \widehat{T}\left( \tau_{0}\right) \exp\left( i\widehat{H}\alpha\right) +\alpha\widehat{I}=\widehat{T}\left( \tau_{0}\right) +\tau_{rev} {\displaystyle\int\limits_{\tau_{0}}^{\tau_{0}+\alpha}} |\,\tau\,\rangle\langle\,\tau\,|\,d\tau\label{time operator (Heisenberg)} \end{equation} Writing Eq.(\ref{time operator (Heisenberg)}) for infinitesimal $\alpha$ leads to the commutation relation between $\widehat{T}\left( \tau_{0}\right) $ and $\widehat{H}$: \begin{equation} \left[ \widehat{T}\left( \tau_{0}\right) ,\widehat{H}\right] =i\widehat{I}-i\tau_{rev}|\,\tau_{0}\,\rangle\langle\,\tau_{0}\,| \label{commutator-periodic} \end{equation} \textit{Thus, while the time operator of Eq.(\ref{time operator}) exists for any periodic system, it is never canonically conjugate to the Hamiltonian.} Although a bit unsettling, this negative conclusion is an inevitable consequence of periodicity, as has been argued persuasively by Pegg \cite{Pegg}. 
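For the uniformly spaced spectrum with $\tau_{0}=0$, the matrix elements of $\widehat{T}$ can be evaluated in closed form ($\langle E_{k}|\widehat{T}|E_{j}\rangle=\tau_{rev}/2$ on the diagonal and $1/i\left( E_{j}-E_{k}\right)$ off it), and Eq.(\ref{commutator-periodic}) can then be confirmed numerically. A sketch (illustrative parameter values; the closed-form elements are an elementary integral, not quoted in the text):

```python
import numpy as np

N, de = 8, 0.9
E = np.arange(N) * de                 # uniform spectrum, with tau_0 = 0
tau_rev = 2 * np.pi / de
H = np.diag(E).astype(complex)

# <E_k|T|E_j> = (1/tau_rev) * integral_0^{tau_rev} tau * exp(i(E_j - E_k) tau) dtau
T = np.empty((N, N), dtype=complex)
for k in range(N):
    for jj in range(N):
        T[k, jj] = tau_rev / 2 if jj == k else 1 / (1j * (E[jj] - E[k]))

comm = T @ H - H @ T
# i*I - i*tau_rev |tau_0><tau_0|: with <E_k|tau_0=0> = 1/sqrt(tau_rev), the
# projector term becomes the all-ones matrix
rhs = 1j * (np.eye(N) - np.ones((N, N)))
print(np.allclose(comm, rhs))         # -> True: the commutator is not canonical
```

Note that the deviation from canonical form is carried entirely by the rank-one projector onto $|\,\tau_{0}\,\rangle$, exactly as Eq.(\ref{commutator-periodic}) asserts.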
According to standard theory, the commutator of any operator with the Hamiltonian dictates the time dependence of the associated observable. Applied to the time operator $\widehat{T}$ and any normalized state $|\,\psi (t)\,\rangle$, this principle combined with Eq.(\ref{commutator-periodic}) gives \begin{align} \frac{d\tau_{avg}}{dt} & =-i\,\langle\,\psi(t)\,|\left[ \widehat{T}\left( \tau_{0}\right) ,\widehat{H}\right] |\,\psi(t)\,\rangle\nonumber\\ & =1-\tau_{rev}\left\vert \langle\,\tau_{0}\,|\,\psi(t)\,\rangle\right\vert ^{2}=1-\tau_{rev}\left\vert \langle\,\tau_{0}-t\,|\,\psi(0)\,\rangle \right\vert ^{2} \label{system-time change} \end{align} We see at once that average system time $\tau_{avg}$ faithfully tracks laboratory time $t$ so long as the overlap $\langle\,\tau_{0}-t\,|\,\psi (0)\,\rangle$ is negligible, but in periodic systems it is equally clear that this happy state of affairs cannot last throughout an entire recurrence cycle. Eq.(\ref{system-time change}) also implies that $\tau_{avg}$ is unchanging in any stationary state, as expected for a time that is related to some [event] observable. The discussion thus far leaves open the possibility that a canonical time operator might still exist in aperiodic systems ($\tau_{rev}\rightarrow\infty $), provided we choose $\tau_{0}$ judiciously. That hope is reinforced by the observation that $\left\vert \langle\,\tau\,|\,\psi\,\rangle\right\vert ^{2}$ vanishes in the asymptotic regime for any normalizable state $|\,\psi \,\rangle$ ($\langle\,\tau\,|\,\psi\,\rangle$ is square-integrable). Explicitly, if $\tau_{0}$ scales with $\tau_{rev}$ and $\langle\,\tau _{0}\,|\,\psi\,\rangle$ approaches zero `fast enough', the non-canonical term in Eq.(\ref{commutator-periodic}) will vanish in the aperiodic limit. 
Interestingly, the presence of even one truly discrete level in the spectrum of $\widehat{H}$ precludes this possibility, since then Eq.(\ref{history-mixed}) shows that $\tau_{rev}\left\vert \langle\,\tau _{0}\,|\,\psi\,\rangle\right\vert ^{2}$ is $O(1)$ as $\tau_{rev} \rightarrow\infty$, no matter how we choose $\tau_{0}$. But \textit{absent any isolated levels, the choice }$\tau_{0}\propto\tau_{rev}$ \textit{together with the square-integrability of }$\langle\,\tau\,|\,\psi\,\rangle$\textit{ is enough to ensure that the time operator of Eq.(\ref{time operator}) is canonically conjugate to the Hamiltonian in the aperiodic limit }$\tau_{rev} \rightarrow\infty$. Having said this, however, we must caution that there is no \textit{a priori} guarantee that the integral of Eq.(\ref{time operator}) actually exists in this limit! From an existence standpoint, the most forgiving choice for $\tau_{0}$ appears to be $\tau_{0}=-\tau_{rev}/2$, which leads to the definition of a time operator in aperiodic systems as a Cauchy principal value: \begin{equation} \widehat{T}_{aperiodic}\equiv P {\displaystyle\int\limits_{-\infty}^{\infty}} |\,\tau\,\rangle\tau\langle\,\tau\,|\,d\tau \label{Cauchy-T} \end{equation} The existence of $\widehat{T}_{aperiodic}$ hinges on the asymptotic behavior of the timeline function $\langle\,\tau\,|\,\psi\,\rangle$. Generally, we can expect asymptotic behavior consistent with the square-integrability of $\langle\,\tau\,|\,\psi\,\rangle$ on $\left( -\infty,\infty\right) $, but this alone is insufficient to secure the convergence of the integral in Eq.(\ref{Cauchy-T}). As is well-known, the asymptotics of the (Fourier) integral in Eq.(\ref{history-mixed}) are dictated by the properties of the spectral wave function $\langle\,E\,|\,\psi\,\rangle$ \cite{Erdelyi}. 
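The role of the continuum edge can be made concrete numerically. The following sketch (Python with mpmath; the exponential spectral amplitudes are hypothetical examples, not drawn from the text) compares a spectral amplitude that vanishes at the edge $E_{\min}=0$ with one that does not: only in the first case does $\tau\,\langle\,\tau\,|\,\psi\,\rangle$ decay as $\tau\rightarrow\infty$.

```python
# Illustration (hypothetical spectral amplitudes, hbar = 1): the decay of
# <tau|psi> = (2*pi)^(-1/2) * Int_0^inf e^{i E tau} <E|psi> dE depends on
# whether <E|psi> vanishes at the continuum edge E_min = 0.
from mpmath import mp, quad, exp, sqrt, pi, inf

mp.dps = 30

def timeline(spectral, tau):
    return quad(lambda E: exp(1j*E*tau)*spectral(E), [0, inf])/sqrt(2*pi)

edge_zero    = lambda E: E*exp(-E)   # vanishes at E = 0; transform 1/(1-i*tau)^2
edge_nonzero = lambda E: exp(-E)     # finite at E = 0;   transform 1/(1-i*tau)

# verify the closed forms at a moderate tau, then read off the decay rates
tau = 1.5
assert abs(timeline(edge_zero, tau)    - 1/(sqrt(2*pi)*(1 - 1j*tau)**2)) < 1e-10
assert abs(timeline(edge_nonzero, tau) - 1/(sqrt(2*pi)*(1 - 1j*tau)))    < 1e-10

# tau*<tau|psi>: o(1) in the edge-vanishing case, O(1) otherwise
big = mp.mpf(10)**6
assert abs(big/(sqrt(2*pi)*(1 - 1j*big)**2)) < 1e-5    # decays to zero
assert abs(big/(sqrt(2*pi)*(1 - 1j*big)))    > 0.39    # tends to (2*pi)^(-1/2)
```

Here the transforms have elementary closed forms, so the $o(\tau^{-1})$ versus $O(\tau^{-1})$ behavior can be read off exactly.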
We will assume that $\langle\,E\,|\,\psi\,\rangle$ is continuously differentiable throughout the continuum $E_{\min}\leq E\leq E_{\max}$, with a derivative that is integrable over $E_{\min}<E<E_{\max}$. \textit{If,} \textit{additionally, }$\langle\,E\,|\,\psi\,\rangle$\textit{ vanishes at the continuum edge(s), an integration by parts of Eq.(\ref{history-mixed}) shows that }$\langle \,\tau\,|\,\psi\,\rangle$\textit{ is }$o\left( \tau^{-1}\right) $\textit{, and the integral of Eq.(\ref{Cauchy-T}) converges to define a valid time operator for aperiodic systems.} As a corollary, we note that if the spectrum has no natural bounds ($E_{\min}\rightarrow-\infty$ \textit{and} $E_{\max }\rightarrow+\infty$), then the square-integrability of $\langle \,E\,|\,\psi\,\rangle$ by itself is enough to guarantee a valid time operator. Otherwise, the existence (or not) of $\widehat{T}_{aperiodic}$ involves more delicate questions of convergence having to do with the Cauchy principal value; such issues are best addressed in individual cases. Finally, we examine how relevant properties of the spectral wave function translate into the language of stationary state wave functions. To that end, we write $\langle\,E\,|\,\psi\,\rangle$ in a generic coordinate basis whose elements we denote simply as $|\,q\,\rangle$: \[ \langle\,E\,|\,\psi\,\rangle= {\displaystyle\int} \psi(q)\,\langle\,q\,|\,E\,\rangle^{\ast}\,dq \] Here $\langle\,q\,|\,E\,\rangle$ is a stationary wave with energy $E$ and $\psi(q)=\langle\,q\,|\,\psi\,\rangle$ is the Schr\"{o}dinger wave function for the state $|\,\psi\,\rangle$. But this representation is valid for \textit{any} normalizable state (the coordinate basis is complete), and so we are free to choose $\psi(q)$ as we please, provided only that it is a square-integrable function. Taking $\psi(q)$ to have support over only an arbitrarily narrow interval about $q_{0}$ essentially `picks out' the stationary wave value at $q_{0}$. 
In this way we argue that \textit{any demand placed on }$\langle\,E\,|\,\psi\,\rangle$\textit{ for every [normalizable] state }$|\,\psi\,\rangle$\textit{ becomes a condition on the [energy-normalized] stationary wave }$\langle\,q\,|\,E\,\,\rangle$\textit{ that must be met for all }$\,q$. \section{Example: Particle in Free Fall} In this -- arguably the simplest -- case, we take $\widehat{H}=\widehat{p} ^{2}/2m-F\widehat{x}$, with $F=-mg$ denoting the classical force of gravity. (With $F=q\varepsilon$, the same Hamiltonian describes a charge $q$ in a uniform electric field $\varepsilon$.) The spectrum is non-degenerate, and stretches continuously from $E_{\min}=-\infty$ to $E_{\max}=\infty$. While the unbounded nature of the spectrum from below is considered unphysical, this model nonetheless serves a useful purpose by sidestepping the issue of boundary conditions at the potential energy minimum. With no isolated levels and an unbounded continuum, Eq.(\ref{history-mixed}) becomes \begin{equation} \langle\,\tau\,|\,\psi\,\rangle\equiv\frac{1}{\sqrt{2\pi}} {\displaystyle\int\limits_{-\infty}^{\infty}} \exp\left( iE\tau\right) \langle\,E\,|\,\psi\,\rangle\,dE \label{history-open} \end{equation} Eq.(\ref{history-open}) will be recognized as a conventional Fourier transform, the properties of which follow from the extensive theory on Fourier integrals. In particular, the integral maps square-integrable functions $\langle\,E\,|\,\psi\,\rangle$ into new functions $\langle\,\tau \,|\,\psi\,\rangle$ (wave functions in the time basis) that are themselves square-integrable \cite{Olver2}. In the coordinate basis, the stationary states are $\langle x|\,E\,\rangle =C_{E}Ai(-z)$, with $Ai(\ldots)$ the Airy function, $z\equiv\kappa\left( x+E/F\right) $, and $\kappa\equiv\left( 2mF\right) ^{1/3}$ \cite{Landau}. The constant $C_{E}$ is fixed such that these stationary states are energy-normalized, i.e., $\langle\,E\,|\,E^{\prime}\,\rangle=\delta\left( E\,-\,E^{\prime}\right) $. 
From the orthogonality relation for Airy functions \cite{Aspnes}, we find \begin{align*} \langle\,E\,|\,E^{\prime}\,\rangle & =C_{E}C_{E^{\prime}} {\displaystyle\int\limits_{-\infty}^{\infty}} \,dx\,Ai(-\kappa x-\kappa E/F)\,Ai(-\kappa x-\kappa E^{\prime}/F)\\ & =C_{E}^{2}\frac{1}{\kappa}\delta\left( \kappa E/F\,-\kappa E^{\prime }/F\right) =C_{E}^{2}\frac{\left\vert F\right\vert }{\kappa^{2}}\delta\left( E\,-E^{\prime}\right) \end{align*} Thus, the desired normalization follows if we take \begin{equation} C_{E}=\sqrt{\kappa^{2}/\left\vert F\right\vert } \label{norm-freefall} \end{equation} \subsection{Free-Fall Timelines} The Schr\"{o}dinger wave function associated with a time state follows by taking $|\,\psi\,\rangle=|\,x\,\rangle$ in Eq.(\ref{history-open}). Defining $\langle\,x\,|\,\tau\,\rangle\equiv\Xi_{\tau}(x)$ , we find with the help of Eq.(\ref{norm-freefall}) \begin{equation} \Xi_{\tau}(x)=\sqrt{\frac{\kappa^{2}}{2\pi\left\vert F\right\vert }} {\displaystyle\int\limits_{-\infty}^{\infty}} \,dE\exp\left( -iE\tau\right) \,Ai(-\kappa x-\kappa E/F) \label{integral-freefall} \end{equation} The integral is essentially the Fourier transform of the Airy function; this is readily identified from the integral representation for $Ai$ \cite{Abramowitz} to give \begin{equation} \Xi_{\tau}(x)=\sqrt{\frac{\left\vert F\right\vert }{2\pi}}\exp\left( iFx\tau-iF^{2}\tau^{3}/6m\right) \label{waves-open} \end{equation} Notice that $\Xi_{\tau}^{\ast}(x)=\Xi_{-\tau}(x)$, a result that also follows from inspection of the integral form, Eq.(\ref{integral-freefall}). The timeline wave for this case is simplicity itself: except for a [physically insignificant] phase factor and a different normalization, Eq.(\ref{waves-open}) is the usual plane wave associated with the momentum eigenstate $|\,p\,\rangle$ for momentum $p=F\tau$! Accordingly, at the system time $\tau$ the particle attains a specified value of momentum ($F\tau$). 
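A quick independent check of Eq.(\ref{waves-open}): since the time states evolve as $e^{-i\widehat{H}t}|\,\tau\,\rangle=|\,\tau+t\,\rangle$, the timeline wave must satisfy $i\,\partial_{\tau}\Xi_{\tau}=\widehat{H}\Xi_{\tau}$ in the coordinate representation. A sympy sketch (taking $F>0$ and $\hbar=1$) confirms this:

```python
# Check (sympy, hbar = 1, F > 0): the free-fall timeline wave of
# Eq. (waves-open) satisfies i d(Xi)/d(tau) = H Xi, where
# H = -(1/2m) d^2/dx^2 - F x, as covariance of the time states requires.
import sympy as sp

x, tau = sp.symbols('x tau', real=True)
m, F = sp.symbols('m F', positive=True)

Xi = sp.sqrt(F/(2*sp.pi))*sp.exp(sp.I*F*x*tau - sp.I*F**2*tau**3/(6*m))

lhs = sp.I*sp.diff(Xi, tau)                    # i d(Xi)/d(tau)
rhs = -sp.diff(Xi, x, 2)/(2*m) - F*x*Xi        # H acting on Xi
assert sp.simplify(lhs - rhs) == 0
```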
Routine -- albeit not rigorous -- means for establishing directly the properties of the resulting timeline rely on the integral representation of the Dirac delta function \begin{equation} \delta\left( k\right) =\frac{1}{2\pi} {\displaystyle\int\limits_{-\infty}^{\infty}} \exp\left( ikx\right) \,\,dx \label{Dirac delta} \end{equation} coupled with a certain flair for manipulation. For example, using Eq.(\ref{Dirac delta}) we easily discover that the timeline waves for this case are truly orthogonal: \begin{align*} \langle\,\tau^{\prime}\,|\,\tau\,\rangle & \equiv {\displaystyle\int\limits_{-\infty}^{\infty}} \,dx\,\Xi_{\tau^{\prime}}^{\ast}(x)\,\Xi_{\tau}(x)\\ & =\frac{\left\vert F\right\vert }{2\pi}\exp\left[ iF^{2}\left( \tau^{\prime3}-\tau^{3}\right) /6m\right] {\displaystyle\int\limits_{-\infty}^{\infty}} \,dx\,\exp\left[ iFx\left( \tau-\tau^{\prime}\right) \right] =\delta (\tau^{\prime}-\tau) \end{align*} Timeline closure (cf. Eq.(\ref{closure rule})) can be confirmed with equal ease: \[ {\displaystyle\int\limits_{-\infty}^{\infty}} \,d\tau\,\Xi_{\tau}^{\ast}(x^{\prime})\,\Xi_{\tau}(x)=\frac{\left\vert F\right\vert }{2\pi} {\displaystyle\int\limits_{-\infty}^{\infty}} \,d\tau\,\exp\left[ iF\tau\left( x-x^{\prime}\right) \right] =\delta(x-x^{\prime}) \] These cavalier manipulations find their ultimate justification in the theory of distributions, or generalized functions, which gives precise meaning to integrals such as Eq.(\ref{Dirac delta}) that do not converge in any standard sense. \subsection{Time Operator for a Freely-Falling Particle} With a spectrum that is unbounded both above and below, a particle in free-fall is described by timeline functions $\langle\,\tau\,|\,\psi\,\rangle$ that support a canonical time operator, as discussed in Sec. III. 
Indeed, Eq.(\ref{history-open}) can be integrated once by parts to show that $\langle\,\tau\,|\,\psi\,\rangle$ is $o\left( \tau^{-1}\right) $ as $\tau\rightarrow\pm\infty$, just enough to secure convergence of the integral in Eq.(\ref{Cauchy-T}). The restrictions leading to this conclusion are quite modest ($\langle\,E\,|\,\psi\,\rangle$\ must be continuously differentiable and its derivative integrable over the entire real line), and likely to be met in all but the most pathological cases. Notice that the existence of a time operator here does \textit{not} contradict Pauli's argument, since all energies are allowed for a freely-falling mass. With still more manipulative flair, we can proceed to assign matrix elements of $\widehat{T}$ in the coordinate basis (cf. Eq.(\ref{time operator})): \begin{align*} \langle\,x\,|\widehat{T}|\,x^{\prime}\rangle & = {\displaystyle\int\limits_{-\infty}^{\infty}} \langle\,x\,|\,\tau\,\rangle\tau\langle\,\tau\,|\,x^{\prime}\rangle\,d\tau= {\displaystyle\int\limits_{-\infty}^{\infty}} \,d\tau\,\Xi_{\tau}^{\ast}(x^{\prime})\tau\,\Xi_{\tau}(x)\\ & =\frac{\left\vert F\right\vert }{2\pi} {\displaystyle\int\limits_{-\infty}^{\infty}} \,d\tau\,\tau\exp\left[ iF\tau\left( x-x^{\prime}\right) \right] =\frac {1}{iF}\frac{\partial}{\partial x}\delta(x-x^{\prime}) \end{align*} These are reminiscent of matrix elements of the momentum operator $\widehat{p}$ in this basis: comparing the two, we arrive at the identification \begin{equation} \widehat{T}_{free-fall}=\frac{1}{F}\widehat{\,p} \label{T-freefall} \end{equation} By inspection, the time operator of Eq.(\ref{T-freefall}) clearly is Hermitian and canonically conjugate to $\widehat{H}$, $\left[ \widehat{T} ,\widehat{H}\right] =i$. 
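The commutation relation can also be checked directly in the coordinate representation, treating $\widehat{T}=\widehat{p}/F$ and $\widehat{H}$ as differential operators acting on an arbitrary test wave function (a sympy sketch, $\hbar=1$):

```python
# Check (sympy, hbar = 1): with T = p/F of Eq. (T-freefall) and
# H = p^2/2m - F x acting as differential operators on a test wave
# function psi(x), the commutator obeys [T, H] psi = i psi.
import sympy as sp

x = sp.symbols('x', real=True)
m, F = sp.symbols('m F', positive=True)
psi = sp.Function('psi')(x)

p = lambda f: -sp.I*sp.diff(f, x)        # momentum operator
H = lambda f: p(p(f))/(2*m) - F*x*f      # free-fall Hamiltonian
T = lambda f: p(f)/F                     # candidate time operator

commutator = sp.simplify(T(H(psi)) - H(T(psi)))   # [T, H] psi
assert sp.simplify(commutator - sp.I*psi) == 0
```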
Thus,\textit{ the canonical time operator for a freely-falling particle is simply a scaled version of the operator for particle momentum!} \section{Example: Free Particle in One Dimension} The Hamiltonian for this case $\widehat{H}=\widehat{p}^{2}/2m$ describes a particle free to move along the line $-\infty<x<\infty$. The spectrum of $\widehat{H}$\ extends from $E_{\min}=0$ to $E_{\max}=\infty$, and each energy level is doubly-degenerate. Accordingly, the timeline waves in this example $\langle\,\tau\,|\,\psi\,\rangle$\ are calculated from the expression (cf. Eq.(\ref{history-mixed})) \begin{equation} \langle\,\tau\,|\,\psi\,\rangle\equiv\frac{1}{\sqrt{2\pi}} {\displaystyle\int\limits_{0}^{\infty}} \exp\left( iE\tau\right) \langle\,E\,|\,\psi\,\rangle\,dE \label{history-semiopen} \end{equation} (The two-fold degeneracy of the free-particle continuum implies that the stationary states carry an additional label, as elaborated below.) Eq.(\ref{history-semiopen}) is a \textit{holomorphic Fourier transform}, with very close ties to the standard Fourier transform encountered in Section IV. Indeed, if we agree to extend the function $\langle\,E\,|\,\psi\,\rangle$ to all real energies by the rule $\langle\,E\,|\,\psi\,\rangle\equiv0$ for $E<0$, then Eq.(\ref{history-semiopen}) reverts to the familiar Fourier integral. For square-integrable functions $\langle\,E\,|\,\psi\,\rangle$, Fourier integral theory then guarantees that the transform function $\langle\,\tau \,|\,\psi\,\rangle$ also is square-integrable over its domain, $-\infty <\tau<\infty$ \cite{Olver2}. 
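These transform-pair properties are easy to exhibit numerically. In the following sketch (mpmath; the exponential spectral amplitude is a hypothetical example, not taken from the text) the transform of $\langle\,E\,|\,\psi\,\rangle=e^{-E}$, supported on $E\geq0$, is $(2\pi)^{-1/2}/(1-i\tau)$: square-integrable on the real $\tau$-axis, analytic for $\Im m\,\tau>0$ (its only pole lies at $\tau=-i$, below the real axis), and decaying along the positive imaginary axis.

```python
# Holomorphic Fourier transform of a hypothetical spectral amplitude
# <E|psi> = e^{-E} (supported on E >= 0); the closed form of the
# transform is (2*pi)^(-1/2)/(1 - i*tau).
from mpmath import mp, quad, exp, sqrt, pi, inf

mp.dps = 25

def timeline(tau):
    return quad(lambda E: exp(1j*E*tau - E), [0, inf])/sqrt(2*pi)

for tau in (0.7, 0.5 + 0.8j, 2j):      # points on and above the real axis
    assert abs(timeline(tau) - 1/(sqrt(2*pi)*(1 - 1j*tau))) < 1e-10

# decay as |tau| grows in the upper half-plane
assert abs(timeline(50j)) < abs(timeline(5j)) < abs(timeline(0.5j))
```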
Furthermore, the inverse transform is \begin{equation} \langle\,E\,|\,\psi\,\rangle=\frac{1}{\sqrt{2\pi}} {\displaystyle\int\limits_{-\infty}^{\infty}} \exp\left( -iE\tau\right) \langle\,\tau\,|\,\psi\,\rangle\,d\tau \label{spectrum-semiopen} \end{equation} The essential new feature introduced by a spectrum bounded from below is that $\langle\,\tau\,|\,\psi\,\rangle$ calculated from Eq.(\ref{history-semiopen}) is analytic (holomorphic) for all complex values of $\tau$ in the upper half-plane $\Im m\,\tau>0$, and vanishes as $\left\vert \tau\right\vert \rightarrow\infty$ in the entire sector $0\leq\arg(\tau)\leq\pi$. In turn, these properties of $\langle\,\tau\,|\,\psi\,\rangle$ in the complex plane ensure that $\langle\,E\,|\,\psi\,\rangle$ calculated from Eq.(\ref{spectrum-semiopen}) is truly zero for all negative values of $E$ (follows from applying the residue calculus to a contour of integration consisting of the real axis closed by an infinite semicircle in the upper half-plane). \subsection{Free-Particle Timelines} We begin by taking the degenerate eigenfunctions to be plane waves, writing $|\,E\,\rangle\rightarrow|\,k\,\rangle$ with $\langle\,x\,|\,k\,\rangle =C_{k}\exp\left( ikx\right) $. These are harmonic oscillations with wavenumber $k$ and energy $E_{k}=k^{2}/2m$. 
Orthogonality of these waves is expressed by \begin{align*} \langle\,k\,|\,k^{\prime}\,\rangle & =C_{k}C_{k^{\prime}} {\displaystyle\int\limits_{-\infty}^{\infty}} \,dx\,\exp\left( ikx-ik^{\prime}x\right) \\ & =C_{k}^{2}2\pi\,\delta\left( k-k^{\prime}\right) =C_{k}^{2}\,\frac {2\pi\left\vert k\right\vert }{m}\,\delta\left( E_{k}\,-\,E_{k^{\prime} }\right) , \end{align*} so that energy normalization in this case requires \begin{equation} C_{k}=\sqrt{\frac{m}{2\pi\left\vert k\right\vert }} \label{norm-free} \end{equation} Plane waves running in opposite directions ($\pm k$) give rise to distinct quantum histories, which we distinguish by the direction of wave propagation: $|\,\tau\,\rangle\rightarrow|\,\tau,\rightleftarrows\rangle$. Timeline elements in this representation are described by the Schr\"{o}dinger wave functions $\langle\,x\,|\,\tau,\rightleftarrows\rangle\equiv\Xi_{\tau }^{\rightleftarrows}(x)$, obtained by taking $|\,\psi\,\rangle=|\,x\,\rangle$ in Eq.(\ref{history-semiopen}): \begin{align} \Xi_{\tau}^{\rightleftarrows}(x) & =\frac{1}{\sqrt{2\pi}} {\displaystyle\int\limits_{0}^{\infty}} \exp\left( -iE_{k}\tau\right) C_{k}\exp(\pm ikx)\,dE_{k}\nonumber\\ & =\frac{1}{2\pi\sqrt{m}} {\displaystyle\int\limits_{0}^{\infty}} \,dk\,\sqrt{k}\exp\left( \pm ikx-ik^{2}\tau/2m\right) \label{integral-directional} \end{align} In this and subsequent expressions, the right (left) arrow is associated with the upper (lower) sign. 
The integral of Eq.(\ref{integral-directional}) is related to the parabolic cylinder function $D_{\nu}(\ldots)$; in particular, we have for $\Im m\,\tau<0$ ($\Re e\left( \,i\tau\right) >0$) \cite{Gradshteyn2} \begin{equation} \Xi_{\tau}^{\rightleftarrows}(x)=\frac{1}{4\sqrt{\pi m}}z^{3/2}\exp\left( -x^{2}z^{2}/4\right) D_{-3/2}\left( \mp ixz\right) \text{\qquad} z\equiv\sqrt{\frac{m}{i\tau}} \label{waves-directional} \end{equation} This form holds for $\left\vert \arg\,z\right\vert <\pi/4$, but since $D_{\nu }$ is an entire function of its argument \cite{Bateman2} the result can be analytically continued to all real values of $\tau$ ($\arg\,z=\pm\pi/4$). Clearly, $\Xi_{\tau}^{\rightleftarrows}(-x)=\Xi_{\tau}^{\leftrightarrows}(x)$. For $\tau$ real and negative we take $\arg\,z=\pi/4$ in Eq.(\ref{waves-directional}) to obtain \begin{align} \Xi_{-\tau}^{\rightleftarrows}(x) & =\frac{1}{4\sqrt{\pi m}}\left\vert z\right\vert ^{3/2}\exp\left( -ix^{2}\left\vert z\right\vert ^{2} /4+i3\pi/8\right) D_{-3/2}\left( \mp ix\left\vert z\right\vert \exp\left( +i\pi/4\right) \right) \nonumber\\ & =\left[ \Xi_{\tau}^{\leftrightarrows}(x)\right] ^{\ast}, \label{time-symmetry} \end{align} a relation that also is evident from the integral representation, Eq.(\ref{integral-directional}). These results are consistent with the pioneering 1974 work of Kijowski \cite{Kijowski}, who used an axiomatic approach to construct a distribution of arrival times in the momentum representation; however, the coordinate form $\Xi_{\tau}^{\rightleftarrows }(x)$ given by Eq.(\ref{waves-directional}) did not appear in the literature until more than twenty years later \cite{Muga}. Another representation better suited to numerical computation relies on the degeneracy of free-particle waves to construct histories from standing wave combinations of plane waves. Since standing waves are parity eigenfunctions, parity -- not direction of travel -- is the `good' quantum label in this scheme. 
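Eq.(\ref{waves-directional}) can be checked against the defining integral wherever the latter converges absolutely. A mpmath sketch ($\hbar=m=1$; the test point is arbitrary, with $\Im m\,\tau<0$ so that the integrand of Eq.(\ref{integral-directional}) is damped):

```python
# Numerical check of Eq. (waves-directional) (mpmath, hbar = m = 1):
# for Im(tau) < 0 the defining integral of Eq. (integral-directional)
# converges absolutely and can be compared with the closed
# parabolic cylinder form.
from mpmath import mp, mpf, quad, exp, sqrt, pi, pcfd, inf

mp.dps = 25
x, tau = mpf('0.9'), -0.4j               # arbitrary test point, Im(tau) < 0

# rightward-running wave: upper sign in Eq. (integral-directional)
integral = quad(lambda k: sqrt(k)*exp(1j*k*x - 1j*k**2*tau/2),
                [0, inf])/(2*pi)

z = sqrt(1/(1j*tau))                     # here i*tau = 0.4, so z is real
closed = z**mpf('1.5')*exp(-x**2*z**2/4)*pcfd(mpf('-1.5'), -1j*x*z)/(4*sqrt(pi))

assert abs(integral - closed) < 1e-10
```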
The competing descriptions in terms of running waves and standing waves are connected by a unitary transformation; as noted in Sec. II, this same transformation also relates the timelines stemming from the two representations (cf. Eq.(\ref{subspace-rotation})): \begin{equation} \begin{bmatrix} |\,\tau,\rightarrow\rangle\\ |\,\tau,\leftarrow\rangle \end{bmatrix} =\frac{1}{\sqrt{2}} \begin{bmatrix} 1 & i\\ 1 & -i \end{bmatrix} \begin{bmatrix} |\,\tau,+\,\rangle\\ |\,\tau,-\,\rangle \end{bmatrix} \label{dir-parity map} \end{equation} As it happens, standing waves are simply related to Bessel functions $J_{\alpha}$ of order $\alpha=\pm1/2$. Using $\exp(\pm ikx)=\sqrt{\pi kx/2}\left[ J_{-1/2}(kx)\pm iJ_{1/2}(kx)\right] $ in Eq.(\ref{integral-directional}), we find on comparing with Eq.(\ref{dir-parity map}) that timeline elements in the standing-wave representation are described by the coordinate-space forms $\langle \,x\,|\,\tau,\pm\rangle\equiv\Xi_{\tau}^{\pm}(x)$, where \begin{equation} \Xi_{\tau}^{\pm}(x)=\frac{\sqrt{x}}{2\sqrt{\pi m}} {\displaystyle\int\limits_{0}^{\infty}} \,dk\,k\exp\left( -ik^{2}\tau/2m\right) J_{\mp1/2}(kx)\text{\qquad}x\geq0 \label{integral-parity} \end{equation} The sign label $\left( \pm\right) $ specifies the parity of these waves and prescribes their extension to $x<0$. Once again the integrals in Eq.(\ref{integral-parity}) can be evaluated in closed form. The odd-parity timeline waves for $\tau>0$ and $x\geq0$ are given by \cite{Gradshteyn} \begin{equation} \Xi_{\tau}^{-}(x)=\sqrt{\frac{2}{m}}\left( xz\right) ^{3/2}\exp\left( ix^{2}z-i\pi/8\right) \,\left[ J_{3/4}\left( x^{2}z\right) -iJ_{-1/4} \left( x^{2}z\right) \right] \text{\qquad}z\equiv\frac{m}{4\tau} \label{waves-free-o} \end{equation} Unlike Eq.(\ref{waves-directional}), $z$ in this expression is real and positive. Eq.(\ref{waves-free-o}) is essentially the result reported in a recent paper by Galapon et al. \cite{Galapon}. 
The odd-parity states by themselves constitute a complete history for an otherwise free particle that is confined to the half-axis $x>0$ (e.g., by an infinite potential wall at the origin), but for a truly free particle we also need the even-parity states. The even-parity timeline waves for $\tau>0$ and $x\geq0$ are \cite{Gradshteyn} \begin{equation} \Xi_{\tau}^{+}(x)=\sqrt{\frac{2}{m}}\left( xz\right) ^{3/2}\exp\left( ix^{2}z+i\pi/8\right) \,\left[ J_{1/4}\left( x^{2}z\right) -iJ_{-3/4} \left( x^{2}z\right) \right] \text{\qquad}z\equiv\frac{m}{4\tau} \label{waves-free-e} \end{equation} Eqs.(\ref{waves-free-o}) and (\ref{waves-free-e}) are valid for $\tau>0$; results for $\tau<0$ follow from $\Xi_{-\tau}^{\pm}(x)=\left[ \Xi_{\tau} ^{\pm}(x)\right] ^{\ast}$ (cf. Eq.(\ref{integral-parity})). Timeline waves of either parity are well-behaved for all finite values of $x$, but diverge (as $\left\vert x\right\vert ^{1/2}$) for $\left\vert x\right\vert \rightarrow \infty$. The time states constructed from running waves admit an interesting physical interpretation. For any $\tau>0$ the rightward-running timeline wave $\Xi_{\tau}^{\rightarrow}(x)$ diverges as $x^{1/2}$ for $x\rightarrow\infty$, but vanishes as $\left\vert x\right\vert ^{-3/2}$ for $x\rightarrow-\infty$; more precisely, the asymptotics of the parabolic cylinder function \cite{Bateman3} show that for any $\tau>0$ ($\arg\,z=-\pi/4$) and large $\left\vert x\right\vert $ \[ \Xi_{\tau}^{\rightarrow}(x)\sim \begin{cases} \exp\left( imx^{2}/2\tau\right) x^{1/2} & x>0\\ \left\vert x\right\vert ^{-3/2} & x<0 \end{cases} \] These features are reversed for the leftward-running wave $\Xi_{\tau }^{\leftarrow}(x)$. (Corresponding results for $\tau<0$ follow directly from the relation $\Xi_{-\tau}^{\rightleftarrows}(x)=\left[ \Xi_{\tau }^{\leftrightarrows}(x)\right] ^{\ast}$.) The changeover in behavior occurs in a small neighborhood of $x=0$. 
The width of this \textit{transition region} narrows with diminishing values of $\tau$, approaching zero for $\tau=0$. The Schr\"{o}dinger waveforms $\Xi_{\tau}^{\rightarrow}(x)$ for two [system] times straddling $\tau=0$ are shown in Figs. 1 and 2, using a color-for-phase plotting style that captures both the modulus and phase of these complex-valued functions. The functions at any [laboratory] time other than $t=0$ are found by replacing $|\,\tau\,\rangle$ in the above argument with the evolved state $\exp\left( -i\widehat{H}t\right) |\,\tau\,\rangle =|\,\tau\,+t\,\rangle$. Thus, for $t\neq0$ the function behavior shown in the figures is unaltered, but the origin of time is shifted so that the abrupt change in behavior around $x=0$ occurs generally at the [system] time $\tau=-t$. Consequently, $-\tau$ is designated an \textit{arrival time}, inasmuch as it signals the [laboratory] time when the bulk of probability shifts from one side of the coordinate origin to the other \cite{Muga}. (The minus sign can be understood by noting that as system time increases, the time to arrival diminishes.) In summary, the construction of Eq.(\ref{waves-directional}) leads in this case to \textit{time-of-arrival states} for leftward [$\leftarrow$] or rightward [$\rightarrow$]-running waves, with $-\tau$ specifying the arrival time at the coordinate origin $x=0$. This interpretation receives further support from the recent work of Galapon \cite{Galapon}, who showed that similar states in a confined space (where they can be normalized) are such that the event of the centroid arriving at the origin coincides with the uncertainty in position being minimal. Arguably this is the best we can do in defining arrival times for entities subject to quantum uncertainty. 
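As a numerical consistency check on the closed forms, the unitary map of Eq.(\ref{dir-parity map}) requires $\Xi_{\tau}^{\rightarrow}=(\Xi_{\tau}^{+}+i\,\Xi_{\tau}^{-})/\sqrt{2}$. A mpmath sketch ($\hbar=m=1$, arbitrary test point) confirms that Eqs.(\ref{waves-directional}), (\ref{waves-free-o}) and (\ref{waves-free-e}) satisfy this relation:

```python
# Consistency check (mpmath, hbar = m = 1): the closed forms of
# Eqs. (waves-directional), (waves-free-o) and (waves-free-e) must obey
# Xi_rightward = (Xi_even + i*Xi_odd)/sqrt(2) for real tau > 0, x > 0.
from mpmath import mp, mpf, exp, sqrt, pi, pcfd, besselj

mp.dps = 25
x, tau = mpf('1.3'), mpf('0.7')          # arbitrary test point

zd = sqrt(1/(1j*tau))                    # arg(zd) = -pi/4 for real tau > 0
Xi_right = zd**mpf('1.5')*exp(-x**2*zd**2/4) \
           *pcfd(mpf('-1.5'), -1j*x*zd)/(4*sqrt(pi))

z = 1/(4*tau); w = x**2*z                # z = m/(4 tau), real and positive
pref = sqrt(2)*(x*z)**mpf('1.5')
Xi_odd  = pref*exp(1j*w - 1j*pi/8)*(besselj(mpf('0.75'), w)
                                    - 1j*besselj(mpf('-0.25'), w))
Xi_even = pref*exp(1j*w + 1j*pi/8)*(besselj(mpf('0.25'), w)
                                    - 1j*besselj(mpf('-0.75'), w))

assert abs(Xi_right - (Xi_even + 1j*Xi_odd)/sqrt(2)) < 1e-10
```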
Time-of-arrival states specific to an arbitrary coordinate point, say $x=a$, can be obtained as spatial translates of those constructed here: $|\,\tau,\rightleftarrows,a\,\rangle=\exp\left( -i\widehat{p}a\right) |\,\tau,\rightleftarrows\,\rangle$ ($\widehat{p}$, the particle momentum operator, is the generator of displacements). The associated Schr\"{o}dinger wave function is $\langle\,x\,|\,\tau,\rightleftarrows,a\rangle=\langle \,x-a\,|\,\tau,\rightleftarrows\rangle=\Xi_{\tau}^{\rightleftarrows}(x-a)$. In keeping with our earlier observation concerning the phase ambiguity of timelines, we note that spatial translates also can be recovered from Eq.(\ref{integral-directional}) by re-defining the phases of the stationary waves as $\langle\,x\,|\,k\,\rangle\rightarrow\exp\left( \mp ika\right) \langle\,x\,|\,k\,\rangle$. \begin{figure} \caption{The timeline waveform $\Xi_{\tau}^{\rightarrow}(x)$ for $\tau =-0.005$\ constructed from rightward-running plane waves. The shading (coloring) represents varying phase values for this complex function. In units where $\hbar=m=1$, the plot extends from $x=-1$ to $x=+1$. Except for an overall phase factor, this also represents the conjugate of the waveform $\Xi_{\tau}^{\leftarrow}(x)$ at the system time $\tau=0.005$ (cf. Eq.(\ref{time-symmetry})).} \label{fig1} \end{figure} \begin{figure} \caption{The timeline waveform $\Xi_{\tau}^{\rightarrow}(x)$ for $\tau =0.005$\ constructed from rightward-running plane waves. The shading (coloring) represents varying phase values for this complex function. In units where $\hbar=m=1$, the plot extends from $x=-1$ to $x=+1$. Except for an overall phase factor, this also represents the conjugate of the waveform $\Xi_{\tau}^{\leftarrow}(x)$ at the system time $\tau=-0.005$ (cf. Eq.(\ref{time-symmetry})).} \label{fig2} \end{figure} \subsection{A Free-Particle Time Operator in One Dimension} A time operator for free particles can be constructed following the recipe of Sec. III. 
The invariance expressed by Eq.(\ref{subspace-projector}) ensures that the same time operator results no matter which [degenerate] representation we choose for the computation. With parity as the `good' quantum number, the free-particle time operator is composed from operators in the even- and odd-parity subspaces: $\widehat{T}=\widehat{T}_{+} \oplus\widehat{T}_{-}$, where (cf. Eq.(\ref{Cauchy-T})) \begin{equation} \widehat{T}_{\pm}\equiv P {\displaystyle\int\limits_{-\infty}^{\infty}} |\,\tau,\pm\rangle\tau\langle\,\tau,\pm|\,d\tau \end{equation} Now the energy-normalized stationary waves of odd parity vanish at the lower spectral bound as $E^{1/4}$(cf. Eq.(\ref{norm-free})), so the general theory of Sec. III implies that $\widehat{T}_{-}$ is well-defined by the integral above, the principal value notwithstanding. The coordinate-space matrix elements of this operator are simply related to one of a class of integrals $I_{l}\left( r,r^{\prime}\right) $ studied in Appendix A; using the result reported there, we find \begin{equation} \langle\,x\,|\widehat{T}_{-}|\,x^{\prime}\rangle= {\displaystyle\int\limits_{-\infty}^{\infty}} \left[ \Xi_{\tau}^{-}(x)\right] ^{\ast}\tau\Xi_{\tau}^{-}(x^{\prime} )\,d\tau=\frac{1}{2}xx^{\prime}I_{0}\left( x,x^{\prime}\right) =i\frac{m} {4}x_{<}\operatorname{sgn}\left( x-x^{\prime}\right) \end{equation} The case for $\widehat{T}_{+}$ is more delicate, since the energy-normalized stationary waves of even parity actually diverge at the lower spectral bound as $E^{-1/4}$ (cf. Eq.(\ref{norm-free})). 
Nonetheless, Appendix A confirms that the principal value integral for the coordinate space matrix elements of $\widehat{T}_{+}$ remains well-defined, and can be evaluated in closed form to give \begin{equation} \langle\,x\,|\widehat{T}_{+}|\,x^{\prime}\rangle= {\displaystyle\int\limits_{-\infty}^{\infty}} \left[ \Xi_{\tau}^{+}(x)\right] ^{\ast}\tau\Xi_{\tau}^{+}(x^{\prime} )\,d\tau=\frac{1}{2}xx^{\prime}I_{-1}\left( x,x^{\prime}\right) =i\frac {m}{4}x_{>}\operatorname{sgn}\left( x-x^{\prime}\right) \end{equation} Combining the even and odd-parity computations, we arrive at the provocatively simple form \begin{equation} \langle\,x\,|\widehat{T}_{1d-free}|\,x^{\prime}\rangle=i\frac{m}{4}\left( x+x^{\prime}\right) \operatorname{sgn}\left( x-x^{\prime}\right) \label{T-1d-free} \end{equation} Eq.(\ref{T-1d-free}) agrees with the formula reported by Galapon et al. \cite{Galapon} for a particle confined to a section of the real line, in the limit where the domain size becomes infinite. Here we arrive at the same result in an unbounded space using an alternative limiting process -- the accessible states model. \section{Example: Free Particle in Three Dimensions} In this case, the Hamiltonian $\widehat{H}$ is the operator for kinetic energy in a three-dimensional space. The spectrum of $\widehat{H}$ is semi-infinite (bounded from below by $E=0$, but no upper limit) and composed of degenerate levels. This degeneracy breeds multiple timelines, conveniently indexed by the same quantum numbers that label the spectral states. Again there is some flexibility in labeling here depending upon what dynamical variables we opt to conserve along with particle energy, but the general timeline wave $\langle\,\tau\,|\,\psi\,\rangle$ is constructed from its spectral counterpart $\langle\,E\,|\,\psi\,\rangle$ following the same prescription used in the one-dimensional case, Eq.(\ref{history-semiopen}). 
\subsection{Angular Momentum Timelines for a Free Particle} In the angular momentum representation, the stationary states are indexed by a continuous wave number $k$ (any non-negative value), an orbital quantum number $l$ (a non-negative integer), and a magnetic quantum number $m_{l}$ (an integer between $-l$ and $+l$, not to be confused with particle mass): $|\,E\,\rangle\rightarrow|\,klm_{l}\,\rangle$. This stationary state has energy $E_{k}=k^{2}/2m$. The associated Schr\"{o}dinger waveforms are spherical waves $\langle\overrightarrow{r}\,|\,klm_{l}\,\rangle=$ $C_{k}\,\,j_{l}(kr)Y_{l}^{m_{l}}(\Omega_{r})$, formed as a product of a spherical Bessel function $j_{l}$ with a spherical harmonic $Y_{l}^{m_{l}}$. $C_{k}$ is a constant that -- for the construction of timelines -- is fixed by energy normalization. Noting that the spherical harmonics are themselves normalized over the unit sphere, we apply the Bessel function closure rule \cite{Arfken} to evaluate the remaining portion of the normalization integral: \begin{align*} \langle\,klm_{l}\,|\,k^{\prime}lm_{l}\rangle & =C_{k}C_{k^{\prime}} {\displaystyle\int\limits_{0}^{\infty}} \,dr\,r^{2}j_{l}\left( kr\right) j_{l}\left( k^{\prime}r\right) \\ & =C_{k}^{2}\frac{\pi}{2k^{2}}\,\delta\left( k-k^{\prime}\right) =C_{k} ^{2}\,\frac{\pi}{2mk}\,\delta\left( E_{k}\,-\,E_{k}^{\prime}\right) \end{align*} Thus, energy normalization of these spherical waves requires \begin{equation} C_{k}=\sqrt{\frac{2mk}{\pi}} \label{norm-spherical} \end{equation} The time states $|\,\tau lm_{l}\rangle$ in this representation have components in the coordinate basis given by $\langle\overrightarrow{r}\,|\,\tau lm_{l}\,\rangle\equiv\Xi_{\tau}^{l}(r)Y_{l}^{m_{l}}(\Omega_{r})$ where $\Xi_{\tau}^{l}(r)$, the radial piece of the timeline wave, is calculated from (cf. 
Eq.(\ref{history-semiopen})): \begin{align} \Xi_{\tau}^{l}(r) & =\frac{1}{\sqrt{2\pi}} {\displaystyle\int\limits_{0}^{\infty}} \exp\left( -iE_{k}\tau\right) \,C_{k}\,j_{l}\left( kr\right) \,dE_{k}\nonumber\\ & =\frac{1}{\sqrt{2\pi mr}} {\displaystyle\int\limits_{0}^{\infty}} \exp\left( -ik^{2}\tau/2m\right) \,kJ_{l+1/2}\left( kr\right) \,dk \label{integral-spherical} \end{align} The last line follows from the connection between spherical Bessel functions and the (cylinder) Bessel functions of the first kind. The closure rule obeyed by these time states \[ {\displaystyle\int\limits_{-_{\infty}}^{\infty}} \,\Xi_{\tau}^{l}(r)\,\left[ \Xi_{\tau}^{l}(r^{\prime})\right] ^{\ast} d\tau=\frac{1}{r^{2}}\,\delta\left( r^{\prime}-r\right) \] can be confirmed from the integral representation of Eq.(\ref{integral-spherical}) using the closure rule for Bessel functions \cite{Arfken}. The integral in Eq.(\ref{integral-spherical}) converges for all $\Im m\,\tau\leq0$ and any $l\geq0$. Defining $2\alpha\equiv l-1/2$, we find \cite{Gradshteyn} \begin{equation} \Xi_{\tau}^{l}(r)=\sqrt{\frac{4r}{m}}z^{3/2}\exp\left( ir^{2}z-i\pi \frac{2\alpha+1}{4}\right) \,\left[ J_{\alpha+1}\left( r^{2}z\right) -iJ_{\alpha}\left( r^{2}z\right) \right] \text{\qquad}z\equiv\frac{m} {4\tau} \label{waves-BesselJ} \end{equation} For fixed $\alpha$, $J_{\alpha}(\ldots)$ is a regular function of its argument throughout the complex plane cut along the negative real axis. Thus, through the magic of analytic continuation, Eq.(\ref{waves-BesselJ}) extends $\Xi_{\tau}^{l}(r)$ to the whole cut $z$-plane$~\left\vert \arg(z)\right\vert <\pi$. Now for any real $\tau>0$, $z$ is a positive number, say $z=x$. To recover results for $\tau<0$, $z$ must approach the negative real axis from \textit{above} ($\arg(z)\rightarrow\pi$ for $\arg(\tau)\rightarrow-\pi$). 
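The closed form Eq.(\ref{waves-BesselJ}) can be compared against the defining integral Eq.(\ref{integral-spherical}) wherever $\Im m\,\tau<0$ makes the $k$-integral absolutely convergent. A minimal Python sketch (the particular values of $l$, $r$, $\tau$ and the quadrature grid are arbitrary test choices, with $\hbar=m=1$):

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import trapezoid

m, l, r = 1.0, 1, 1.5
alpha = (l - 0.5) / 2.0              # the paper's 2*alpha = l - 1/2
tau = 0.8 - 0.3j                     # Im(tau) < 0: absolutely convergent k-integral

# Integral representation, Eq. (integral-spherical)
k = np.linspace(0.0, 25.0, 250001)
integrand = np.exp(-1j * k**2 * tau / (2*m)) * k * jv(l + 0.5, k*r)
Xi_int = trapezoid(integrand, k) / np.sqrt(2*np.pi*m*r)

# Closed form, Eq. (waves-BesselJ), with numpy's principal branches (|arg z| < pi)
z = m / (4.0 * tau)
Xi_closed = (np.sqrt(4*r/m) * z**1.5
             * np.exp(1j*r**2*z - 1j*np.pi*(2*alpha + 1)/4)
             * (jv(alpha + 1, r**2*z) - 1j*jv(alpha, r**2*z)))
rel_err = abs(Xi_int - Xi_closed) / abs(Xi_closed)
```

Agreement here exercises both the phase $e^{-i\pi(2\alpha+1)/4}$ and the normalization $\sqrt{4r/m}$ of the closed form.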
Writing $z=x\exp\left( i\pi\right) $ in Eq.(\ref{waves-BesselJ}) and using $J_{\alpha}(\exp\left( i\pi\right) x)=\exp\left( i\pi\alpha\right) J_{\alpha}(x)$ \cite{Bateman} leads to the relation \begin{equation} \Xi_{-\tau}^{l}(r)=\left[ \Xi_{\tau}^{l}(r)\right] ^{\ast} \end{equation} for any real value of $\tau$, a result that also is evident from the integral form, Eq.(\ref{integral-spherical}). The behavior of $\Xi_{\tau}^{l}(r)$ for small $r$ and/or large $\tau$ follows directly from the power series representation of the Bessel function \cite{Abramowitz2}. Apart from numerical factors, we find from Eq.(\ref{waves-BesselJ}) \begin{equation} \Xi_{\tau}^{l}(r)\simeq\frac{z^{3/2+\alpha}}{r^{5/2}}\,\sim\frac{r^{l}} {\tau^{l/2+5/4}}\text{\qquad}r^{2}\left\vert z\right\vert \ll1 \end{equation} and this result is valid in any sector of the cut $z$-plane. Similarly, the asymptotic series for the Bessel function \cite{Abramowitz3} furnishes a large-argument approximation to $\Xi_{\tau}^{l}(r)$, valid for any $l\geq0$ and $\left\vert \arg(z)\right\vert <\pi$: \begin{equation} \Xi_{\tau}^{l}(r)\sim\frac{1}{r^{5/2}}\sqrt{\frac{2}{\pi m}}\left[ \left( \frac{l\left( l+1\right) +1/4}{4}-i2z\right) \exp\left( i2z-i\pi \frac{2l+1}{4}\right) +\frac{2l+1}{4}\right] \text{\qquad}r^{2}\left\vert z\right\vert \gg1 \end{equation} \subsection{Uni-Directional Timelines for a Free Particle} Free particles also can be described by momentum eigenstates labeled by a wave vector $\overrightarrow{k}$. These momentum states have energy\ $E_{k} =k^{2}/2m$, and so must be expressible as a superposition of angular momentum states with the same energy: \begin{equation} |\,\overrightarrow{k}\,\rangle= {\displaystyle\sum_{l=0}^{\infty}} {\displaystyle\sum_{\,m_{l}=-l}^{l}} U_{l}^{m_{l}}\left( \widehat{k}\right) |\,klm_{l}\,\rangle \label{sph-2-plane transform} \end{equation} Here $\widehat{k}$\ is the unit vector specifying the orientation of the wave vector with modulus $k$. 
The transformation from the angular momentum representation to the linear one should be unitary to preserve the energy normalization required for the construction of timelines. To identify the transformation coefficients $U_{l}^{m_{l}}\left( \widehat{k}\right) $, we note first that the Schr\"{o}dinger waveforms associated with $|\,\overrightarrow{k}\,\rangle$\ are plane waves multiplied by a suitable normalizing constant $C_{k}^{(uni)}$: \begin{equation} \langle\overrightarrow{r}\,|\,\overrightarrow{k}\,\rangle=C_{k}^{(uni)} \,\,\exp\left( i\overrightarrow{k}\cdot\overrightarrow{r}\right) \label{plw-free3d} \end{equation} Next, we appeal to the spherical wave decomposition of a plane wave \cite{Newton} \[ \exp\left( i\overrightarrow{k}\cdot\overrightarrow{r}\right) =4\pi {\displaystyle\sum_{l=0}^{\infty}} i^{l}j_{l}(kr) {\displaystyle\sum_{\,m_{l}=-l}^{l}} Y_{l}^{m_{l}}(\Omega_{r})\left[ Y_{l}^{m_{l}}(\Omega_{k})\right] ^{\ast} \] to write the coordinate-space projection of Eq.(\ref{sph-2-plane transform}): \[ 4\pi C_{k}^{(uni)} {\displaystyle\sum_{l=0}^{\infty}} i^{l}j_{l}(kr) {\displaystyle\sum_{\,m_{l}=-l}^{l}} Y_{l}^{m_{l}}(\Omega_{r})\left[ Y_{l}^{m_{l}}(\Omega_{k})\right] ^{\ast}= {\displaystyle\sum_{l=0}^{\infty}} {\displaystyle\sum_{\,m_{l}=-l}^{l}} U_{l}^{m_{l}}\left( \widehat{k}\right) C_{k}\,\,j_{l}(kr)Y_{l}^{m_{l} }(\Omega_{r}) \] This will be satisfied if for every $l\geq0$ and $\left\vert m_{l}\right\vert \leq l$ we have \[ 4\pi C_{k}^{(uni)}i^{l}\left[ Y_{l}^{m_{l}}(\Omega_{k})\right] ^{\ast} =U_{l}^{m_{l}}\left( \widehat{k}\right) C_{k}\,\, \] For $l$ and $m_{l}$ both zero this last relation reduces to $\sqrt{4\pi} C_{k}^{(uni)}=U_{0}^{0}\left( \widehat{k}\right) C_{k}$, leaving $\sqrt {4\pi}U_{0}^{0}\left( \widehat{k}\right) i^{l}\left[ Y_{l}^{m_{l}} (\Omega_{k})\right] ^{\ast}=U_{l}^{m_{l}}\left( \widehat{k}\right) $. 
Setting $\sqrt{4\pi}U_{0}^{0}\left( \widehat{k}\right) =1$ then leads to \begin{equation} U_{l}^{m_{l}}\left( \widehat{k}\right) =i^{l}\left[ Y_{l}^{m_{l}} (\Omega_{k})\right] ^{\ast} \label{sph-uni map} \end{equation} that describes the desired unitary transformation \cite{Arfken4}: \[ {\displaystyle\sum_{l=0}^{\infty}} {\displaystyle\sum_{\,m_{l}=-l}^{l}} U_{l}^{m_{l}}\left( \widehat{k_{2}}\right) \left[ U_{l}^{m_{l}}\left( \widehat{k_{1}}\right) \right] ^{\ast}= {\displaystyle\sum_{l=0}^{\infty}} {\displaystyle\sum_{\,m_{l}=-l}^{l}} \left[ Y_{l}^{m_{l}}(\Omega_{2})\right] ^{\ast}Y_{l}^{m_{l}}(\Omega _{1})=\delta\left( \Omega_{1}\,-\,\Omega_{2}\right) \] It follows that the energy-normalized plane waves are described by the normalizing factor \begin{equation} C_{k}^{(uni)}=\frac{C_{k}}{\sqrt{4\pi}}U_{0}^{0}\left( \widehat{k}\right) =\frac{1}{4\pi}\sqrt{\frac{2mk}{\pi}} \label{norm-free3d} \end{equation} \textit{Uni-directional time states} are formed from plane waves all moving in the same direction, but with differing energy. Accordingly, we adopt the unit vector $\widehat{k}$ as an additional label for such time states, writing $|\,\tau\,\rangle\rightarrow|\,\tau,\widehat{k}\,\rangle$. These uni-directional time states can be related to the angular momentum time states of the preceding section. 
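The spherical-wave decomposition of a plane wave invoked above is easy to validate numerically. Collapsing the $m_{l}$ sum with the addition theorem for spherical harmonics leaves the familiar partial-wave series $e^{i\overrightarrow{k}\cdot\overrightarrow{r}}=\sum_{l}(2l+1)\,i^{l}j_{l}(kr)P_{l}(\cos\gamma)$, which the following sketch checks at one arbitrary test point (the truncation $l_{\max}=80$ is far more than needed for $kr=5$):

```python
import numpy as np
from scipy.special import spherical_jn, eval_legendre

# Partial-wave series for a plane wave, truncated at lmax = 80
kr, cosg = 5.0, 0.3
ls = np.arange(81)
series = np.sum((2*ls + 1) * 1j**ls * spherical_jn(ls, kr) * eval_legendre(ls, cosg))
err = abs(series - np.exp(1j * kr * cosg))
```

Since $j_{l}(kr)$ falls off super-exponentially once $l$ exceeds $kr$, the truncated sum reproduces the plane wave essentially to machine precision.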
Combining Eqs.(\ref{history-semiopen}), (\ref{sph-2-plane transform}), and (\ref{sph-uni map}), we find that the uni-directional timeline wave in the coordinate basis, $\langle \overrightarrow{r}\,|\,\tau,\widehat{k}\,\rangle\equiv\Xi_{\tau} ^{\widehat{k}\,}(\overrightarrow{r})$, can be computed from the spherical-wave expansion \begin{equation} \Xi_{\tau}^{\widehat{k}\,}(\overrightarrow{r})= {\displaystyle\sum_{l=0}^{\infty}} i^{l}\Xi_{\tau}^{l}(r) {\displaystyle\sum_{\,m_{l}=-l}^{l}} Y_{l}^{m_{l}}(\Omega_{r})\left[ Y_{l}^{m_{l}}(\Omega_{k})\right] ^{\ast}, \end{equation} where $\Xi_{\tau}^{l}(r)$ is the radial timeline wave of Eq.(\ref{waves-BesselJ}).\qquad Alternatively, we might try to calculate $\Xi_{\tau}^{\widehat{k} \,}(\overrightarrow{r})$ directly by taking $|\,\psi\,\rangle =|\,\overrightarrow{r}\,\rangle$ in Eq.(\ref{history-semiopen}). With the help of Eqs.(\ref{plw-free3d}) and (\ref{norm-free3d}), we obtain in this way \begin{align} \Xi_{\tau}^{\widehat{k}\,}(\overrightarrow{r}) & =\frac{1}{\sqrt{2\pi}} {\displaystyle\int\limits_{0}^{\infty}} \exp\left( -iE_{k}\tau\right) \,\langle\,\overrightarrow{r} \,|\,\overrightarrow{k}\,\rangle\,dE_{k}\nonumber\\ & =\frac{1}{4\pi^{2}\sqrt{m}} {\displaystyle\int\limits_{0}^{\infty}} \exp\left( -ik^{2}\tau/2m+i\,k\widehat{k}\cdot\overrightarrow{r}\right) \,k^{3/2}\,dk \label{integral-directional-3d} \end{align} Eq.(\ref{integral-directional-3d}) shows that the dependence of $\Xi_{\tau }^{\widehat{k}\,}(\overrightarrow{r})$ on $\widehat{k}$ and on $\overrightarrow{r}$ occurs only through the combination $\xi\equiv \widehat{k}\cdot\overrightarrow{r}$, which is nothing more than the projection of the coordinate vector $\overrightarrow{r}$\ onto the direction of plane wave propagation. (Indeed, $\Xi_{\tau}^{\widehat{k}\,}(\overrightarrow{r})$ itself is a plane wave -- albeit not a harmonic one -- with the surfaces of constant wave amplitude oriented perpendicular to $\widehat{k}$.) 
In terms of $\xi$, then, there is a universal timeline applicable to any direction in space, as befits the expected isotropy of a free-particle environment. This universal timeline has elements that we denote simply as $\Xi_{\tau}^{\,} (\xi)$, and are given by \begin{equation} \Xi_{\tau}^{\,}(\xi)=\frac{1}{4\pi^{2}\sqrt{m}} {\displaystyle\int\limits_{0}^{\infty}} \exp\left( -ik^{2}\tau/2m+i\,k\xi\right) \,k^{3/2}\,dk \label{integral-universal} \end{equation} Unlike a similar integral encountered in the one-dimensional case, Eq.(\ref{integral-universal}) fails to converge for real values of $\tau$. But the integral does define a function that is analytic throughout the lower half plane $\Im m\,\tau<0$, and can be analytically continued onto the real axis. For $\Re e\left( \,i\tau\right) >0$ we have \cite{Gradshteyn2} \begin{equation} \Xi_{\tau}^{\,}(\xi)=\frac{3}{16\sqrt{\pi^{3}m}}z^{5/2}\exp\left( -\xi ^{2}z^{2}/4\right) D_{-5/2}\left( -i\xi z\right) \text{\qquad}z\equiv \sqrt{\frac{m}{i\tau}}, \label{waves-directional-3d} \end{equation} where $D_{-5/2}\left( \ldots\right) $ is another of the parabolic cylinder functions. Eq.(\ref{waves-directional-3d}) limits $\tau$ to the sector $-\pi<\arg\,\tau<0$ ($\Re e\left( \,i\tau\right) >0$), but the mapping from $z$ to $\tau$ allows analytic continuation to the whole $\tau$-plane cut along the negative real axis, $-\pi\leq\arg\,\tau<\pi$. The complex variable $z$ then is mapped into the sector $-3\pi/4<\arg\,z\leq\pi/4$. And because $D_{\nu}$ is an entire function of its argument \cite{Bateman2}, Eq.(\ref{waves-directional-3d}) defines a single-valued function throughout this domain. 
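Both the prefactor and the branch conventions of Eq.(\ref{waves-directional-3d}) can be spot-checked against the integral representation Eq.(\ref{integral-universal}) at a point of absolute convergence, e.g. pure negative-imaginary $\tau$, where $z$ is real and positive. A minimal sketch, assuming $\hbar=m=1$ and assuming the mpmath library is available for the parabolic cylinder function $D_{-5/2}$:

```python
import numpy as np
import mpmath
from scipy.integrate import trapezoid

m, xi = 1.0, 0.7
tau = -1.0j                     # Re(i*tau) = 1 > 0: the k-integral converges absolutely

# Integral representation, Eq. (integral-universal)
k = np.linspace(0.0, 15.0, 150001)
integrand = np.exp(-1j * k**2 * tau / (2*m) + 1j * k * xi) * k**1.5
Xi_int = trapezoid(integrand, k) / (4 * np.pi**2 * np.sqrt(m))

# Closed form, Eq. (waves-directional-3d); here z = sqrt(m/(i*tau)) = 1
z = np.sqrt(m / (1j * tau))
D = complex(mpmath.pcfd(-2.5, complex(-1j * xi * z)))
Xi_closed = 3.0 / (16 * np.sqrt(np.pi**3 * m)) * z**2.5 * np.exp(-xi**2 * z**2 / 4) * D
rel_err = abs(Xi_int - Xi_closed) / abs(Xi_closed)
```

For real $\tau$ the same code applies with numpy's principal square root, which automatically places $z$ at $\arg z=\mp\pi/4$ for $\tau\gtrless0$, in line with the branch prescription described above.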
Comparing Eq.(\ref{waves-directional-3d}) for $\tau>0$ ($\arg\,z=-\pi/4$) and $\tau<0$ ($\arg\,z=\pi/4$), we discover for all real values of $\xi$ and $\tau$ \begin{equation} \Xi_{\tau}(\xi)=\left[ \Xi_{-\tau}(-\xi)\right] ^{\ast} \label{time-symmetry-univ} \end{equation} For $\tau>0$ ($\arg\,z=-\pi/4$), the asymptotics of the parabolic cylinder function \cite{Bateman3} imply \[ \Xi_{\tau}(\xi)\sim \begin{array} [c]{cc} \exp\left( im\xi^{2}/2\tau\right) \xi^{3/2} & \qquad\xi>0\\ \left\vert \xi\right\vert ^{-5/2} & \qquad\xi<0 \end{array} \] Thus, $\Xi_{\tau}(\xi)$ diverges as $\xi^{3/2}$ for $\xi\rightarrow\infty$ and vanishes as $\left\vert \xi\right\vert ^{-5/2}$ for $\xi\rightarrow-\infty$. Analogous results for $\tau<0$ follow from Eq.(\ref{time-symmetry-univ}). The behavior is reminiscent of the timeline functions constructed from running waves in one dimension. Indeed, it appears that in $\Xi_{\tau}(\xi)$ we again have time-of-arrival functions, with $-\tau$ denoting the arrival time at the coordinate origin for waves moving in the direction of $\widehat{k}$, and $\xi\equiv\widehat{k}\cdot\overrightarrow{r}$. This interpretation is supported by the illustrations in Figs. 3 and 4 showing $\Xi_{\tau}(\xi)$ for system times just prior to, and immediately following, arrival at the coordinate origin. \begin{figure} \caption{The universal timeline waveform $\Xi_{\tau}(\xi)$ for the system time $\tau=-0.005$.\ The shading (coloring) represents varying phase values for this complex function. In units where $\hbar=m=1$, the plot extends from $\xi=-1$ to $\xi=+1$.} \label{fig3} \end{figure} \begin{figure} \caption{The universal timeline waveform $\Xi_{\tau}(\xi)$ for the system time $\tau=+0.005$. The shading (coloring) represents varying phase values for this complex function. 
In units where $\hbar=m=1$, the plot extends from $\xi=-1$ to $\xi=+1$.} \label{fig4} \end{figure} \subsection{The Three-Dimensional Free-Particle Time Operator} Lastly, we investigate the time operator for this example. We exercise the freedom allowed by the degeneracy of free particle states to work in the angular momentum representation. From Eq.(\ref{norm-spherical}) we find that the stationary spherical waves $\langle\overrightarrow{r}\,|\,klm_{l} \,\rangle=C_{k}\,\,j_{l}(kr)Y_{l}^{m_{l}}(\Omega_{r})$ vanish at the lower spectral edge ($\lim_{k\rightarrow0}\langle\overrightarrow{r}\,|\,klm_{l} \,\rangle=0$), so that a free-particle time operator in three space dimensions does exist by the theory of Sec. III. The matrix elements of $\widehat{T}$ in the coordinate basis are given by (cf. Eq.(\ref{time operator})) \begin{align*} \langle\,\overrightarrow{r_{1}}\,|\widehat{T}_{3d-free} |\,\overrightarrow{r_{2}}\,\rangle & = {\displaystyle\sum_{l,\,m_{l}}} P {\displaystyle\int\limits_{-\infty}^{\infty}} \langle\,\overrightarrow{r_{1}}\,|\,\tau lm_{l}\rangle\tau\langle\tau lm_{l}\,|\,\overrightarrow{r_{2}}\,\rangle\,d\tau\\ & = {\displaystyle\sum_{l,\,m_{l}}} Y_{l}^{m_{l}}(\Omega_{1})\left[ Y_{l}^{m_{l}}(\Omega_{2})\right] ^{\ast}P {\displaystyle\int\limits_{-\infty}^{\infty}} \tau\,\Xi_{\tau}^{l}(r_{1})\left[ \Xi_{\tau}^{l}(r_{2})\right] ^{\ast }\,d\tau \end{align*} The principal value integrals are studied in Appendix A, where their existence is rigorously established and a closed-form expression given for their evaluation: \[ P {\displaystyle\int\limits_{-\infty}^{\infty}} \tau\,\Xi_{\tau}^{l}(r_{1})\left[ \Xi_{\tau}^{l}(r_{2})\right] ^{\ast }\,d\tau=i\frac{m}{2}\operatorname{sgn}\left( r_{1}-r_{2}\right) \frac {1}{r_{>}}\left( \frac{r_{<}}{r_{>}}\right) ^{l} \] Collecting the above results, we obtain \[ \langle\,\overrightarrow{r_{1}}\,|\widehat{T}_{3d-free} |\,\overrightarrow{r_{2}}\,\rangle=i\frac{m}{2}\operatorname{sgn}\left( r_{1}-r_{2}\right) \frac{1}{r_{>}} 
{\displaystyle\sum_{l,\,m_{l}}} Y_{l}^{m_{l}}(\Omega_{1})\left[ Y_{l}^{m_{l}}(\Omega_{2})\right] ^{\ast }\left( \frac{r_{<}}{r_{>}}\right) ^{l} \] The remaining sums also can be evaluated in closed form. Combining the generating function for the Legendre polynomials \cite{Arfken2} with the addition theorem for spherical harmonics \cite{Arfken3}, we obtain for any $\left\vert t\right\vert <1$ \begin{align*} \frac{1}{\sqrt{1-2t\cos\gamma+t^{2}}} & = {\displaystyle\sum_{l=0}^{\infty}} P_{l}(\cos\gamma)t^{l}\\ & =4\pi {\displaystyle\sum_{l=0}^{\infty}} {\displaystyle\sum_{\,m_{l}=-l}^{l}} Y_{l}^{m_{l}}(\Omega_{1})\left[ Y_{l}^{m_{l}}(\Omega_{2})\right] ^{\ast }\frac{t^{l}}{2l+1}, \end{align*} from which it follows that \begin{align*} 2\pi {\displaystyle\sum_{l=0}^{\infty}} {\displaystyle\sum_{\,m_{l}=-l}^{l}} Y_{l}^{m_{l}}(\Omega_{1})\left[ Y_{l}^{m_{l}}(\Omega_{2})\right] ^{\ast }t^{l} & =t^{1/2}\frac{\partial}{\partial t}\left( \frac{t^{1/2}} {\sqrt{1-2t\cos\gamma+t^{2}}}\right) \\ & =\frac{1-t^{2}}{2\left( 1-2t\cos\gamma+t^{2}\right) ^{3/2}} \end{align*} Finally, identifying $t$ with $r_{<}/r_{>}$ gives \begin{align} \langle\,\overrightarrow{r_{1}}\,|\widehat{T}_{3d-free} |\,\overrightarrow{r_{2}}\,\rangle & =i\frac{m}{2}\operatorname{sgn}\left( r_{1}-r_{2}\right) \frac{1}{4\pi}\frac{r_{>}^{2}-r_{<}^{2}}{\left( r_{>} ^{2}-2r_{<}r_{>}\cos\gamma+r_{<}^{2}\right) ^{3/2}}\nonumber\\ & =\frac{im}{8\pi}\frac{\overrightarrow{r_{1}}\cdot\overrightarrow{r_{1} }-\overrightarrow{r_{2}}\cdot\overrightarrow{r_{2}}}{\,\left\vert \overrightarrow{r_{1}}-\overrightarrow{r_{2}}\right\vert ^{3}}=\frac{im}{8\pi }\frac{\left( \overrightarrow{r_{1}}-\overrightarrow{r_{2}}\right) }{\,\left\vert \overrightarrow{r_{1}}-\overrightarrow{r_{2}}\right\vert ^{3} }\cdot\left( \overrightarrow{r_{1}}+\overrightarrow{r_{2}}\right) \label{T-3d-free} \end{align} The vector form for these matrix elements is pleasingly compact, and frees the result from the spherical coordinates adopted for the 
computation. It is apparent that Eq.(\ref{T-3d-free}) specifies matrix elements of a Hermitian operator, i.e., $\langle\,\overrightarrow{r_{1}}\,|\widehat{T} _{3d-free}|\,\overrightarrow{r_{2}}\,\rangle=\langle\,\overrightarrow{r_{2} }\,|\widehat{T}_{3d-free}|\,\overrightarrow{r_{1}}\,\rangle^{\ast}$ for $\overrightarrow{r_{1}}\neq\overrightarrow{r_{2}}$. That these matrix elements also specify an operator that is canonically conjugate to the free-particle Hamiltonian is confirmed in Appendix B. Thus, \textit{a canonical time operator for a free particle in three dimensions exists, with coordinate-space matrix elements given by Eq.(\ref{T-3d-free})}. \section{Summary and Conclusions} Contrary to conventional wisdom, we contend that [event] time is a legitimate observable, and fits within the framework of standard quantum theory if we extend the latter to include POVM's -- and not just self-adjoint operators -- for representing observables. This modest change in emphasis places the focus squarely on probability amplitudes, in keeping with the seemingly evident fact that [event] time statistics can be generated empirically for virtually any quantum system. As with every other observable, we show these [event] time statistics derive from wave functions expressed in a suitable basis (the time basis), which is complete for the representation of any physical state. We refer to this basis as a \textit{timeline}, or \textit{quantum history}, with elements labeled by a continuous variable we call the \textit{system time}. While time states are typically \textit{not orthogonal}, they do lead to wave functions and statistics that are \textit{covariant} (time-translation invariant), and probabilities that add to unity. We propose a recipe for calculating wave functions in the time basis from those in the spectral basis. This recipe is dictated solely by the demands of covariance and completeness, and applies to virtually any Hamiltonian system. 
The phase ambiguity inherent in the stationary states translates here into a freedom to construct time statistics pertinent to different kinds of events. The leap from time states to a time operator is non-trivial, involving additional assumptions that are not always met. Indeed, it is the nature of time statistics that they need not admit a well-defined mean, or variance. Time operators -- when they exist -- are system specific, useful for calculating moments of the [event] time distribution in those instances where said moments can be shown to converge. Interestingly, we find that time operators for periodic systems are never canonical to the Hamiltonian, but canonical time operators can and do arise in [aperiodic] systems with a vanishing point spectrum (no isolated levels). As examples of these general principles, we have examined several systems (particle in free-fall, free particle in one dimension) for which results have been reported previously in the literature. Our objective has been to illustrate how these diverse results follow from the unified approach developed here. We also have gone beyond the familiar and applied that same approach to the free particle in three dimensions. To the best of our knowledge, results for the latter have never before appeared. Most importantly, they confirm that the notion of an \textit{arrival time} -- first encountered in the one-dimensional case -- extends to three dimensions, complete with an accompanying canonical time operator. Possibilities for future investigations abound. For instance, how to generate correct arrival-time statistics for a particle scattering from even the simplest one-dimensional barrier remains a subject of controversy \cite{Baute}; we expect that discussion -- and numerous others -- to be informed by the results presented here. 
\appendix{} \section{Time Operators and The Integrals $I_{l}\left( r_{1},r_{2}\right) $} In this Appendix we investigate the principal value integrals that arise in the construction of a canonical time operator for free particles: \begin{equation} I_{l}\left( r_{1},r_{2}\right) \equiv P {\displaystyle\int\limits_{-\infty}^{\infty}} \tau\,\Xi_{\tau}^{l}(r_{1})\left[ \Xi_{\tau}^{l}(r_{2})\right] ^{\ast }\,d\tau\equiv\lim_{\tau_{R}\rightarrow\infty} {\displaystyle\int\limits_{-\tau_{R}}^{+\tau_{R}}} \tau\,\Xi_{\tau}^{l}(r_{1})\left[ \Xi_{\tau}^{l}(r_{2})\right] ^{\ast }\,d\tau \end{equation} Here $\Xi_{\tau}^{l}(r)$ is the timeline wavefunction in the $l^{th}$ angular momentum subspace, given by Eq.(\ref{waves-BesselJ}). Inspection of Eqs.(\ref{waves-free-o}) and (\ref{waves-free-e}) shows that the $l=-1$ and $l=0$ integrals also appear in the context of the time operator for a free particle in one dimension. Our objective here is to establish the existence of these integrals, and obtain closed-form expressions suitable for their evaluation. 
The relation $\Xi_{-\tau}^{l}(r)=\left[ \Xi_{\tau}^{l}(r)\right] ^{\ast}$ can be used to show that $I_{l}\left( r_{1},r_{2}\right) $ is purely imaginary, as well as antisymmetric under the interchange $r_{1} \longleftrightarrow r_{2}$, properties that can be used to reduce the integral to the half-axis $\tau\geq0$: \begin{equation} I_{l}\left( r_{1},r_{2}\right) =2i\,\Im m\left[ \lim_{\tau_{R} \rightarrow\infty} {\displaystyle\int\limits_{0}^{\tau_{R}}} \tau\,\Xi_{\tau}^{l}(r_{1})\left[ \Xi_{\tau}^{l}(r_{2})\right] ^{\ast }\,d\tau\right] \end{equation} Substituting from Eq.(\ref{waves-BesselJ}), this becomes (recall $2\alpha\equiv l-1/2$) \[ I_{l}\left( r_{1},r_{2}\right) =i\frac{m\sqrt{r_{1}r_{2}}}{2}\left( I_{1}+I_{2}\right) \] where \begin{align*} I_{1} & \equiv\lim_{s_{R}\rightarrow0^{+}} {\displaystyle\int\limits_{s_{R}}^{\infty}} \sin\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) \left[ J_{\alpha +1}\left( s\,r_{1}^{2}\right) J_{\alpha+1}\left( s\,r_{2}^{2}\right) +J_{\alpha}\left( s\,r_{1}^{2}\right) J_{\alpha}\left( s\,r_{2}^{2}\right) \right] \,ds\\ I_{2} & \equiv\lim_{s_{R}\rightarrow0^{+}} {\displaystyle\int\limits_{s_{R}}^{\infty}} \cos\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) \left[ J_{\alpha +1}\left( s\,r_{1}^{2}\right) J_{\alpha}\left( s\,r_{2}^{2}\right) -J_{\alpha}\left( s\,r_{1}^{2}\right) J_{\alpha+1}\left( s\,r_{2} ^{2}\right) \right] \,ds \end{align*} The small-argument behavior of $J_{\alpha}$ ensures that both integrals exist in the indicated limits provided $\alpha>-1$: accordingly, explicit reference to the limits will be omitted from subsequent expressions. Our next goal is to relate $I_{2}$ to $I_{1}$. 
To that end we define the related (and simpler) integrals $\widetilde{I}_{1,2}$ by \begin{align*} \widetilde{I}_{1}^{\alpha}\left( r_{1},r_{2}\right) & \equiv {\displaystyle\int\limits_{0}^{\infty}} \sin\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) J_{\alpha}\left( s\,r_{1}^{2}\right) J_{\alpha}\left( s\,r_{2}^{2}\right) \,ds\\ \widetilde{I}_{2}^{\alpha}\left( r_{1},r_{2}\right) & \equiv {\displaystyle\int\limits_{0}^{\infty}} \cos\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) J_{\alpha+1}\left( s\,r_{1}^{2}\right) J_{\alpha}\left( s\,r_{2}^{2}\right) \,ds \end{align*} Then $I_{1}\equiv\widetilde{I}_{1}^{\alpha+1}\left( r_{1},r_{2}\right) +\widetilde{I}_{1}^{\alpha}\left( r_{1},r_{2}\right) $ and $I_{2} \equiv\widetilde{I}_{2}^{\alpha}\left( r_{1},r_{2}\right) -\widetilde{I} _{2}^{\alpha}\left( r_{2},r_{1}\right) $. Integrating $\widetilde{I}_{2}$ once by parts (the out-integrated part vanishes for $\alpha>-1$) and using the Bessel recursion relation \cite{Bateman} \[ J_{\nu-1}\left( z\right) -J_{\nu+1}\left( z\right) =2\frac{dJ_{\nu}\left( z\right) }{dz} \] results in \begin{align*} \widetilde{I}_{2}^{\alpha}\left( r_{1},r_{2}\right) & =-\frac{1}{r_{1} ^{2}-r_{2}^{2}} {\displaystyle\int\limits_{0}^{\infty}} \sin\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) \frac{\partial }{\partial s}\left[ J_{\alpha+1}\left( s\,r_{1}^{2}\right) J_{\alpha }\left( s\,r_{2}^{2}\right) \right] \,ds\\ & =\frac{r_{2}^{2}}{2\left( r_{1}^{2}-r_{2}^{2}\right) } {\displaystyle\int\limits_{0}^{\infty}} \sin\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) J_{\alpha+1}\left( s\,r_{1}^{2}\right) \left[ J_{\alpha+1}\left( s\,r_{2}^{2}\right) -J_{\alpha-1}\left( s\,r_{2}^{2}\right) \right] \,ds\\ & +\frac{r_{1}^{2}}{2\left( r_{1}^{2}-r_{2}^{2}\right) } {\displaystyle\int\limits_{0}^{\infty}} \sin\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) J_{\alpha}\left( s\,r_{2}^{2}\right) \left[ J_{\alpha+2}\left( s\,r_{1}^{2}\right) -J_{\alpha}\left( s\,r_{1}^{2}\right) \right] \,ds 
\end{align*} In terms of yet a third integral $\widetilde{I}_{3}$ defined as \begin{align*} \widetilde{I}_{3}^{\alpha}\left( r_{1},r_{2}\right) & \equiv {\displaystyle\int\limits_{0}^{\infty}} \sin\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) J_{\alpha+1}\left( s\,r_{1}^{2}\right) J_{\alpha-1}\left( s\,r_{2}^{2}\right) \,ds\\ & =\frac{2\alpha}{r_{2}^{2}} {\displaystyle\int\limits_{0}^{\infty}} \sin\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) J_{\alpha+1}\left( s\,r_{1}^{2}\right) J_{\alpha}\left( s\,r_{2}^{2}\right) \,\frac{ds} {s}-\widetilde{I}_{1}^{\alpha+1}\left( r_{1},r_{2}\right) \\ & =\frac{2\alpha}{r_{1}^{2}} {\displaystyle\int\limits_{0}^{\infty}} \sin\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) J_{\alpha}\left( s\,r_{1}^{2}\right) J_{\alpha-1}\left( s\,r_{2}^{2}\right) \,\frac{ds} {s}-\widetilde{I}_{1}^{\alpha-1}\left( r_{1},r_{2}\right) \end{align*} we can write the result for $\widetilde{I}_{2}$ compactly as \begin{align*} \widetilde{I}_{2}^{\alpha}\left( r_{1},r_{2}\right) & =\frac{1}{2\left( r_{1}^{2}-r_{2}^{2}\right) }\left[ -r_{1}^{2}\widetilde{I}_{1}^{\alpha }\left( r_{1},r_{2}\right) +r_{2}^{2}\widetilde{I}_{1}^{\alpha+1}\left( r_{1},r_{2}\right) -r_{2}^{2}\widetilde{I}_{3}^{\alpha}\left( r_{1} ,r_{2}\right) +r_{1}^{2}\widetilde{I}_{3}^{\alpha+1}\left( r_{1} ,r_{2}\right) \right] \\ & =\frac{1}{\left( r_{1}^{2}-r_{2}^{2}\right) }\left[ -r_{1} ^{2}\widetilde{I}_{1}^{\alpha}\left( r_{1},r_{2}\right) +r_{2} ^{2}\widetilde{I}_{1}^{\alpha+1}\left( r_{1},r_{2}\right) + {\displaystyle\int\limits_{0}^{\infty}} \sin\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) J_{\alpha+1}\left( s\,r_{1}^{2}\right) J_{\alpha}\left( s\,r_{2}^{2}\right) \,\frac{ds} {s}\right] \end{align*} Then \begin{align*} I_{2} & =\frac{1}{\left( r_{1}^{2}-r_{2}^{2}\right) }\left[ -r_{1} ^{2}\widetilde{I}_{1}^{\alpha}\left( r_{1},r_{2}\right) +r_{2} ^{2}\widetilde{I}_{1}^{\alpha+1}\left( r_{1},r_{2}\right) +r_{2} ^{2}\widetilde{I}_{1}^{\alpha}\left( r_{1},r_{2}\right) 
-r_{1} ^{2}\widetilde{I}_{1}^{\alpha+1}\left( r_{1},r_{2}\right) \right] \\ & +\frac{1}{\left( r_{1}^{2}-r_{2}^{2}\right) } {\displaystyle\int\limits_{0}^{\infty}} \sin\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) \left[ J_{\alpha +1}\left( s\,r_{1}^{2}\right) J_{\alpha}\left( s\,r_{2}^{2}\right) -J_{\alpha+1}\left( s\,r_{2}^{2}\right) J_{\alpha}\left( s\,r_{1} ^{2}\right) \right] \,\frac{ds}{s}\\ & =-I_{1}+\frac{1}{\left( r_{1}^{2}-r_{2}^{2}\right) } {\displaystyle\int\limits_{0}^{\infty}} \sin\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) \left[ J_{\alpha +1}\left( s\,r_{1}^{2}\right) J_{\alpha}\left( s\,r_{2}^{2}\right) -J_{\alpha+1}\left( s\,r_{2}^{2}\right) J_{\alpha}\left( s\,r_{1} ^{2}\right) \right] \,\frac{ds}{s} \end{align*} and so \begin{equation} I_{l}\left( r_{1},r_{2}\right) =i\frac{m\sqrt{r_{1}r_{2}}}{2\left( r_{1}^{2}-r_{2}^{2}\right) } {\displaystyle\int\limits_{0}^{\infty}} \sin\left( s\left[ r_{1}^{2}-r_{2}^{2}\right] \right) \left[ J_{\alpha +1}\left( s\,r_{1}^{2}\right) J_{\alpha}\left( s\,r_{2}^{2}\right) -J_{\alpha+1}\left( s\,r_{2}^{2}\right) J_{\alpha}\left( s\,r_{1} ^{2}\right) \right] \,\frac{ds}{s} \label{integral-reduced} \end{equation} Finally, from integral tables \cite{Gradshteyn3} we have for $a,b>0$ and $\alpha>-1$: \begin{align*} {\displaystyle\int\limits_{0}^{\infty}} J_{\alpha+1}\left( ax\right) J_{\alpha}\left( bx\right) \sin\left( cx\right) \,\frac{dx}{x} & =0\text{ \ \ \ for }0<c<b-a\\ & =a^{-\alpha-1}b^{\alpha}c\text{ \ \ for }0<c<a-b \end{align*} This result is applied separately to each integral in Eq.(\ref{integral-reduced}). 
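The closed form that results (already quoted in the main text) can be spot-checked by brute-force quadrature of Eq.(\ref{integral-reduced}): the integrand decays like $1/s^{2}$, so a truncated trapezoidal sum converges absolutely. A minimal sketch (with $m=1$ and arbitrary test values $l=1$, $r_{1}=1.2$, $r_{2}=0.8$; the truncation point and grid spacing are ad hoc):

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import trapezoid

# Spot-check of I_l(r1, r2) by direct quadrature of the reduced integral.
m, l = 1.0, 1
alpha = (l - 0.5) / 2.0          # 2*alpha = l - 1/2
r1, r2 = 1.2, 0.8
a, b, c = r1**2, r2**2, r1**2 - r2**2

s = np.linspace(1e-8, 1500.0, 1500001)
f = np.sin(c*s) * (jv(alpha+1, a*s) * jv(alpha, b*s)
                   - jv(alpha+1, b*s) * jv(alpha, a*s)) / s
I_num = 1j * m * np.sqrt(r1*r2) / (2.0*c) * trapezoid(f, s)

# Expected value i(m/2) sgn(r1-r2) (r</r>)^l / r>
rg, rl_ = max(r1, r2), min(r1, r2)
I_exact = 1j * (m/2.0) * np.sign(r1 - r2) * (rl_/rg)**l / rg
rel_err = abs(I_num - I_exact) / abs(I_exact)
```

Note that here $c=r_{1}^{2}-r_{2}^{2}$ sits exactly at the edge of the tabulated regions; since the integral is absolutely convergent it is continuous in $c$, so the boundary value coincides with the limit from inside each region, which is what the quadrature confirms.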
In terms of the smaller ($r_{<}$) and larger ($r_{>}$) of its two arguments, the final form for $I_{l}\left( r_{1} ,r_{2}\right) $ can be written most compactly as \begin{equation} I_{l}\left( r_{1},r_{2}\right) =i\frac{m}{2}\operatorname{sgn}\left( r_{1}-r_{2}\right) \frac{1}{r_{>}}\left( \frac{r_{<}}{r_{>}}\right) ^{l} \label{integral-result} \end{equation} \section{Canonical Property of the Free-Particle Time Operator in Three Dimensions} In this Appendix we establish that the time operator of Eq.(\ref{T-3d-free}) is canonical to the free-particle Hamiltonian $\widehat{H}=\widehat{p}^{2} /2m$. To that end we examine the coordinate-space matrix elements of the commutator \begin{align} \langle\,\overrightarrow{r_{1}}\,|\left[ \widehat{T}_{3d-free},\widehat{H} \right] |\,\overrightarrow{r_{2}}\,\rangle & =\frac{1}{2m}\langle \,\overrightarrow{r_{1}}\,|\widehat{T}_{3d-free}\widehat{p}^{2}-\widehat{p} ^{2}\widehat{T}_{3d-free}|\,\overrightarrow{r_{2}}\,\rangle\nonumber\\ & =\frac{1}{2m}\left( \nabla_{1}^{2}-\nabla_{2}^{2}\right) \langle \,\overrightarrow{r_{1}}\,|\widehat{T}_{3d-free}|\,\overrightarrow{r_{2} }\,\rangle\label{laplacian12} \end{align} For evaluating the Laplacians in this expression, we apply the vector calculus identity \cite{Griffiths} \begin{equation} \nabla\left( \overrightarrow{A}\cdot\overrightarrow{B}\right) =\overrightarrow{A}\times\left( \nabla\times\overrightarrow{B}\right) +\overrightarrow{B}\times\left( \nabla\times\overrightarrow{A}\right) +\left( \overrightarrow{A}\cdot\nabla\right) \overrightarrow{B}+\left( \overrightarrow{B}\cdot\nabla\right) \overrightarrow{A} \label{grad-dot} \end{equation} with the identifications \[ \overrightarrow{A}=\frac{\left( \overrightarrow{r_{1}}-\overrightarrow{r_{2} }\right) }{\,\left\vert \overrightarrow{r_{1}}-\overrightarrow{r_{2} }\right\vert ^{3}};\quad\overrightarrow{B}=\overrightarrow{r_{1} }+\overrightarrow{r_{2}} \] Noting that $\overrightarrow{A}$ is essentially the electrostatic field of a point 
charge, we have \begin{align*} \nabla_{1}\cdot\overrightarrow{A} & =4\pi\,\delta\left( \overrightarrow{r_{1}}-\overrightarrow{r_{2}}\right) =-\nabla_{2} \cdot\overrightarrow{A}\\ \nabla_{1}\times\overrightarrow{A} & =0=\nabla_{2}\times\overrightarrow{A} \end{align*} $\overrightarrow{B}$ also is curl-free, so the first two terms in Eq.(\ref{grad-dot}) are zero. Further, with $\overrightarrow{r_{1,2}}=\left( x_{1,2},\,y_{1,2},\,z_{1,2}\right) $, the simplicity of $\overrightarrow{B}$ allows us to write \begin{align*} \left( \overrightarrow{A}\cdot\nabla_{1}\right) B_{x} & =\left( A_{x}\frac{\partial}{\partial x_{1}}+A_{y}\frac{\partial}{\partial y_{1} }+A_{z}\frac{\partial}{\partial z_{1}}\right) \left( x_{1}+x_{2}\right) \\ & =A_{x},\text{ \ etc.} \end{align*} and \begin{align*} \frac{\partial}{\partial x_{1}}\left[ \left( \overrightarrow{B}\cdot \nabla_{1}\right) A_{x}\right] & =\frac{\partial}{\partial x_{1}}\left[ \left( x_{1}+x_{2}\right) \frac{\partial A_{x}}{\partial x_{1}}+\left( y_{1}+y_{2}\right) \frac{\partial A_{x}}{\partial y_{1}}+\left( z_{1} +z_{2}\right) \frac{\partial A_{x}}{\partial z_{1}}\right] \\ & =\left( \overrightarrow{B}\cdot\nabla_{1}\right) \frac{\partial A_{x} }{\partial x_{1}}+\frac{\partial A_{x}}{\partial x_{1}},\text{ \ etc.} \end{align*} Then \begin{equation} \nabla_{1}^{2}\left( \overrightarrow{A}\cdot\overrightarrow{B}\right) =\nabla_{1}\cdot\nabla_{1}\left( \overrightarrow{A}\cdot\overrightarrow{B} \right) =2\left( \nabla_{1}\cdot\overrightarrow{A}\right) +\left( \overrightarrow{B}\cdot\nabla_{1}\right) \left( \nabla_{1}\cdot \overrightarrow{A}\right) \label{laplacian1} \end{equation} Replacing $\nabla_{1}$ with $\nabla_{2}$ in this last expression generates an equally valid result, but since $\overrightarrow{A}$ depends only on $\overrightarrow{r_{1}}-\overrightarrow{r_{2}}$, we obtain \begin{equation} \nabla_{2}^{2}\left( \overrightarrow{A}\cdot\overrightarrow{B}\right) =-2\left( \nabla_{1}\cdot\overrightarrow{A}\right) +\left( 
\overrightarrow{B}\cdot\nabla_{1}\right) \left( \nabla_{1}\cdot \overrightarrow{A}\right) \label{laplacian2} \end{equation} and finally, \begin{align} \langle\,\overrightarrow{r_{1}}\,|\left[ \widehat{T}_{3d-free},\widehat{H} \right] |\,\overrightarrow{r_{2}}\,\rangle & =i\frac{1}{16\pi}\left( \nabla_{1}^{2}-\nabla_{2}^{2}\right) \left( \overrightarrow{A} \cdot\overrightarrow{B}\right) \nonumber\\ & =i\frac{1}{4\pi}\,\left( \nabla_{1}\cdot\overrightarrow{A}\right) =i\,\delta\left( \overrightarrow{r_{1}}-\overrightarrow{r_{2}}\right) \label{commutator-3d-free} \end{align} We conclude that $\left[ \widehat{T}_{3d-free},\widehat{H}\right] =i$, i.e., that $\widehat{T}_{3d-free}$ is canonical to the free-particle Hamiltonian $\widehat{H}$. \end{document}
Evaluate $\left\lfloor\left|-\frac{23}{9}\right|\right\rfloor$. First evaluating the absolute value, $\left|-\frac{23}{9}\right|=\frac{23}{9}$. The greatest integer less than or equal to $\frac{23}{9}$ is then $\boxed{2}$.
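As a quick sanity check (a one-liner of our own, not part of the original solution), Python's standard `math` module reproduces this result:

```python
import math

# |-23/9| = 23/9 ≈ 2.56, and the floor of 2.56 is 2.
value = math.floor(abs(-23 / 9))
print(value)  # → 2
```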
In evolutionary game theory and related fields, one often needs to visualize the dynamics of three-dimensional systems, e.g. competition between three strategies $x_1$, $x_2$ and $x_3$ for which $x_1 + x_2 + x_3 = 1$. This is most conveniently done on a 2-simplex (ternary plot, de Finetti diagram), and the following code snippet defines a minimal way of visualizing data on a 2-simplex using R base graphics. The function takes a minimum of four arguments: x and y are vectors holding the $x_1$ and $x_2$ values (it is not necessary to input the remaining, third value, as $x_3 = 1 - x_1 - x_2$); label is a vector of length 3 giving labels for the vertices of the simplex. Additional standard graphical parameters can be specified, e.g. type = "l", lwd = 2, col = "red" will request a line plot with line width 2 and a red colour.
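Since the original R snippet is not reproduced here, the core of any such function is the barycentric-to-Cartesian mapping. The sketch below (in Python rather than R, with a function name of our own choosing) shows that mapping, placing the three simplex vertices at $(0,0)$, $(1,0)$ and $(1/2, \sqrt{3}/2)$:

```python
import math

def simplex_xy(x1, x2):
    """Map a point (x1, x2, x3) on the 2-simplex (with x3 = 1 - x1 - x2)
    to Cartesian plotting coordinates. Vertices: x1 -> (0, 0),
    x2 -> (1, 0), x3 -> (0.5, sqrt(3)/2)."""
    x3 = 1.0 - x1 - x2
    # Barycentric interpolation of the three vertex positions.
    px = x2 * 1.0 + x3 * 0.5
    py = x3 * (math.sqrt(3) / 2)
    return px, py

# The pure strategies map to the three corners of the triangle.
print(simplex_xy(1, 0))  # vertex for x1 -> (0.0, 0.0)
print(simplex_xy(0, 1))  # vertex for x2 -> (1.0, 0.0)
print(simplex_xy(0, 0))  # vertex for x3 -> (0.5, sqrt(3)/2 ≈ 0.866)
```

The resulting (px, py) pairs can then be passed to any line- or point-plotting routine.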
\begin{document} \date{} \title{Sorting with Asymmetric Read and Write Costs\footnote{The original version of this paper is published in SPAA 2015. In this version we fixed some minor typos and errors, and clarified some definitions.}} \author{Guy E. Blelloch\\Carnegie Mellon University\\[email protected] \and Jeremy T. Fineman\\Georgetown University\\[email protected] \and Phillip B. Gibbons\\Carnegie Mellon University\\ [email protected] \and Yan Gu\\Carnegie Mellon University\\ [email protected] \and Julian Shun\\Carnegie Mellon University\\ [email protected]} \maketitle \begin{abstract} Emerging memory technologies have a significant gap between the cost, both in time and in energy, of writing to memory versus reading from memory. In this paper we present models and algorithms that account for this difference, with a focus on write-efficient sorting algorithms. First, we consider the PRAM model with asymmetric write cost, and show that sorting can be performed in $O(n)$ writes, $O(n \log n)$ reads, and logarithmic depth (parallel time). Next, we consider a variant of the External Memory (EM) model that charges $\omega > 1$ for writing a block of size $B$ to the secondary memory, and present variants of three EM sorting algorithms (multi-way mergesort, sample sort, and heapsort using buffer trees) that asymptotically reduce the number of writes over the original algorithms, and perform roughly $\omega$ block reads for every block write.
Adapting prior bounds for work-stealing and parallel-depth-first schedulers to the asymmetric setting, these yield parallel cache complexity bounds for machines with private caches or with a shared cache, respectively. \end{abstract} \section{Introduction}\label{sec:intro} Emerging nonvolatile/persistent memory (NVM) technologies such as Phase-Change Memory (PCM), Spin-Torque Transfer Magnetic RAM (STT-RAM), and Memristor-based Resistive RAM (ReRAM) offer the promise of significantly lower energy and higher density (bits per area) than DRAM. With byte-addressability and read latencies approaching or improving on DRAM speeds, these NVM technologies are projected to become the dominant memory within the decade~\cite{Meena14,Yole13}, as manufacturing improves and costs decrease. Although these NVMs could be viewed as just a layer in the memory hierarchy that provides persistence, there is one important distinction: \emph{Writes are significantly more costly than reads,} suffering from higher latency, lower per-chip bandwidth, higher energy costs, and endurance problems (a cell wears out after $10^{8}$--$10^{12}$ writes~\cite{Meena14}). Thus, unlike DRAM, there is a significant (often an order of magnitude or more) asymmetry between read and write costs~\cite{Akel11,Athanassoulis12, Dong09,Dong08,ibm-pcm14b,Kim14,Qureshi12,Xu11}, motivating the study of {\em write-efficient} ({\em write-limited}, {\em write-avoiding}) algorithms, which reduce the number of writes relative to existing algorithms. \myparagraph{Related Work} While read-write asymmetry in NVMs has been the focus of many systems efforts~\cite{ChoL09,LeeIMB09,yang:iscas07,ZhouZYZ09,ZWT13}, there have been very few papers addressing this from the algorithmic perspective. Reducing the number of writes has long been a goal in disk arrays, distributed systems, cache-coherent multiprocessors, and the like, but that work has not focused on NVMs and the solutions are not suitable for their properties. 
Several papers~\cite{BT06,Eppstein14,Gal05,nath:vldbj10,ParkS09,Viglas14} have looked at read-write asymmetries in the context of NAND flash memory. This work has focused on (i) the fact that in NAND flash chips, bits can only be cleared by incurring the overhead of erasing a large block of memory, and/or (ii) avoiding individual cell wear out. Eppstein et al.~\cite{Eppstein14}, for example, presented a novel cuckoo hashing algorithm that chooses where to insert/reinsert each item based on the number of writes to date for the cells it hashes to. Emerging NVMs, in contrast, do not suffer from (i) because they can write arbitrary bytes in-place. As for (ii), we choose not to focus on wear out (beyond reducing the total number of writes to all of memory) because system software (e.g., the garbage collector, virtual memory manager, or virtual machine hypervisor) can readily balance application writes across the physical memory over the long time horizons (many orders of magnitude longer than NAND Flash) before an individual cell would wear out from too many writes to it. A few prior works~\cite{Chen11,Viglas14,Viglas12} have looked at algorithms for asymmetric read-write costs in emerging NVMs, in the context of databases. Chen et al.~\cite{Chen11} presented analytical formulas for PCM latency and energy, as well as algorithms for B-trees and hash joins that are tuned for PCM. For example, their B-tree variant does not sort the keys in a leaf node nor repack a leaf after a deleted key, thereby avoiding the write cost of sorting and repacking, at the expense of additional reads when searching. Similarly, Viglas~\cite{Viglas12} traded off fewer writes for additional reads by rebalancing a B$^+$-tree only if the cost of rebalancing has been amortized. Viglas~\cite{Viglas14} presented several ``write-limited'' sorting and join algorithms within the context of database query processing. 
\myparagraph{Our Results} In this paper, we seek to systematically study algorithms under asymmetric read and write costs. We consider natural extensions to the RAM/PRAM models, to the External Memory model, and to the Ideal-Cache model to incorporate an integer cost, $\omega > 1$, for writes relative to reads. We focus primarily on sorting algorithms, given their fundamental role in computing and as building blocks for other problems, especially in the External Memory and Parallel settings---but we also consider additional problems. We first observe that in the RAM model, it is well known that sorting by inserting each key into a balanced search tree requires only $O(n)$ writes with no increase in reads ($O(n \log n)$). Applying this idea to a carefully-tuned sort for the asymmetric CRCW PRAM yields a parallel sort with $O(n)$ writes, $O(n \log n)$ reads and $O(\omega \log n)$ depth (with high probability\footnote{\emph{With high probability (w.h.p.)} means with probability $1 - n^{-c}$, for a constant $c$.}). Next, we consider an Asymmetric External Memory (AEM) model, which has a small primary memory (cache) of size $M$ and transfers data in blocks of size $B$ to (at a cost of $\omega$) and from (at unit cost) an unbounded external memory. We show that three asymptotically-optimal EM sorting algorithms can each be adapted to the AEM with reduced write costs. First, following~\cite{ParkS09,Viglas14}, we adapt multi-way mergesort by merging $O(\omega) M/B$ sorted runs at a time (instead of $M/B$ as in the original EM version). This change saves writes by reducing the depth of the recursion.
Each merge makes $O(\omega)$ passes over the runs, using an in-memory heap to extract values for the output run for the pass. Our algorithm and analysis are somewhat simpler than~\cite{ParkS09,Viglas14}. Second, we present an AEM sample sort algorithm that uses $O(\omega) M/B$ splitters at each level of recursion (instead of $M/B$ in the original EM version). Again, the challenge is to both find the splitters and partition using them while incurring only $O(N/B)$ writes across each level of recursion. We also show how this algorithm can be parallelized to run with linear speedup on the Asymmetric Private-Cache model (Section~\ref{sec:prelims}) with $p = n/M$ processors. Finally, our third sorting algorithm is an AEM heapsort using a buffer-tree-based priority queue. Compared to the original EM algorithm, both our buffer-tree nodes and the number of elements stored outside the buffer tree are larger by a factor of $O(\omega)$, which adds nontrivial changes to the data structure. All three sorting algorithms have the same asymptotic complexity on the AEM. Finally, we define an Asymmetric Ideal-Cache model, which is similar to the AEM model in terms of $M$ and $B$ and having asymmetric read/write costs, but uses an asymmetric ideal replacement policy instead of algorithm-specified transfers. We extend important results for the Ideal-Cache model and thread schedulers to the asymmetric case---namely, the Asymmetric Ideal-Cache can be (constant factor) approximated by an asymmetric-LRU cache, and it can be used in conjunction with a work-stealing (parallel-depth-first) scheduler to obtain good parallel cache complexity bounds for machines with private caches (a shared cache, respectively). We use this model to design write-efficient cache-oblivious algorithms for sorting, Fast Fourier Transform, and matrix multiplication.
Our sorting algorithm is adapted from~\cite{BGS10}, and again deals with the challenges of reducing the number of writes. All three algorithms use $\Theta(\omega)$ times more reads than writes and have good parallelism. \section{Preliminaries and Models}\label{sec:prelims} This section presents background material on NVMs and models, as well as new (asymmetric cost) models and results relating models. We first consider models whose parallelism is in the parallel transfer of the data in a larger block, then consider models with parallel processors.
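The asymmetric cost accounting shared by all of these models can be made concrete with a small sketch. The following toy cost counter (our own illustration, not from the paper) charges $\omega$ per write and $1$ per read, as in the Asymmetric RAM model defined below:

```python
class AsymmetricRAM:
    """Toy cost model: reads cost 1, writes cost omega (> 1)."""

    def __init__(self, n, omega):
        self.mem = [0] * n
        self.omega = omega
        self.cost = 0

    def read(self, i):
        self.cost += 1
        return self.mem[i]

    def write(self, i, value):
        self.cost += self.omega
        self.mem[i] = value

ram = AsymmetricRAM(n=4, omega=10)
ram.write(0, 42)   # costs omega = 10
_ = ram.read(0)    # costs 1
print(ram.cost)    # → 11
```

Instrumenting an algorithm with such a counter makes the read/write trade-offs discussed in this paper directly measurable.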
A Memristor ReRAM{} cell uses a 100 $ns$ write pulse duration, and an 8MB Memristor ReRAM{} chip is projected to have reads with 1.7 $ns$ latency and 0.2 $nJ$ energy versus writes with 200 $ns$ latency and 25 $nJ$ energy~\cite{Xu11}---over two orders of magnitude differences in latency and energy. PCM is the most mature of the three technologies, and early generations are already available as I/O devices. A recent paper~\cite{Kim14} reported 6.7 $\mu s$ latency for a 4KB read and 128 $\mu s$ latency for a 4KB write. Another reported that the sector I/O latency and bandwidth for random 512B writes was a factor of 15 worse than for reads~\cite{ibm-pcm14b}. As a future memory/cache replacement, a 512Mb PCM memory chip is projected to have 16 $ns$ byte reads versus 416 $ns$ byte writes, and writes to a 16MB PCM L3 cache are projected to be up to 40 times slower and use 17 times more energy than reads~\cite{Dong09}. While these numbers are speculative and subject to change as the new technologies emerge over time, there seems to be sufficient evidence that writes will be considerably more costly than reads in these NVMs. \myparagraph{Sorting} The sorting problem we consider is the standard comparison-based sorting of $n$ records, each containing a key. We assume the input is in an unsorted array, and the output needs to be placed into a sorted array. Without loss of generality, we assume the keys are unique (a position index can always be added to make them unique). \myparagraph{The Asymmetric RAM model} This is the standard RAM model but with a cost $\omega > 1$ for writes, while reads are still unit cost. \myparagraph{The (Asymmetric) External Memory model} The widely studied External Memory (EM) model~\cite{AggarwalV88} (also called the I/O model, Disk Model, and Disk Access Model) assumes a two-level memory hierarchy with a fixed-size primary memory (cache) of size $M$ and a secondary memory of unbounded size. Both are partitioned into blocks of size $B$.
Standard RAM instructions can be used within the primary memory, and in addition the model has two special \emph{memory transfer} instructions: a \emph{read} transfers (alternatively, copies) an arbitrary block from secondary memory to primary memory, and a \emph{write} transfers an arbitrary block from primary to secondary memory. The I/O complexity of an algorithm is the total number of memory transfers. Sorting $n$ records can be performed in the EM model with I/O complexity \begin{equation} \label{eqn:sortbound} \Theta\left(\frac{n}{B} \log_{\frac{M}{B}} \frac{n}{B}\right) \end{equation} This is both an upper and lower bound~\cite{AggarwalV88}. The upper bound can be achieved with at least three different algorithms, a multi-way mergesort~\cite{AggarwalV88}, a distribution sort~\cite{AggarwalV88}, and a priority-queue (heap) sort based on buffer trees~\cite{Arge03}. The \emph{Asymmetric External Memory} (AEM) model simply adds a parameter $\omega$ to the EM model, and charges this for each write of a block. Reading a block still has unit cost. Throughout the paper, we assume that $M$ and $B$ are measured in terms of the number of data objects. If we are sorting, for example, it is the number of records. We assume that the memory has an extra $O(\log M)$ locations just for storing a stack to compute with. \myparagraph{The (Asymmetric) Ideal-Cache model} The Ideal-Cache model~\cite{Frigo99} is a variant of the EM model. The machine model is still organized in the same way with two memories each partitioned into blocks, but there are no explicit memory transfer instructions. Instead all addressable memory is in the secondary memory, but any subset of up to $M/B$ of the blocks can have a copy resident in the primary memory (cache). Any reference to a resident block is a \emph{cache hit} and is free.
Any reference to a word in a block that is not resident is a \emph{cache miss} and requires a memory transfer from the secondary memory. The cache miss can replace a block in the cache with the loaded block, which might require \emph{evicting} a cache block. The model makes the \emph{tall cache assumption} where $M = \Omega(B^2)$, which is easily met in practice. The I/O or \emph{cache complexity} of an algorithm is the number of cache misses. An optimal (offline) cache eviction policy is assumed---i.e., one that minimizes the I/O complexity. It is well known that the optimal policy can be approximated using the online least recently used (LRU) policy at a cost of at most doubling the number of misses, and doubling the cache size~\cite{Sleator85}. The main purpose of the Ideal-Cache model is for the design of \emph{cache-oblivious algorithms}. These are algorithms that do not use the parameters $M$ and $B$ in their design, but for which one can still derive effective bounds on I/O complexity. This has the advantage that the algorithms work well for any cache sizes on any cache hierarchies. The I/O complexity of cache-oblivious sorting is asymptotically the same as for the EM model. We define the \emph{Asymmetric Ideal-Cache} model by distinguishing reads from writes, as follows. A cache block is \emph{dirty} if the version in the cache has been modified since it was brought into the cache, and \emph{clean} otherwise. When a cache miss evicts a clean block the cost is $1$, but when evicting a dirty block the cost is $1 + \omega$, $1$ for the read and $\omega$ for the write. Again, we assume an ideal offline cache replacement policy---i.e., minimizing the total I/O cost. Under this model we note that the LRU policy is no longer 2-competitive. However, the following variant is competitive within a constant factor. The idea is to separately maintain two equal-sized pools of blocks in the cache (primary memory), a read pool and a write pool. 
When reading a location, (i) if its block is in the read pool we just read the value, (ii) if it is in the write pool we copy the block to the read pool, or (iii) if it is in neither, we read the block from secondary memory into the read pool. In the latter two cases we evict the LRU block from the read pool if it is full, with cost 1. The rules for the write pool are symmetric when writing to a memory location, but the eviction has cost $\omega+1$ because the block is dirty. We call this the read-write LRU policy. This policy is competitive with the optimal offline policy: \begin{lemma} For any sequence $S$ of instructions, if it has cost $Q_I(S)$ on the Asymmetric Ideal-Cache model with cache size $M_I$, then it will have cost \[Q_L(S) \leq \frac{M_L}{M_L - M_I}Q_I(S) + (1 + \omega) M_I/B\] on an asymmetric cache with read-write LRU policy and cache sizes (read and write pools) $M_L$. \end{lemma} \begin{proof} Partition the sequence of instructions into regions that contain memory reads to exactly $M_L/B$ distinct memory blocks each (except perhaps the last). Each region will require at most $M_L/B$ misses under LRU. Each will also require at least $(M_L - M_I)/B$ cache misses on the ideal cache since at most $M_I/B$ blocks can be in the cache at the start of the region. The same argument can be made for writes, but in this case each operation involves evicting a dirty block. The $(1 + \omega) M_I/B$ term accounts for the last region: in the worst case, at the start of the last write region the ideal cache starts with $M_I/B$ blocks which get written to, while the LRU starts with none of those blocks.
The LRU therefore incurs an additional $M_I/B$ write misses, each costing $1 + \omega$ (1 for the load and $\omega$ for the eviction). Note that if the cache starts empty then we do not have to add this term since an equal amount will be saved in the first round. \end{proof} \myparagraph{The Asymmetric PRAM model} In the \emph{Asymmetric PRAM}, the standard PRAM is augmented such that each write costs $\omega$ and all other instructions cost $1$. In this paper we analyze algorithms in terms of work (total cost of the operations) and depth (parallel time using an unbounded number of processors). If we have depth $d(n)$ and separate the work into $w(n)$ writes and $r(n)$ other instructions, then the time on $p$ processors is bounded by: \[T(n,p) = O\left(\frac{\omega w(n) + r(n)}{p} + d(n)\right)\] using Brent's theorem~\cite{JaJa92}. This bound assumes that work can be allocated to processors efficiently. We allow for concurrent reads and writes (CRCW), and for concurrent writes we assume an arbitrary write takes effect. Note that a parallel algorithm that requires $O(D)$ depth in the PRAM model requires $O(\omega D)$ depth in the Asymmetric PRAM model to account for the fact that writes are $\omega$ times more expensive than reads.
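The time bound above is easy to evaluate numerically. The following toy calculator (our own illustration, with hypothetical parameter values and the hidden constant taken as 1) evaluates the Asymmetric PRAM bound $T(n,p)$:

```python
def asymmetric_pram_time(writes, reads, depth, p, omega):
    """Evaluate T(n, p) = (omega * w(n) + r(n)) / p + d(n),
    i.e., the Asymmetric PRAM bound with hidden constant 1."""
    return (omega * writes + reads) / p + depth

# Hypothetical workload: 10 writes, 100 other instructions, depth 5,
# on p = 2 processors with write cost omega = 3.
print(asymmetric_pram_time(writes=10, reads=100, depth=5, p=2, omega=3))  # → 70.0
```

Such a calculator makes it easy to see when the $\omega w(n)$ term dominates and write reduction pays off.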
\myparagraph{The Asymmetric Private-Cache model} In the \emph{Asymmetric Private-Cache} model (a variant of the Private-Cache model~\cite{Acar02,Arge2008}), each processor has its own primary memory of size $M$, and all processors share a secondary memory. We allow concurrent reads but do not use concurrent writes. As in the AEM model, transfers are in blocks of size $B$ and transfers to the shared memory cost $\omega$. \myparagraph{The (Asymmetric) Low-depth Cache-Oblivious Paradigm} The final model that we consider is based on developing low-depth cache-oblivious algorithms~\cite{BGS10}. In this model, algorithms are defined as nested parallel computations based on parallel loops, possibly nested (this is a generalization of a PRAM). The depth of the computation is the longest chain of dependences---i.e., the depth of a sequential strand of computation is its sequential cost, and the depth of a parallel loop is the maximum of the depths of its iterates. The computation has a natural sequential order by converting each parallel loop to a sequential loop. The cache complexity can be analyzed on the Ideal-Cache model under this sequential order. Using known scheduling results, the depth and sequential cache complexity of a computation are sufficient for deriving bounds on parallel cache complexity. In particular, let $D$ be the depth and $Q_1$ be the sequential cache complexity. Then for a $p$-processor shared-memory machine with private caches (each processor has its own cache) using a work-stealing scheduler, the total number of misses $Q_p$ across all processors is at most $Q_1 + O(pDM/B)$ with high probability~\cite{Acar02}. For a $p$-processor shared-memory machine with a shared cache of size $M + p B D$ using a parallel-depth-first (PDF) scheduler, $Q_p \leq Q_1$~\cite{BlGi04}. These bounds can be extended to multi-level hierarchies of private or shared caches, respectively~\cite{BGS10}.
Thus, algorithms with low depth have good parallel cache complexity. Our asymmetric variant of the low-depth cache-oblivious paradigm simply accounts for $\omega$ in the depth and uses the Asymmetric Ideal-Cache model for sequential cache complexity. We observe that the above scheduler bounds readily extend to this asymmetric setting. The $O(pDM/B)$ bound on the additional cache misses under work-stealing arises from an $O(pD)$ bound on the number of steals and the observation that each steal requires the stealer to incur $O(M/B)$ misses to ``warm up'' its cache. Pessimistically, we will charge $2M/B$ writes (and reads) for each steal, because each line may be dirty and need writing back before the stealer can read it into its cache and, once the stealer has completed the stolen work (reached the join corresponding to the fork that spawned the stolen work), the contents of its cache may need to be written back. Therefore for private caches we have $Q_p \leq Q_1 + O(pkDM/B)$. The PDF bounds extend because there are no additional cache misses and hence no additional reads or writes. \section{Sorting on RAM/PRAM}\label{sec:ram-models} The number of writes on an asymmetric RAM can be bounded for a variety of algorithms and data structures using known techniques. For example, there has been significant research on maintaining balanced search trees such that every insertion and deletion only requires a constant number of rotations (see e.g.,~\cite{Ottmann} and references therein).
While the motivation for that work is that for certain data structures rotations can be asymptotically more expensive than visiting a node (e.g., if each node of a tree maintains a secondary set of keys), the results apply directly to improving bounds on the asymmetric RAM. Sorting can be done by inserting $n$ records into a balanced search tree data structure, and then reading them off in order. This requires $O(n \log n)$ reads and $O(n)$ writes, for total cost $O(n(\omega + \log n))$. Similarly, we can maintain priority queues (insert and delete-min) and comparison-based dictionaries (insert, delete and search) in $O(1)$ writes per operation. \begin{algorithm}[!tp] \caption{\textsc{Asymmetric-PRAM Sort}} \label{alg:pramsort} \textbf{Input:} An array of records $A$ of length $n$ \begin{algorithmic}[1] \State {Select a sample $S$ from $A$ independently at random with per-record probability $1/\log n$, and sort the sample.} \State {Use every $(\log n)$-th element in the sorted $S$ as splitters, and for each of the about $n/\log^2n$ buckets defined by the splitters allocate an array of size $c\log^2 n$.} \State {In parallel locate each record's bucket using a binary search on the splitters.} \State {In parallel insert the records into their buckets by repeatedly trying a random position within the associated array and attempting to insert if empty.} \State {Pack out all empty cells in the arrays and concatenate all arrays.
// Step 6 is an optional step used to obtain $O(\omega\log n)$ depth} \State {{\bf For} round $r\leftarrow 1$ to 2 {\bf do}} \Statexi {{\bf for each} array $A'$ generated in previous round} \Statexii {Deterministically select $|A'|^{1/3}-1$ samples as splitters} \Statexii {and apply integer sort on the bucket number to partition} \Statexii {$A'$ into $|A'|^{1/3}$ sub-arrays.} \State {{\bf For each} subarray apply the asymmetric RAM sort.} \State {Return the sorted array.} \end{algorithmic} \end{algorithm} We now consider how to sort on an asymmetric CRCW PRAM (arbitrary write). Algorithm~\ref{alg:pramsort} outlines a sample sort (with over-sampling) that does $O(n\log n)$ reads and $O(n)$ writes and has depth $O(\omega \log n)$. It is similar to other sample sorts~\cite{Blelloch91,Frazer70,JaJa92}. We consider each step in more detail and analyze its cost. Step 1 can use Cole's parallel mergesort~\cite{Cole1988}, requiring $O(n)$ reads and writes w.h.p.~(because the sample is of size $\Theta(n/\log n)$ w.h.p.), and $O(\omega\log n)$ depth. In Step 2, for sufficiently large $c$, w.h.p.~all arrays will have at least twice as many slots as there are records belonging to the associated bucket~\cite{Blelloch91}. The cost of Step 2 is a lower-order term. Step 3 requires $O(n \log n)$ reads, $O(n)$ writes and $O(\omega + \log n)$ depth for the binary searches and writing the resulting bucket numbers. Step 4 is an instance of the so-called placement problem (see~\cite{RR89,Reif99}).
This can be implemented by having each record select a random location within the array associated with its bucket and, if the location is empty, attempting to insert the record there. This is repeated if unsuccessful. Since multiple records might try the same location at the same time, each record needs to check whether it was successfully inserted. The expected number of tries per record is constant. Also, if the records are partitioned into groups of size $\log n$ and processed sequentially within each group and in parallel across groups, then w.h.p.~no group will require more than $O(\log n)$ tries across all of its records~\cite{RR89}. Therefore, w.h.p., the number of reads and writes for this step is $O(n)$ and the depth is $O(\omega\log n)$. Step 5 can be done with a prefix sum, requiring a linear number of reads and writes, and $O(\omega\log n)$ depth. At this point we could apply the asymmetric RAM sort to each bucket, giving a total of $O(n\log n)$ reads, $O(n)$ writes and a depth of $O(\omega\log^2n+\log^2n\log\log n)$ w.h.p.~(the first term for the writes and the second term for the reads). We can reduce the depth to $O(\omega\log n)$ by further deterministically sampling inside each bucket (Step 6) using the following lemma: \begin{lemma}\label{lemma:partition} We can partition $m$ records into $m^{1/3}$ buckets $M_1,\ldots,M_{m^{1/3}}$ such that for any $i$ and $j$ where $i < j$, all records in $M_i$ are less than all records in $M_j$, and for all $i$, $|M_i| < m^{2/3}\log m$.
The process requires $O\mathopen{}\mathclose\bgroup\originalleft(m\log m\aftergroup\egroup\originalright)$ reads, $O\mathopen{}\mathclose\bgroup\originalleft(m\aftergroup\egroup\originalright)$ writes, and $O\mathopen{}\mathclose\bgroup\originalleft(\omega\sqrt{m}\aftergroup\egroup\originalright)$ depth. \end{lemma} \begin{proof} We first split the $m$ records into groups of size $m^{1/3}$ and sort each group with the RAM sort. This takes $O\mathopen{}\mathclose\bgroup\originalleft(m\log m\aftergroup\egroup\originalright)$ reads, $O\mathopen{}\mathclose\bgroup\originalleft(m\aftergroup\egroup\originalright)$ writes and $O\mathopen{}\mathclose\bgroup\originalleft(\omega m^{1/3}\log m\aftergroup\egroup\originalright)$ depth. Then for each sorted group, we place every $\log m$'th record into a sample. Now we sort the sample of size $m/\log m$ using Cole's mergesort, and use the result as splitters to partition the remaining records into buckets. Finally, we place the records into their respective buckets by integer sorting the records based on their bucket number. This can be done with a parallel radix sort in a linear number of reads/writes and $O\mathopen{}\mathclose\bgroup\originalleft(\omega\sqrt{m}\aftergroup\egroup\originalright)$ depth~\cite{RR89}. To show that the largest bucket has size at most $m^{2/3}\log m$, note that in each bucket, we can pick at most $\log m$ consecutive records from each of the $m^{2/3}$ groups without picking a splitter. Otherwise there will be a splitter in the bucket, which is a contradiction. \end{proof} Step 6 applies two iterations of Lemma~\ref{lemma:partition} to each bucket to partition it into sub-buckets. For an initial bucket of size $m$, this process will create sub-buckets of at most size $O\mathopen{}\mathclose\bgroup\originalleft(m^{4/9}\log^{5/3}m\aftergroup\egroup\originalright)$. 
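Under our own naming, the construction in the proof of Lemma~\ref{lemma:partition} can be sketched sequentially in Python (plain sorts stand in for the parallel RAM sort, Cole's mergesort, and the radix-sort placement; the bucket-size bound is not re-verified here):

```python
import math
from bisect import bisect_left

def partition_by_sampling(records):
    """Sketch of the partition lemma: split m records into ~m^(1/3)
    ordered buckets via deterministic sampling."""
    m = len(records)
    b = max(1, round(m ** (1 / 3)))            # target number of buckets
    step = max(1, int(math.log2(m))) if m > 1 else 1
    # 1. Sort each group of size ~m^(1/3) (stand-in for the RAM sort).
    groups = [sorted(records[i:i + b]) for i in range(0, m, b)]
    # 2. Every (log m)-th record of each sorted group joins the sample.
    sample = sorted(x for grp in groups for x in grp[step - 1::step])
    # 3. Sub-select b-1 splitters from the sorted sample.
    stride = max(1, len(sample) // b)
    splitters = sample[stride - 1::stride][:b - 1]
    # 4. Route every record to its bucket (stand-in for the radix sort).
    buckets = [[] for _ in range(len(splitters) + 1)]
    for x in records:
        buckets[bisect_left(splitters, x)].append(x)
    return splitters, buckets
```

In the algorithm, two iterations of this partition applied to a bucket of size $m$ give sub-buckets of size $O(m^{4/9}\log^{5/3}m)$.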
Plugging in $m = O\left(\log^2 n\right)$ gives us that the largest sub-bucket has size $O\left(\log^{8/9} n\left(\log\log n\right)^{5/3}\right)$. We can now apply the RAM sort to each bucket in $O\left(\omega\log n\right)$ depth. This gives us the following theorem. \begin{theorem} Sorting $n$ records can be performed using $O\left(n\log n\right)$ reads, $O\left(n\right)$ writes, and $O\left(\omega\log n\right)$ depth w.h.p.~on the Asymmetric CRCW PRAM. \end{theorem} This implies \[T\left(n\right) = O\left(\frac{n \log n + \omega n}{p} + \omega \log n\right)\] time. Allocating work to processors is outlined above or described in the cited references. In the standard PRAM model, the depth of our algorithm matches that of the best PRAM sorting algorithm~\cite{Cole1988}, although ours is randomized and requires the CRCW model. We leave it open whether the same bounds can be met deterministically and on a PRAM without concurrent writes. \section{External Memory Sorting}\label{sec:em-sorting} In this section, we present sorting algorithms for the Asymmetric External Memory model. We show how the three approaches for EM sorting---mergesort, sample sort, and heapsort (using buffer trees)---can each be adapted to the asymmetric case. In each case we trade off a factor of $k=O(\omega)$ additional reads for a larger branching factor ($kM/B$ instead of $M/B$), hence reducing the number of rounds. It is interesting that the same general approach works for all three types of sorting.
The first algorithm, the mergesort, has been described elsewhere~\cite{ParkS09} although in a different model (their model is specific to NAND flash memory and has different sized blocks for reading and writing, among other differences). Our parameters are therefore different, and our analysis is new. To the best of our knowledge, our other two algorithms are new. \subsection{Mergesort}\label{sec:io-merge-sort} We use an $l$-way mergesort---i.e., a balanced tree of merges with each merge taking in $l$ sorted arrays and outputting one sorted array consisting of all records from the input. We assume that once the input is small enough a different sort (the \emph{base case}) is applied. For $l = M/B$ and a base case of $n \leq M$ (using any sort since it fits in memory), we have the standard EM mergesort. With these settings there are $\log_{M/B} \mathopen{}\mathclose\bgroup\originalleft(n/M\aftergroup\egroup\originalright)$ levels of recursion, plus the base case, each costing $O\mathopen{}\mathclose\bgroup\originalleft({n}/{B}\aftergroup\egroup\originalright)$ memory operations. This gives the well-known overall bound from Equation~\ref{eqn:sortbound}~\cite{AggarwalV88}. To modify the algorithm for the asymmetric case, we increase the branching factor and the base case by a factor of $k$ where $1\leq k\leq \omega$ (i.e. $l = k {}M /B$ and a base case of $n \leq k M$). This means that it is no longer possible to keep the base case in the primary memory, nor one block for each of the input arrays during a merge. The modified algorithm is described in Algorithm~\ref{alg:mergesort}. \begin{algorithm}[!t] \caption{\textsc{AEM-Mergesort}} \label{alg:mergesort} \textbf{Input:} An array $A$ of records of length $n$ \begin{algorithmic}[1] \If {$|A|\leq k M$} ~~~~// base case \State { Sort $A$ using $k |A|/B$ reads and $|A|/B$ writes, and return. 
} \EndIf \State {Evenly partition $A$ into $l=k M/B$ subarrays $A_1,\ldots,A_{l}$ (at the granularity of blocks) and recursively apply \textsc{AEM-Mergesort} to each.} \State {{\bf Initialize Merge.} Initialize an empty output array $O$, a load buffer and an empty store buffer each of size $B$, an empty priority queue $Q$ of size $M$, an array of pointers $I_1, \ldots, I_{l}$ that point to the start of each sorted subarray, $c=0$, and $\mb{lastV}=-\infty$. Associated with $Q$ is $Q.\mb{max}$, which holds the maximum element in $Q$ if $Q$ is full, and $+\infty$ otherwise.} \While {$c<|A|$} \label{line:while} \For {$i\leftarrow 1$ to $l$} \label{line:for} \State {\textsc{Process-Block}($i$).} \EndFor \While {$Q$ is not empty} \label{line:while2} \State {$e\leftarrow Q.\mb{deleteMin}$.} \State {Write $e$ to the store buffer, $c\leftarrow c+1$.} \State {If the store buffer is full, flush it to $O$ and update $\mb{lastV}$.} \If {$e$ is marked as last record in its subarray block} \State {$i = e.\mb{subarray}$.} \State {Increment $I_i$ to point to next block in subarray $i$.} \State {\textsc{Process-Block}($i$).} \EndIf \EndWhile \EndWhile \State {$A\leftarrow O$.~~// Logically, don't actually copy} \Function{Process-Block}{subarray $i$} \State {\textbf{If} $I_i$ points to the end of the subarray \textbf{then} return.} \State {Read the block $I_i$ into the load buffer.} \ForAll {records $e$ in the block} \If {$e.key$ is in the range $\mathopen{}\mathclose\bgroup\originalleft(\mb{lastV},Q.\mb{max}\aftergroup\egroup\originalright)$} \State {If $Q$ is full, eject $Q.\mb{max}$.} \State {Insert $e$ into $Q$, and mark if last record in block. } \EndIf \EndFor \EndFunction \end{algorithmic} \end{algorithm} Each merge proceeds in a sequence of rounds, where a round is one iteration of the {\bf while} loop starting on line~\ref{line:while}. During each round we maintain a priority queue within the primary memory. 
Because operations within the primary memory are free in the model, this can just be kept as a sorted array of records, or even unsorted, although a balanced search tree can be a feasible solution in practice. Each round consists of two phases. The first phase (the {\bf for} loop on line~\ref{line:for}) considers each of the $l$ input subarrays in turn, loading the current block for the subarray into the load buffer, and then inserting each record $e$ from the block into the priority queue if it has not already been written to the output (i.e., $e.key > \mb{lastV}$) and if it is smaller than the maximum in the queue (i.e., $e.key < Q.\mb{max}$). This might bump an existing element out of the queue. Also, if a record is the last in its block then it is marked and tagged with its subarray number. The second phase (the {\bf while} loop starting on line~\ref{line:while2}) writes the priority queue to the output one block at a time. Whenever it reaches a record that is marked as the last in its block, the algorithm increments the pointer to the corresponding subarray and processes the next block in that subarray. We repeat the rounds until all records from all subarrays have been processed. To account for the space for the pointers $I=I_1,\ldots,I_l$, let $\alpha = (\log n)/s$, where $s$ is the size of a record in bits, and $n$ is the total number of records being merged. The cost of the merge is bounded as follows: \begin{lemma}\label{lem:merge} $l = kM/B$ sorted sequences with total size $n$ (stored in $\lceil n/B \rceil$ blocks, and block aligned) can be merged using at most $(k + 1) \lceil n/B \rceil$ reads and $\lceil n/B \rceil$ writes, on the AEM model with primary memory size $M+2B+2\alpha k M/B$. \end{lemma} \begin{proof} Each round (except perhaps the last) outputs at least $M$ records, and hence the total number of rounds is at most $\lceil n/M \rceil$.
The first phase of each round requires at most $k{}M/B$ reads, so the total number of reads across all the first phases is at most $k{}\lceil n/B \rceil$ (the last round can be included in this since it only loads as many blocks as are output). For the second phase, a block is only read when incrementing its pointer, therefore every block is only read once in the second phase. Also every record is only written once. This gives the stated bounds on the number of reads and writes. The space includes the space for the in-memory heap ($M$), the load and store buffers, the pointers $I$ ($\alpha k M/B$), and pointers to maintain the last-record in block information ($\alpha k M/B$). \end{proof} We note that it is also possible to keep $I$ in secondary memory. This will double the number of writes because every time the algorithm moves to a new block in an input array $i$, it would need to write out the updated $I_i$. The increase in reads is small. Also, if one uses a balanced search tree to implement the priority queue $Q$ then the size increases by $< M (\log M)/s$ in order to store the pointers in the tree. For the base case when $n \leq k{}M <\omega{}M$ we use the following lemma. \begin{lemma}\label{lem:selsort} $n \leq k{}M$ records stored in $\lceil n/B \rceil$ blocks can be sorted using at most $k{} \lceil n/B \rceil$ reads and $\lceil n/B \rceil$ writes, on the AEM model with primary memory size $M + B$. \end{lemma} \begin{proof} We sort the elements using a variant of selection sort, scanning the input list a total of at most $k{}$ times. In the first scan, store in memory the $M$ smallest elements seen so far, performing no writes and $\lceil n/B \rceil$ reads. After completing the scan, output all the $\min(M,n)$ elements in sorted order using $\lceil\min(M,n)/B\rceil$ writes. Record the maximum element written so far. 
In each subsequent phase (if not finished), store in memory the $M$ smallest records larger than the maximum written so far, then output as before. The cost is $\lceil n/B \rceil$ reads and $M/B$ writes per phase (except perhaps the last phase). We need one extra block to hold the input. The maximum element written so far can be kept in the $O\left(\log M\right)$ extra locations we have allowed for in the model. This gives the stated bounds because every element is written out once and the input is scanned at most $k$ times. \end{proof} Together we have: \begin{theorem}\label{lem:mergesort} Algorithm~\ref{alg:mergesort} sorts $n$ records using \[R\left(n\right) \leq \left(k+1\right)\left\lceil \frac{n}{B}\right\rceil\left\lceil\log_{\frac{kM}{B}}\left(\frac{n}{B}\right)\right\rceil\] reads, and \[W\left(n\right) \leq \left\lceil\frac{n}{B}\right\rceil\left\lceil\log_{\frac{kM}{B}}\left(\frac{n}{B}\right)\right\rceil\] writes on an AEM with primary memory size $M+2B+2\alpha k M/B$.
\end{theorem} \begin{proof} The number of recursive levels of merging is bounded by $\left\lceil\log_{\frac{kM}{B}}\left(\frac{n}{kM}\right)\right\rceil$, and when we add the additional base round we have $1 + \left\lceil\log_{\frac{kM}{B}}\left(\frac{n}{kM}\right)\right\rceil = \left\lceil\log_{\frac{kM}{B}}\left(\frac{n}{kM}\cdot\frac{kM}{B}\right)\right\rceil = \left\lceil\log_{\frac{kM}{B}}\left(\frac{n}{B}\right)\right\rceil$. The cost for each level is at most $(k + 1) \lceil n/B \rceil$ reads and $\lceil n/B \rceil$ writes (only one block on each level might not be full). \end{proof} \begin{corollary} Assuming that $n$ is large enough that we can ignore the ceiling in the number of merge levels, picking $k$ such that $\displaystyle\frac{k}{\log k}<\frac{\omega}{\log\frac{M}{B}}$ gives an overall improvement in the total I/O complexity (reads and writes). \end{corollary} The details of this computation can be found in Appendix~\ref{app:branchfactor}. In real-world sorting applications, $\left\lceil\log_{\frac{M}{B}}\left(\frac{n}{B}\right)\right\rceil$ is usually a small constant $p$ (between $2$ and $6$).
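The trade-off can be checked numerically. The Python sketch below evaluates the bounds of Theorem~\ref{lem:mergesort} for illustrative (entirely hypothetical) parameter values, with $k=32$ chosen so that $k/\log k < \omega/\log(M/B)$ as the corollary requires:

```python
import math

def mergesort_cost(n, M, B, k, omega):
    """Reads, writes, and total cost (reads + omega*writes) of the
    (k*M/B)-way AEM mergesort, per the theorem (ceilings on levels kept)."""
    levels = math.ceil(math.log2(n / B) / math.log2(k * M / B))
    reads = (k + 1) * (n / B) * levels
    writes = (n / B) * levels
    return reads, writes, reads + omega * writes

# Hypothetical parameters: 2^40 records, M = 2^20, B = 2^10, omega = 100.
n, M, B, omega = 2**40, 2**20, 2**10, 100
r1, w1, c1 = mergesort_cost(n, M, B, k=1, omega=omega)    # classic M/B-way
rk, wk, ck = mergesort_cost(n, M, B, k=32, omega=omega)   # k/log2(k)=6.4 < 10
```

With these (made-up) numbers, the larger branching factor removes one of the three merge levels, cutting writes by a third; the extra reads are outweighed because $\omega = 100$, so the total cost drops.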
In this case, we can try $k=\left\lceil\left(\frac{n}{B}\right)^{1/p'}/\frac{M}{B}\right\rceil$ for every integer $p'$ with $1\leq p'\leq p$, and pick the value of $k$ that gives the minimum total I/O complexity. Our new mergesort will never perform worse than the classic $M/B$-way mergesort, since we always have the option of picking $k=1$, in which case the new algorithm behaves exactly like the classic EM mergesort. \subsection{Sample Sort} \label{sec:io-sample-sort} We now describe an $l$-way randomized sample sort~\cite{Blelloch91,Frazer70} (also called distribution sort), which asymptotically matches the I/O bounds of the mergesort. The idea of sample sort is to partition $n$ records into $l$ approximately equally sized buckets based on a sample of the keys within the records, and then recurse on each bucket until an appropriately-sized base case is reached. As in the mergesort, we use a branching factor $l = kM/B$, where $1\leq k\leq \omega$. Again this branching factor reduces the number of levels of recursion relative to the standard EM sample sort, which uses $l = M/B$~\cite{AggarwalV88}, and the analysis for picking an appropriate $k$ in the previous section also holds here. We describe how to process each partition and the base case. The partitioning starts by selecting a set of splitters. This can be done using standard techniques, which we review later. The splitters partition the input into buckets that w.h.p.~are within a constant factor of the average size $n/l$. The algorithm now needs to bucket the input based on the splitters. The algorithm processes the splitters in $k$ rounds of size $M/B$ each, starting with the first $M/B$ splitters. In each round the algorithm scans the whole input array, partitioning each record into one of the $M/B$ buckets associated with the current splitters, or skipping a record if its key does not belong in the current buckets.
One block for each bucket is kept in memory. Whenever a block for one of the buckets is full, it is written out to memory and the next block is started for that bucket. Each of the $k$ rounds reads all of the input but writes out only the elements belonging to that round's buckets (roughly a $1/k$ fraction of the input). The base case occurs when $n\leq kM$, at which point we apply the selection sort from Lemma~\ref{lem:selsort}. Let $n_0$ be the original input size. The splitters can be chosen by randomly picking a sample of keys of size $m = \Theta(l \log n_0)$, sorting them, and then sub-selecting the keys at positions $m/l, 2m/l, \ldots, (l-1)m/l$. By selecting the constant in the $\Theta$ sufficiently large, this process ensures that, w.h.p., every bucket is within a constant factor of the average size~\cite{Blelloch91}. To sort the sample, apply a RAM mergesort, which requires at most $O\left(\left(\left(l \log n_0\right)/B\right) \log \left(l \log n_0/M\right)\right)$ reads and writes. This is a lower-order term when $l = O(n/\log^2 n)$, but unfortunately this bound on $l$ may not hold for small subproblems. There is a simple solution---when $n\leq k^2M^2/B$, instead use $l=n/(kM)$. With this modification, we always have $l \leq \sqrt{n/B}$. It is likely that the splitters could also be selected deterministically using the approach from the original I/O-efficient distribution sort~\cite{AggarwalV88}.
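The $k$-round bucketing pass can be modelled in a few lines of Python (a sequential sketch with our own names; computing a record's bucket index by binary search over the resident splitters is free in the model, and here we simply compute the global index and filter by round):

```python
from bisect import bisect_left

def bucket_in_rounds(records, splitters, per_round):
    """Distribute records into len(splitters)+1 buckets over several passes;
    each pass 'loads' only per_round splitters' worth of buckets, modelling
    the k rounds of the AEM sample sort (~k full read scans of the input,
    but each record is written out exactly once)."""
    n_buckets = len(splitters) + 1
    buckets = [[] for _ in range(n_buckets)]
    scans = 0
    for start in range(0, n_buckets, per_round):
        scans += 1
        for x in records:                      # rescan the whole input
            idx = bisect_left(splitters, x)    # bucket of x (in-memory search)
            if start <= idx < start + per_round:
                buckets[idx].append(x)         # write only this round's buckets
    return buckets, scans
```

In the real algorithm each bucket's output is buffered one block at a time, so the write cost is $\lceil n/B\rceil$ plus one partial block per bucket.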
\begin{theorem} The $k{}M/B$-way sample sort sorts $n$ records using, w.h.p., \[R\mathopen{}\mathclose\bgroup\originalleft(n\aftergroup\egroup\originalright) = O\mathopen{}\mathclose\bgroup\originalleft(\frac{k{}n}{B} \mathopen{}\mathclose\bgroup\originalleft\lceil\log_{\frac{k{}M}{B}}\mathopen{}\mathclose\bgroup\originalleft(\frac{n}{B}\aftergroup\egroup\originalright)\aftergroup\egroup\originalright\rceil\aftergroup\egroup\originalright)\] reads, and \[W\mathopen{}\mathclose\bgroup\originalleft(n\aftergroup\egroup\originalright) = O\mathopen{}\mathclose\bgroup\originalleft(\frac{n}{B}\mathopen{}\mathclose\bgroup\originalleft\lceil\log_{\frac{k{}M}{B}}\mathopen{}\mathclose\bgroup\originalleft(\frac{n}{B}\aftergroup\egroup\originalright)\aftergroup\egroup\originalright\rceil\aftergroup\egroup\originalright)\] writes on an AEM with primary memory size $\mathopen{}\mathclose\bgroup\originalleft(M+B+M/B\aftergroup\egroup\originalright)$. \end{theorem} \begin{proof} (Sketch) The primary-memory size allows one block from each bucket as well as the $M/B$ splitters to remain in memory. Each partitioning step thus requires $\lceil n/B \rceil + k{}M/B$ writes, where the second term arises from the fact that each bucket may use a partial block. Since $n \geq k{}M$ (this is not a base case), the cost of each partitioning step becomes $O(n/B)$ writes and $O(k{}n/B)$ reads. Because the number of splitters is at most $\sqrt{n} = O(n/\log^2n)$, choosing and sorting the splitters takes $O(n/B)$ reads and writes. Observe that the recursive structure matches that of a sample sort with an effective memory of size $k{}M$, and that there will be at most two rounds at the end where $l = n/(k M)$. As in standard sample sort, the number of writes is linear with the size of the subproblem, but here the number of reads is multiplied by a factor of $k{}$. The standard samplesort analysis thus applies, implying the bound stated. It remains only to consider the base case. 
Because all buckets are approximately the same size, the total number of leaves is $O(n/B)$---during the recursion, a size $n > k{}M$ problem is split into subproblems whose sizes are $\Omega(B)$. Applying Lemma~\ref{lem:selsort} to all leaves, we get a cost of $O(k n / B)$ reads and $O(n/B)$ writes for all base cases. \end{proof} \myparagraph{Extensions for the Private-Cache Model} The above can be readily parallelized. Here we outline the approach. We assume that there are $p = n/M$ processors. We use parallelism both within each partition, and across the recursive partitions. Within a partition we first find the $l$ splitters in parallel. (As above, $l = k M/B$ except for the at most two rounds prior to the base case where $l = n/(k M)$.) This can be done on a sample that is a logarithmic factor smaller than the partition size, using a less efficient sorting algorithm such as parallel mergesort, and then sub-selecting $l$ splitters from the sorted order. This requires $O(k(M/B + \log^2 n))$ time, where the second term ($O(k\log^2 n)$) is the depth of the parallel mergesort, and the first term is the work term $O((k/B)((n/\log n)\log n)/P) = O(k M/B)$. The algorithm groups the input into $n/(k M)$ chunks of size $k M$ each. As before we also group the splitters into $k$ rounds of size $M/B$ each. Now in parallel across all chunks and across all rounds, partition the chunk based on the round. We have $n/(k M) \times k = n/M$ processors so we can do them all in parallel. Each will require $k M$ reads and $M$ writes. To ensure that the chunks write their buckets to adjacent locations (so that the output of each bucket is contiguous) we will need to do a pass over the input to count the size of each bucket for each chunk, followed by a prefix sum. This can be done before processing the chunks and is a lower-order term. The time for the computation is $O(k M/B)$. 
The processors are then divided among the sub-problems proportional to the size of the sub-problem, and we repeat. The work at each level of recursion remains the same, so the time at each level remains the same. For the base case of size $\leq k M$, instead of using a selection sort across all keys, which is sequential, we find $k$ splitters and divide the work among $k$ processors to sub-select their part of the input, each by reading the whole input, and then sorting their part of size $O(M)$ using a selection sort on those keys. This again takes $O(k M/B)$ time. The total time for the algorithm is therefore: \[O\mathopen{}\mathclose\bgroup\originalleft(k\mathopen{}\mathclose\bgroup\originalleft({M\over B} + \log^2 n\aftergroup\egroup\originalright) \mathopen{}\mathclose\bgroup\originalleft\lceil 1 + \log_{\frac{k{}M}{B}}\mathopen{}\mathclose\bgroup\originalleft(\frac{n}{k{}M}\aftergroup\egroup\originalright)\aftergroup\egroup\originalright\rceil\aftergroup\egroup\originalright)\] with high probability. This is linear speedup assuming $\frac{M}{B} \geq \log^2 n$. Otherwise the number of processors can be reduced to maintain linear speedup. \subsection{I/O Buffer Trees}\label{sec:io-buffer-trees} This section describes how to augment the basic buffer tree~\cite{Arge03} to build a priority queue that supports $n$ \textsc{Insert} and \textsc{Delete-Min} operations with an amortized cost of $O((k{}/B)(1+\log_{k{}M/B} n))$ reads and $O((1/B)(1+ \log_{k{}M/B} n ))$ writes per operation. Using the priority queue to implement a sorting algorithm trivially results in a sort costing a total of $O((k{}n/B)(1+ \log_{k{}M/B} n))$ reads and $O((n/B)(1+ \log_{k{}M/B} n))$ writes. Here $k$ is also the extra branching factor to reduce writes (at a cost of more reads), and can be chosen using similar argument in two previous sorting algorithms. 
These bounds asymptotically match the preceding sorting algorithms, but some additional constant factors are introduced because a buffer tree is a dynamic data structure. Our buffer tree-based priority queue for the AEM contains a few differences from the regular EM buffer tree~\cite{Arge03}: (1) the buffer tree nodes are larger by a factor $k$, (2) consequently, the ``buffer-emptying'' process uses an efficient sort on $k{}M$ elements instead of an in-memory sort on $M$ elements, and (3) to support the priority queue, $O(k{}M)$ elements are stored outside the buffer tree instead of $O(M)$, which adds nontrivial changes to the data structure. \subsubsection{Overview of a buffer tree} A buffer tree~\cite{Arge03} is an augmented version of an $(a,b)$-tree~\cite{HuddlestonMe82}, where $a=l/4$ and $b=l$ for large branching factor $l$. In the original buffer tree $l=M/B$, but to reduce the number of writes we instead set $l=k{}M/B$. As an $(a,b)$ tree, all leaves are at the same depth in the tree, and all internal nodes have between $l/4$ and $l$ children (except the root, which may have fewer). Thus the height of the tree is $O(1+\log_l n)$. An internal node with $c$ children contains $c-1$ keys, stored in sorted order, that partition the elements in the subtrees. The structure of a buffer tree differs from that of an $(a,b)$ tree in two ways. Firstly, each leaf of the buffer tree contains between $lB/4$ and $lB$ elements stored in $l$ blocks.\footnote{Arge~\cite{Arge03} defines the ``leaves'' of a buffer tree to contain $\Theta(B)$ elements instead of $\Theta(lB)$ elements. Since the algorithm only operates on the parents of those ``leaves'', we find the terminology more convenient when flattening the bottom two levels of the tree. 
Our leaves thus correspond to what Arge terms ``leaf nodes''~\cite{Arge03} (not to be confused with leaves) or equivalently what Sitchinava and Zeh call ``fringe nodes''~\cite{SitchinavaZe12}.} Secondly, each node in the buffer tree also contains a dense unsorted list, called a \defn{buffer}, of partially inserted elements that belong in that subtree. We next summarize the basic buffer tree insertion process~\cite{Arge03}. Supporting general deletions is not much harder, but to implement a priority queue we only need to support deleting an entire leaf. The insertion algorithm proceeds in two phases: the first phase moves elements down the tree through buffers, and the second phase performs the $(a,b)$-tree rebalance operations (i.e., splitting nodes that are too big). The first phase begins by appending the new element to the end of the root's buffer. We say that a node is \defn{full} if its buffer contains at least $lB$ elements. If the insert causes the root to become full, then a \defn{buffer-emptying process} commences, whereby all of the elements in the node's buffer are sorted then distributed to the children (appended to the ends of their buffers). This distribution process may cause children to become full, in which case they must also be emptied. More precisely, the algorithm maintains a list of internal nodes with full buffers (initially the root) and a separate list of leaves with full buffers. The first phase operates by repeatedly extracting a full internal node from the list, emptying its buffer, and adding any full children to the list of full internal or leaf nodes, until there are no full internal nodes. Note that during the first phase, the buffers of full nodes may far exceed $lB$, e.g., if all of the ancestors' buffer elements are distributed to a single descendant. Sorting the buffer from scratch would therefore be too expensive. 
Fortunately, each distribution process writes elements to the child buffers in sorted order, so all elements after the $lB$'th element (i.e., those written in the most recent emptying of the parent) are sorted. It thus suffices to split the buffer at the $lB$'th element and sort the first $lB$ elements, resulting in a buffer that consists of two sorted lists. These two lists can trivially be merged as they are being distributed to the sorted list of children in a linear number of I/O's. When the first phase completes, there may be full leaves but no full internal nodes. Moreover, all ancestors of each full leaf have empty buffers. The second phase operates on each full leaf one at a time. First, the buffer is sorted as above and then merged with the elements stored in the leaf. If the leaf contains $X > lB$ elements, then a sequence of $(a,b)$-tree rebalance operations occur whereby the leaf may be split into $\Theta(X/(lB))$ new nodes. These splits cascade up the tree as in a typical $(a,b)$-tree insert. \subsubsection{Buffer tree with fewer writes} To reduce the number of writes, we set the branching factor of the buffer tree to $l = k{}M/B$ instead of $l=M/B$. The consequence of this increase is that the buffer emptying process needs to sort $lB=k{}M$ elements, which cannot be done with an in-memory sort. The advantage is that the height of the tree reduces to $O(1+\log_{k{}M/B} n)$. \begin{lemma}\label{lem:flushcost} It costs $O(k{}X/B)$ reads and $O(X/B)$ writes to empty a full buffer containing $X$ elements using $\Theta(M)$ memory. \end{lemma} \begin{proof} By Lemma~\ref{lem:selsort}, the cost of sorting the first $k{}M$ elements is $O(k{}^2M/B)$ reads and $O(k{}M/B)$ writes. The distribute step can be performed by simultaneously scanning the sorted list of children along with the two sorted pieces of the buffer, and outputting to the end of the appropriate child buffer. A write occurs only when either finishing with a child or closing out a block. 
The distribute step thus uses $O(kM/B + X/B)$ reads and writes, giving a total of $O(k^2M/B + X/B)$ reads and $O(kM/B + X/B)$ writes including the sort step. Observing that full means $X > kM$ completes the proof. \end{proof} \begin{theorem}\label{thm:buffertree} Suppose that the partially empty block belonging to the root's buffer is kept in memory. Then the amortized cost of each insert into an $n$-element buffer tree is $O((k/B)(1+\log_{kM/B} n))$ reads and $O((1/B)(1+\log_{kM/B} n))$ writes. \end{theorem} \begin{proof} This proof follows from Arge's buffer tree performance proofs~\cite{Arge03}, augmented with the above lemma. We first consider the cost of reading and writing the buffers. The last block of the root buffer need only be written when it becomes full, at which point the next block must be read, giving $O(1/B)$ reads and writes per insert. Each element moves through buffers on a root-to-leaf path, so it may belong to $O(1+\log_{kM/B} n)$ emptying processes. According to Lemma~\ref{lem:flushcost}, emptying a full buffer costs $O(k/B)$ reads and $O(1/B)$ writes per element. Multiplying these gives an amortized cost per element matching the theorem. We next consider the cost of rebalancing operations. Given the choice of $(a,b)$-tree parameters, the total number of node splits is $O(n/(lB))$~\cite[Theorem 1]{Arge03}, which is $O(n/(kM))$. Each split is performed by scanning a constant number of nodes, yielding a cost of $O(kM/B)$ reads and writes per split, or $O(n/(kM)\cdot kM/B) = O(n/B)$ reads and writes in total, i.e., $O(1/B)$ per insert. \end{proof} \subsubsection{An efficient priority queue with fewer writes} The main idea of Arge's buffer tree-based priority queue~\cite{Arge03} is to store a working set of the $O(lB)$ smallest elements resident in memory.
When inserting an element, first add it to the working set, then evict the largest element from the working set (perhaps the one just inserted) and insert it into the buffer tree. To extract the minimum, find it in the working set. If the working set is empty, remove the $\Theta(lB)$ smallest elements from the buffer tree and add them to the working set. In the standard buffer tree, $l=M/B$ and hence operating on the working set is free because it fits entirely in memory. In our case, however, extra care is necessary to maintain a working set that is roughly $k$ times larger. Our AEM priority queue follows the same idea, except that the working set is partitioned into two pieces, the alpha working set and the beta working set. The \defn{alpha working set}, which is always resident in memory, contains at most $M/4$ of the smallest elements in the priority queue. The \defn{beta working set} contains at most $2kM$ of the next smallest elements in the data structure, stored in $O(kM/B)$ blocks. The motivation for having a beta working set is that during \textsc{Delete-Min} operations, emptying elements directly from the buffer tree whenever the alpha working set is empty would be too expensive---staging larger batches of such elements in a beta working set leads to better amortized bounds. Coping with the interaction between the alpha working set, the beta working set, and the buffer tree is the main complexity of our priority queue. The beta working set does not fit in memory, but we keep a constant number of blocks from the beta working set and the buffer tree (specifically, the last block of the root buffer) in memory. We begin with a high-level description of the priority-queue operations, with details of the beta working set deferred until later. For now, it suffices to know that we keep the maximum key in the beta working set in memory. To insert a new element, first compare its key against the maximums of the alpha and beta working sets.
Then insert it into either the alpha working set, the beta working set, or the buffer tree depending on the key comparisons. If the alpha working set exceeds its maximum capacity of $M/4$ elements, move the largest element to the beta working set. If the beta working set hits its maximum capacity of $2k{}M$ elements, remove the largest $k{}M$ elements and insert them into the buffer tree. To delete the minimum from the priority queue, remove the smallest element from the alpha working set. If the alpha working set is empty, extract the $M/4$ smallest elements from the beta working set (details to follow) and move them to the alpha working set. If the beta working set is empty, perform a buffer-emptying process on the root-to-leftmost-leaf path in the buffer tree. Then delete the leftmost leaf and move its contents to the beta working set. \myparagraph{The beta working set} The main challenge lies in implementing the beta working set. An unsorted list or buffer allows for efficient inserts by appending to the last block. The challenge, however, is to extract the $\Theta(M)$ smallest elements with $O(M/B)$ writes---if $k{}>B$, each element may reside in a separate block, and we thus cannot afford to update those blocks when extracting the elements. Instead, we perform the deletions implicitly. To facilitate implicit deletions, we maintain a list of ordered pairs $(i_1,x_1),(i_2,x_2),(i_3,x_3),\ldots$, where $(i,x)$ indicates that all elements with index at most $i$ and key at most $x$ are invalid. Our algorithm maintains the invariant that for consecutive list elements $(i_j,x_j)$ and $(i_{j+1},x_{j+1})$, we have $i_j < i_{j+1}$ and $x_j > x_{j+1}$ (recall that all keys are distinct). To insert an element into the beta working set, simply append it to the end. The invariant is maintained because the new element's index is larger than the index in any pair in the list.
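To illustrate, here is a minimal in-memory Python sketch of this implicit-deletion bookkeeping (our own illustration; indices are zero-based, so ``index at most $i$'' with $i$ the array length becomes a strict comparison, and blocks and I/O are abstracted away):

```python
class BetaWorkingSet:
    """In-memory sketch of the beta working set with implicit deletions.

    `pairs` holds the markers (i, x): every element stored at a position
    before i with key at most x is invalid.  The invariant is that the
    indices increase and the keys decrease along the list.
    """

    def __init__(self):
        self.items = []   # append-only array of (distinct) keys
        self.pairs = []   # [(i1, x1), (i2, x2), ...]

    def insert(self, key):
        # Appending preserves the invariant: the new position exceeds every i_j.
        self.items.append(key)

    def _valid(self, idx, key):
        # Invalid iff some pair (i, x) covers this position and key.
        return not any(idx < i and key <= x for i, x in self.pairs)

    def extract_smallest(self, m):
        # Scan once, collecting the m smallest valid elements.
        valid = [(k, j) for j, k in enumerate(self.items) if self._valid(j, k)]
        valid.sort()
        batch = [k for k, _ in valid[:m]]
        if batch:
            x = max(batch)          # largest key extracted
            i = len(self.items)     # current length of the array
            # Restore the invariant: drop pairs subsumed by (i, x), append it.
            while self.pairs and self.pairs[-1][1] <= x:
                self.pairs.pop()
            self.pairs.append((i, x))
        return batch
```

In a real implementation the scan would proceed block by block as in the text; here the pair list simply filters a single pass over the array.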
To extract the minimum $M/4$ elements, scan from index $0$ to $i_1$ in the beta working set, ignoring any elements with key at most $x_1$. Then scan from $i_1+1$ to $i_2$, ignoring any element with key at most $x_2$. And so on. While scanning, record in memory the $M/4$ smallest valid elements seen so far. When finished, let $x$ be the largest key extracted and let $i$ be the length of the beta working set. All elements with key at most $x$ have been removed from the full beta working set, so they should be implicitly marked as invalid. To restore the invariant, truncate the list until the last pair $(i_j,x_j)$ has $x_j > x$, then append $(i,x)$ to the list. Because the beta working set only grows, $i_j < i$. It should be clear that truncation does not discard any information, as $(i,x)$ subsumes any of the truncated pairs. Whenever the beta working set grows too large ($2k{}M$ valid elements) or becomes too sparse ($k$ extractions of $M/4$ elements each have occurred), we first rebuild it. Rebuilding scans the elements in order, removing the invalid elements by packing the valid ones densely into blocks. Testing for validity is done as above. When done, the list of ordered pairs used to test invalidity is cleared. Finally, when the beta working set grows too large, we extract the largest $k{}M$ elements by sorting it (using the selection sort of Lemma~\ref{lem:selsort}).
Even if the list is not stored I/O-efficiently, the cost of scanning both is $O(k{}M/B+k) = O(k{}M/B)$ reads. Next, the list of pairs indicating invalid elements is updated. Appending one new entry requires $O(1)$ writes. Truncating and deleting any old entries can be charged against their insertions. \end{proof} The proof of the following lemma is similar to the preceding one, with the only difference being that the valid elements must be moved and written as they are read. \begin{lemma}\label{lem:rebuild} Rebuilding the beta working set costs $O(k{}M/B)$ reads and writes. \qed \end{lemma} \begin{theorem} Our priority queue, if initially empty, supports $n$ \textsc{Insert} and \textsc{Delete-Min} operations with an amortized cost of $O((k{}/B)(1+\log_{k{}M/B} n))$ reads and $O((1/B)(1+ \log_{k{}M/B} n ))$ writes per operation. \end{theorem} \begin{proof} Inserts are the easier case. Inserting into the alpha working set is free. The amortized cost of inserting directly into the beta working set (a simple append) is $O(1/B)$ reads and writes, assuming the last block stays in memory. The cost of inserting directly into the buffer tree matches the theorem. Occasionally, the beta working set overflows, in which case we rebuild it, sort it, and insert elements into the buffer tree. The rebuild costs $O(k{}M/B)$ reads and writes (Lemma~\ref{lem:rebuild}), the sort costs $O(k{}^2M/B)$ reads and $O(k{}M/B)$ writes (by Lemma~\ref{lem:selsort}), and the $k{}M$ buffer tree inserts cost $O((k{}^2M/B)(1+\log_{k{}M/B}n))$ reads and $O((k{}M/B)(1+\log_{k{}M/B}n))$ writes (by Theorem~\ref{thm:buffertree}). The latter dominates. Amortizing against the $k{}M$ inserts that occur between overflows, the amortized cost per insert matches the theorem statement. Deleting the minimum element from the alpha working set is free. When the alpha working set becomes empty, we extract $M/4$ elements from the beta working set, with a cost of $O(k{}M/B)$ reads and $O(1)$ writes (Lemma~\ref{lem:extract}).
This cost may be amortized against the $M/4$ deletes that occur between extractions, for an amortized cost of $O(k{}/B)$ reads and $O(1/M)$ writes per delete-min. Every $k$ extractions of $M/4$ elements, the beta working set is rebuilt, with a cost of $O(k{}M/B)$ reads and writes (Lemma~\ref{lem:rebuild}) or amortized $O(1/B)$ reads and writes per delete-min. Adding these together, we so far have $O(k{}/B)$ reads and $O(1/B)$ writes per delete-min. It remains to analyze the cost of refilling the beta working set when it becomes empty. The cost of removing a leaf from the buffer tree is dominated by the cost of emptying buffers on a length-$O(\log_{k{}M/B}n)$ path. Note that the buffers are not full, so we cannot apply Lemma~\ref{lem:flushcost}. But a similar analysis applies. The cost per node is $O(k{}^2M/B + X/B)$ reads and $O(k{}M/B + X/B)$ writes for an $X$-element buffer. As with Arge's version of the priority queue~\cite{Arge03}, the $O(X/B)$ terms can be charged to the insertion of the $X$ elements, so we are left with a cost of $O(k{}^2M/B)$ reads and $O(k{}M/B)$ writes per buffer. Multiplying by $O(1+\log_{k{}M/B}n)$ levels gives a cost of $O((k{}^2M/B)(1+\log_{k{}M/B}n))$ reads and $O((k{}M/B)(1+\log_{k{}M/B}n))$ writes. Because each leaf contains at least $k{}M/4$ elements, we can amortize this cost against at least $k{}M/4$ deletions, giving a cost that matches the theorem. \end{proof} With this priority queue, sorting can be trivially implemented in $O((k{}n/B)(1+ \log_{k{}M/B} n))$ reads and $O((n/B)(1+ \log_{k{}M/B} n))$ writes, matching the bounds of the previous sorting algorithms. We use an $l$-way mergesort---i.e., a balanced tree of merges with each merge taking in $l$ sorted arrays and outputting one sorted array consisting of all records from the input. We assume that once the input is small enough a different sort (the \emph{base case}) is applied.
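Stripped of the I/O accounting, the recursive shape of the algorithm is the familiar one. A minimal Python sketch (our own illustration; the parameters stand in for the branching factor $k{}M/B$ and the base-case threshold $k{}M$, and the built-in sort stands in for the base-case sort):

```python
import heapq

def lway_mergesort(a, l=4, base=8):
    """l-way mergesort skeleton: recurse on l subarrays, then merge.

    In the AEM version, l = kM/B and a subarray of size <= kM is
    handled by a separate low-write base-case sort; here we simply
    call sorted() in its place.
    """
    if len(a) <= base:
        return sorted(a)              # stand-in for the base-case sort
    step = -(-len(a) // l)            # ceil(len(a) / l)
    runs = [lway_mergesort(a[i:i + step], l, base)
            for i in range(0, len(a), step)]
    return list(heapq.merge(*runs))   # l-way merge of the sorted runs
```

The AEM-specific content (rounds, load/store buffers, and the bounded in-memory priority queue) is what Algorithm~\ref{alg:mergesort} adds on top of this skeleton.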
For $l = M/B$ and a base case of $n \leq M$ (using any sort since it fits in memory), we have the standard EM mergesort. With these settings there are $\log_{M/B}(n/M)$ levels of recursion, plus the base case, each costing $O(n/B)$ memory operations. This gives the well-known overall bound from Equation~\ref{eqn:sortbound}~\cite{AggarwalV88}. To modify the algorithm for the asymmetric case, we increase the branching factor and the base case by a factor of $k$ where $1\leq k\leq \omega$ (i.e., $l = k{}M/B$ and a base case of $n \leq k{}M$). This means that it is no longer possible to keep the base case in the primary memory, nor one block for each of the input arrays during a merge. The modified algorithm is described in Algorithm~\ref{alg:mergesort}. \begin{algorithm}[!t] \caption{\textsc{AEM-Mergesort}} \label{alg:mergesort} \textbf{Input:} An array $A$ of records of length $n$ \begin{algorithmic}[1] \If {$|A|\leq k M$} ~~~~// base case \State { Sort $A$ using $k |A|/B$ reads and $|A|/B$ writes, and return. } \EndIf \State {Evenly partition $A$ into $l=k M/B$ subarrays $A_1,\ldots,A_{l}$ (at the granularity of blocks) and recursively apply \textsc{AEM-Mergesort} to each.} \State {{\bf Initialize Merge.} Initialize an empty output array $O$, a load buffer and an empty store buffer, each of size $B$, an empty priority queue $Q$ of size $M$, an array of pointers $I_1, \ldots, I_{l}$ that point to the start of each sorted subarray, $c=0$, and $\mb{lastV}=-\infty$.
Associated with $Q$ is $Q.\mb{max}$, which holds the maximum element in $Q$ if $Q$ is full, and $+\infty$ otherwise.} \While {$c<|A|$} \label{line:while} \For {$i\leftarrow 1$ to $l$} \label{line:for} \State {\textsc{Process-Block}($i$).} \EndFor \While {$Q$ is not empty} \label{line:while2} \State {$e\leftarrow Q.\mb{deleteMin}$.} \State {Write $e$ to the store buffer, $c\leftarrow c+1$.} \State {If the store buffer is full, flush it to $O$ and update $\mb{lastV}$.} \If {$e$ is marked as last record in its subarray block} \State {$i = e.\mb{subarray}$.} \State {Increment $I_i$ to point to next block in subarray $i$.} \State {\textsc{Process-Block}($i$).} \EndIf \EndWhile \EndWhile \State {$A\leftarrow O$.~~// Logically, don't actually copy} \Function{Process-Block}{subarray $i$} \State {\textbf{If} $I_i$ points to the end of the subarray \textbf{then} return.} \State {Read the block $I_i$ into the load buffer.} \ForAll {records $e$ in the block} \If {$e.key$ is in the range $(\mb{lastV},Q.\mb{max})$} \State {If $Q$ is full, eject $Q.\mb{max}$.} \State {Insert $e$ into $Q$, and mark if last record in block. } \EndIf \EndFor \EndFunction \end{algorithmic} \end{algorithm} Each merge proceeds in a sequence of rounds, where a round is one iteration of the {\bf while} loop starting on line~\ref{line:while}. During each round we maintain a priority queue within the primary memory. Because operations within the primary memory are free in the model, this can just be kept as a sorted array of records, or even unsorted, although a balanced search tree can be a feasible solution in practice. Each round consists of two phases.
The first phase (the {\bf for} loop on line~\ref{line:for}) considers each of the $l$ input subarrays in turn, loading the current block for the subarray into the load buffer, and then inserting each record $e$ from the block into the priority queue if not already written to the output (i.e. $e.key > \mb{lastV}$), and if smaller than the maximum in the queue (i.e. $e.key < Q.\mb{max}$). This might bump an existing element out of the queue. Also, if a record is the last in its block then it is marked and tagged with its subarray number. The second phase (the {\bf while} loop starting on line~\ref{line:while2}) starts writing the priority queue to the output one block at a time. Whenever reaching a record that is marked as the last in its block, the algorithm increments the pointer to the corresponding subarray and processes the next block in the subarray. We repeat the rounds until all records from all subarrays have been processed. To account for the space for the pointers $I=I_1,\ldots,I_l$, let $\alpha = (\log n)/s$, where $s$ is the size of a record in bits, and $n$ is the total number of records being merged. The cost of the merge is bounded as follows: \begin{lemma}\label{lem:merge} $l = k{}M/B$ sorted sequences with total size $n$ (stored in $\lceil n/B \rceil$ blocks, and block aligned) can be merged using at most $(k{} + 1) \lceil n/B \rceil$ reads and $\lceil n/B \rceil$ writes, on the AEM model with primary memory size $(M+2B+2\alpha k{}M/B)$. \end{lemma} \begin{proof} Each round (except perhaps the last) outputs at least $M$ records, and hence the total number of rounds is at most $\lceil n/M \rceil$. The first phase of each round requires at most $k{}M/B$ reads, so the total number of reads across all the first phases is at most $k{}\lceil n/B \rceil$ (the last round can be included in this since it only loads as many blocks as are output).
For the second phase, a block is read only when its pointer is incremented; therefore every block is read only once in the second phase. Also, every record is written only once. This gives the stated bounds on the number of reads and writes. The space includes the space for the in-memory heap ($M$), the load and store buffers, the pointers $I$ ($\alpha k M/B$), and pointers to maintain the last-record-in-block information ($\alpha k M/B$). \end{proof} We note that it is also possible to keep $I$ in secondary memory. This will double the number of writes because every time the algorithm moves to a new block in an input array $i$, it must write out the updated $I_i$. The increase in reads is small. Also, if one uses a balanced search tree to implement the priority queue $Q$ then the size increases by $< M (\log M)/s$ in order to store the pointers in the tree. For the base case when $n \leq k{}M \leq \omega{}M$ we use the following lemma. \begin{lemma}\label{lem:selsort} $n \leq k{}M$ records stored in $\lceil n/B \rceil$ blocks can be sorted using at most $k{} \lceil n/B \rceil$ reads and $\lceil n/B \rceil$ writes, on the AEM model with primary memory size $M + B$. \end{lemma} \begin{proof} We sort the elements using a variant of selection sort, scanning the input list a total of at most $k{}$ times. In the first scan, store in memory the $M$ smallest elements seen so far, performing no writes and $\lceil n/B \rceil$ reads. After completing the scan, output all the $\min(M,n)$ elements in sorted order using $\lceil\min(M,n)/B\rceil$ writes. Record the maximum element written so far. In each subsequent phase (if not finished), store in memory the $M$ smallest records larger than the maximum written so far, then output as before. The cost is $\lceil n/B \rceil$ reads and $M/B$ writes per phase (except perhaps the last phase). We need one extra block to hold the input.
The maximum element written so far can be stored in the $O(\log M)$ locations we have allowed for in the model. This gives the stated bounds because every element is written out once and the input is scanned at most $k{}$ times. \end{proof} Together we have: \begin{theorem}\label{lem:mergesort} Algorithm~\ref{alg:mergesort} sorts $n$ records using \[R(n) \leq (k{}+1)\left\lceil \frac{n}{B}\right\rceil\left\lceil\log_{\frac{k{}M}{B}}\left(\frac{n}{B}\right)\right\rceil\] reads, and \[W(n) \leq \left\lceil\frac{n}{B}\right\rceil\left\lceil\log_{\frac{k{}M}{B}}\left(\frac{n}{B}\right)\right\rceil\] writes on an AEM with primary memory size $(M+2B+2\alpha k{}M/B)$.
\end{theorem} \begin{proof} The number of recursive levels of merging is bounded by $\left\lceil\log_{\frac{k{}M}{B}}\left(\frac{n}{k{}M}\right)\right\rceil$, and when we add the additional base round we have $1 + \left\lceil\log_{\frac{k{}M}{B}}\left(\frac{n}{k{}M}\right)\right\rceil = \left\lceil\log_{\frac{k{}M}{B}}\left(\frac{n}{k{}M}\cdot\frac{k{}M}{B}\right)\right\rceil = \left\lceil\log_{\frac{k{}M}{B}}\left(\frac{n}{B}\right)\right\rceil$. The cost for each level is at most $(k + 1) \lceil n/B \rceil$ reads and $\lceil n/B \rceil$ writes (only one block on each level might not be full). \end{proof} \begin{corollary} Assuming that $n$ is large enough that we may ignore the ceiling in the number of levels of merging, picking $k$ such that $\displaystyle\frac{k}{\log k}<\frac{\omega}{\log\frac{M}{B}}$ gives an overall improvement in the total I/O complexity (reads and writes). \end{corollary} The details of this computation can be found in Appendix~\ref{app:branchfactor}. In real-world applications of sorting, $\left\lceil\log_{\frac{M}{B}}\left(\frac{n}{B}\right)\right\rceil$ is usually a small constant $p$ (between $2$ and $6$).
In this case, we can try $k=\left\lceil\left(\frac{n}{B}\right)^{1/p'}\big/\frac{M}{B}\right\rceil$ for each integer $p'$ with $1\leq p'\leq p$, and pick the $k$ that gives the minimum total I/O complexity. Our new mergesort will never perform worse than the classic $M/B$-way mergesort, since we always have the option of picking $k=1$, in which case the new algorithm performs exactly like the classic EM mergesort. \label{sec:io-sample-sort} We now describe an $l$-way randomized sample sort~\cite{Blelloch91,Frazer70} (also called distribution sort), which asymptotically matches the I/O bounds of the mergesort. The idea of sample sort is to partition $n$ records into $l$ approximately equally sized buckets based on a sample of the keys within the records, and then recurse on each bucket until an appropriately-sized base case is reached. Similar to the mergesort, here we use a branching factor $l = k{}M/B$, where $1\leq k\leq \omega$. Again this branching factor reduces the number of levels of recursion relative to the standard EM sample sort, which uses $l = M/B$~\cite{AggarwalV88}, and the analysis for picking an appropriate $k$ in the previous section also holds here. We describe how to process each partition and the base case. The partitioning starts by selecting a set of splitters. This can be done using standard techniques, which we review later. The splitters partition the input into buckets that w.h.p.~are within a constant factor of the average size $n/l$. The algorithm now needs to bucket the input based on the splitters. The algorithm processes the splitters in $k{}$ rounds of size $M/B$ each, starting with the first $M/B$ splitters. For each round the algorithm scans the whole input array, partitioning each record into one of the $M/B$ buckets associated with the splitters, or skipping a record if its key does not belong in the current buckets. One block for each bucket is kept in memory.
Whenever a block for one of the buckets is full, it is written out to secondary memory and the next block is started for that bucket. Each of the $k$ rounds reads all of the input but writes out only the elements associated with its buckets (roughly a $1/k$ fraction of the input). The base case occurs when $n\leq k{}M$, at which point we apply the selection sort from Lemma~\ref{lem:selsort}. Let $n_0$ be the original input size. The splitters can be chosen by randomly picking a sample of keys of size $m = \Theta(l \log n_0)$, sorting them, and then sub-selecting the keys at positions $m/l, 2m/l, \ldots, (l-1)m/l$. By selecting the constant in the $\Theta$ sufficiently large, this process ensures that, w.h.p., every bucket is within a constant factor of the average size~\cite{Blelloch91}. To sort the samples we apply a RAM mergesort, which requires at most $O(((l \log n_0)/B) \log (l \log n_0/M))$ reads and writes. This is a lower-order term when $l = O(n/\log^2 n)$, but unfortunately this bound on $l$ may not hold for small subproblems. There is a simple solution---when $n\leq k{}^2M^2/B$, instead use $l=n/(k{}M)$. With this modification, we always have $l \leq \sqrt{n/B}$. It is likely that the splitters could also be selected deterministically using an approach from the original I/O-efficient distribution sort~\cite{AggarwalV88}.
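The multi-pass selection sort of Lemma~\ref{lem:selsort} invoked here admits a compact sketch (our own Python illustration; the parameter plays the role of the primary-memory capacity $M$, blocks and I/O are abstracted away, and keys are assumed distinct as elsewhere in the paper):

```python
import heapq

def multipass_selection_sort(a, mem):
    """Sort `a` without modifying it: each pass re-scans the whole input
    (extra reads) but emits the `mem` smallest remaining elements, so
    every element is written out exactly once."""
    out = []
    last = None  # maximum key output so far (keys assumed distinct)
    while len(out) < len(a):
        # Keep the `mem` smallest elements larger than `last` seen so far,
        # as a max-heap (via negation) so the current maximum is evictable.
        batch = []
        for x in a:
            if last is None or x > last:
                if len(batch) < mem:
                    heapq.heappush(batch, -x)
                elif x < -batch[0]:
                    heapq.heapreplace(batch, -x)
        run = sorted(-v for v in batch)
        out.extend(run)      # write the phase's output
        last = run[-1]       # record the maximum written so far
    return out
```

With $n \leq k\cdot\mathtt{mem}$ the loop performs at most $k$ passes, mirroring the $k\lceil n/B\rceil$-reads, $\lceil n/B\rceil$-writes accounting of the lemma.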
\begin{theorem} The $k{}M/B$-way sample sort sorts $n$ records using, w.h.p., \[R(n) = O\left(\frac{k{}n}{B} \left\lceil\log_{\frac{k{}M}{B}}\left(\frac{n}{B}\right)\right\rceil\right)\] reads, and \[W(n) = O\left(\frac{n}{B}\left\lceil\log_{\frac{k{}M}{B}}\left(\frac{n}{B}\right)\right\rceil\right)\] writes on an AEM with primary memory size $(M+B+M/B)$. \end{theorem} \begin{proof} (Sketch) The primary-memory size allows one block from each bucket as well as the $M/B$ splitters to remain in memory. Each partitioning step thus requires $\lceil n/B \rceil + k{}M/B$ writes, where the second term arises from the fact that each bucket may use a partial block. Since $n \geq k{}M$ (this is not a base case), the cost of each partitioning step becomes $O(n/B)$ writes and $O(k{}n/B)$ reads. Because the number of splitters is at most $\sqrt{n} = O(n/\log^2n)$, choosing and sorting the splitters takes $O(n/B)$ reads and writes. Observe that the recursive structure matches that of a sample sort with an effective memory of size $k{}M$, and that there will be at most two rounds at the end where $l = n/(k M)$. As in standard sample sort, the number of writes is linear in the size of the subproblem, but here the number of reads is multiplied by a factor of $k{}$. The standard sample sort analysis thus applies, implying the stated bound. It remains only to consider the base case.
Because all buckets are approximately the same size, the total number of leaves is $O(n/B)$---during the recursion, a size $n > k{}M$ problem is split into subproblems whose sizes are $\Omega(B)$. Applying Lemma~\ref{lem:selsort} to all leaves, we get a cost of $O(k n / B)$ reads and $O(n/B)$ writes for all base cases. \end{proof} \myparagraph{Extensions for the Private-Cache Model} The above can be readily parallelized. Here we outline the approach. We assume that there are $p = n/M$ processors. We use parallelism both within each partition and across the recursive partitions. Within a partition we first find the $l$ splitters in parallel. (As above, $l = k M/B$ except for the at most two rounds prior to the base case where $l = n/(k M)$.) This can be done on a sample that is a logarithmic factor smaller than the partition size, using a less efficient sorting algorithm such as parallel mergesort, and then sub-selecting $l$ splitters from the sorted order. This requires $O(k(M/B + \log^2 n))$ time, where the second term ($O(k\log^2 n)$) is the depth of the parallel mergesort, and the first term is the work term $O((k/B)((n/\log n)\log n)/p) = O(k M/B)$. The algorithm groups the input into $n/(k M)$ chunks of size $k M$ each. As before, we also group the splitters into $k$ rounds of size $M/B$ each. Now, in parallel across all chunks and across all rounds, partition each chunk based on the round. We have $n/(k M) \times k = n/M$ processors, so we can do them all in parallel. Each will require $k M$ reads and $M$ writes. To ensure that the chunks write their buckets to adjacent locations (so that the output of each bucket is contiguous) we will need to do a pass over the input to count the size of each bucket for each chunk, followed by a prefix sum. This can be done before processing the chunks and is a lower-order term. The time for the computation is $O(k M/B)$.
The processors are then divided among the sub-problems proportional to the size of the sub-problem, and we repeat. The work at each level of recursion remains the same, so the time at each level remains the same. For the base case of size $\leq k M$, instead of using a selection sort across all keys, which is sequential, we find $k$ splitters and divide the work among $k$ processors, each of which sub-selects its part of the input by reading the whole input, and then sorts its part of size $O(M)$ using a selection sort on those keys. This again takes $O(k M/B)$ time. The total time for the algorithm is therefore: \[O\left(k\left(\frac{M}{B} + \log^2 n\right) \left\lceil 1 + \log_{\frac{k{}M}{B}}\left(\frac{n}{k{}M}\right)\right\rceil\right)\] with high probability. This is linear speedup assuming $\frac{M}{B} \geq \log^2 n$. Otherwise the number of processors can be reduced to maintain linear speedup. This section describes how to augment the basic buffer tree~\cite{Arge03} to build a priority queue that supports $n$ \textsc{Insert} and \textsc{Delete-Min} operations with an amortized cost of $O((k{}/B)(1+\log_{k{}M/B} n))$ reads and $O((1/B)(1+ \log_{k{}M/B} n ))$ writes per operation. Using the priority queue to implement a sorting algorithm trivially results in a sort costing a total of $O((k{}n/B)(1+ \log_{k{}M/B} n))$ reads and $O((n/B)(1+ \log_{k{}M/B} n))$ writes. Here $k$ is again the extra branching factor used to reduce writes (at the cost of more reads), and it can be chosen by a similar argument as in the two previous sorting algorithms. These bounds asymptotically match the preceding sorting algorithms, but some additional constant factors are introduced because a buffer tree is a dynamic data structure.
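The reduction from a priority queue to sorting referred to above is the standard one: perform $n$ \textsc{Insert}s followed by $n$ \textsc{Delete-Min}s. As a sketch (with Python's binary heap standing in for the buffer-tree-based queue):

```python
import heapq

def pq_sort(a):
    """n Inserts followed by n Delete-Mins yields the sorted sequence."""
    q = []
    for x in a:
        heapq.heappush(q, x)                       # Insert
    return [heapq.heappop(q) for _ in range(len(q))]  # Delete-Min, n times
```

The stated I/O bounds follow by charging each record one insert and one delete-min in the buffer-tree queue.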
Our buffer tree-based priority queue for the AEM contains a few differences from the regular EM buffer tree~\cite{Arge03}: (1) the buffer tree nodes are larger by a factor $k$, (2) consequently, the ``buffer-emptying'' process uses an efficient sort on $k{}M$ elements instead of an in-memory sort on $M$ elements, and (3) to support the priority queue, $O(k{}M)$ elements are stored outside the buffer tree instead of $O(M)$, which adds nontrivial changes to the data structure. \subsubsection{Overview of a buffer tree} A buffer tree~\cite{Arge03} is an augmented version of an $(a,b)$-tree~\cite{HuddlestonMe82}, where $a=l/4$ and $b=l$ for large branching factor $l$. In the original buffer tree $l=M/B$, but to reduce the number of writes we instead set $l=k{}M/B$. As an $(a,b)$ tree, all leaves are at the same depth in the tree, and all internal nodes have between $l/4$ and $l$ children (except the root, which may have fewer). Thus the height of the tree is $O(1+\log_l n)$. An internal node with $c$ children contains $c-1$ keys, stored in sorted order, that partition the elements in the subtrees. The structure of a buffer tree differs from that of an $(a,b)$ tree in two ways. Firstly, each leaf of the buffer tree contains between $lB/4$ and $lB$ elements stored in $l$ blocks.\footnote{Arge~\cite{Arge03} defines the ``leaves'' of a buffer tree to contain $\Theta(B)$ elements instead of $\Theta(lB)$ elements. Since the algorithm only operates on the parents of those ``leaves'', we find the terminology more convenient when flattening the bottom two levels of the tree. Our leaves thus correspond to what Arge terms ``leaf nodes''~\cite{Arge03} (not to be confused with leaves) or equivalently what Sitchinava and Zeh call ``fringe nodes''~\cite{SitchinavaZe12}.} Secondly, each node in the buffer tree also contains a dense unsorted list, called a \defn{buffer}, of partially inserted elements that belong in that subtree. 
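To fix ideas, the shape of a node can be sketched as follows (an illustrative in-memory rendition with arbitrary stand-in constants; in the real structure the buffer, and the elements of a leaf, live in blocks of secondary memory):

```python
from dataclasses import dataclass, field

L = 16  # branching factor l = kM/B; internal fanout lies in [L // 4, L]
B = 4   # block size (stand-in constant)

@dataclass
class BufferTreeNode:
    keys: list = field(default_factory=list)      # c-1 sorted routing keys
    children: list = field(default_factory=list)  # c subtrees (empty at a leaf)
    buffer: list = field(default_factory=list)    # unsorted, partially inserted elements
    elements: list = field(default_factory=list)  # leaf payload: between L*B/4 and L*B elements

    def is_full(self) -> bool:
        # A node is "full" once its buffer holds at least lB elements,
        # which triggers the buffer-emptying process.
        return len(self.buffer) >= L * B
```

This is only the data layout; the buffer-emptying and rebalancing logic described next operates on these fields.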
We next summarize the basic buffer tree insertion process~\cite{Arge03}. Supporting general deletions is not much harder, but to implement a priority queue we only need to support deleting an entire leaf. The insertion algorithm proceeds in two phases: the first phase moves elements down the tree through buffers, and the second phase performs the $(a,b)$-tree rebalance operations (i.e., splitting nodes that are too big). The first phase begins by appending the new element to the end of the root's buffer. We say that a node is \defn{full} if its buffer contains at least $lB$ elements. If the insert causes the root to become full, then a \defn{buffer-emptying process} commences, whereby all of the elements in the node's buffer are sorted then distributed to the children (appended to the ends of their buffers). This distribution process may cause children to become full, in which case they must also be emptied. More precisely, the algorithm maintains a list of internal nodes with full buffers (initially the root) and a separate list of leaves with full buffers. The first phase operates by repeatedly extracting a full internal node from the list, emptying its buffer, and adding any full children to the list of full internal or leaf nodes, until there are no full internal nodes. Note that during the first phase, the buffers of full nodes may far exceed $lB$, e.g., if all of the ancestors' buffer elements are distributed to a single descendant. Sorting the buffer from scratch would therefore be too expensive. Fortunately, each distribution process writes elements to the child buffers in sorted order, so all elements after the $lB$'th element (i.e., those written in the most recent emptying of the parent) are sorted. It thus suffices to split the buffer at the $lB$'th element and sort the first $lB$ elements, resulting in a buffer that consists of two sorted lists. 
These two lists can trivially be merged as they are being distributed to the sorted list of children in a linear number of I/O's. When the first phase completes, there may be full leaves but no full internal nodes. Moreover, all ancestors of each full leaf have empty buffers. The second phase operates on each full leaf one at a time. First, the buffer is sorted as above and then merged with the elements stored in the leaf. If the leaf contains $X > lB$ elements, then a sequence of $(a,b)$-tree rebalance operations occur whereby the leaf may be split into $\Theta(X/(lB))$ new nodes. These splits cascade up the tree as in a typical $(a,b)$-tree insert. \subsubsection{Buffer tree with fewer writes} To reduce the number of writes, we set the branching factor of the buffer tree to $l = k{}M/B$ instead of $l=M/B$. The consequence of this increase is that the buffer emptying process needs to sort $lB=k{}M$ elements, which cannot be done with an in-memory sort. The advantage is that the height of the tree reduces to $O(1+\log_{k{}M/B} n)$. \begin{lemma}\label{lem:flushcost} It costs $O(k{}X/B)$ reads and $O(X/B)$ writes to empty a full buffer containing $X$ elements using $\Theta(M)$ memory. \end{lemma} \begin{proof} By Lemma~\ref{lem:selsort}, the cost of sorting the first $k{}M$ elements is $O(k{}^2M/B)$ reads and $O(k{}M/B)$ writes. The distribute step can be performed by simultaneously scanning the sorted list of children along with the two sorted pieces of the buffer, and outputting to the end of the appropriate child buffer. A write occurs only when either finishing with a child or closing out a block. The distribute step thus uses $O(k{}M/B + X/B)$ reads and writes, giving a total of $O(k{}^2M/B + X/B)$ reads and $O(k{}M/B + X/B)$ writes including the sort step. Observing that full means $X > k{}M$ completes the proof. \end{proof} \begin{theorem}\label{thm:buffertree} Suppose that the partially empty block belonging to the root's buffer is kept in memory. 
Then the amortized cost of each insert into an $n$-element buffer tree is $O((k{}/B)(1+\log_{k{}M/B} n))$ reads and $O((1/B)(1+\log_{k{}M/B} n))$ writes. \end{theorem} \begin{proof} This proof follows from Arge's buffer tree performance proofs~\cite{Arge03}, augmented with the above lemma. We first consider the cost of reading and writing the buffers. The last block of the root buffer need only be written when it becomes full, at which point the next block must be read, giving $O(1/B)$ reads and writes per insert. Each element moves through buffers on a root-to-leaf path, so it may belong to $O(1+\log_{k{}M/B} n)$ emptying processes. According to Lemma~\ref{lem:flushcost}, emptying a full buffer costs $O(k{}/B)$ reads and $O(1/B)$ writes per element. Multiplying these two gives an amortized cost per element matching the theorem. We next consider the cost of rebalancing operations. Given the choice of $(a,b)$-tree parameters, the total number of node splits is $O(n/(lB))$~\cite[Theorem 1]{Arge03}, which is $O(n/(k{}M))$. Each split is performed by scanning a constant number of nodes, yielding a cost of $O(k{}M/B)$ reads and writes per split, or $O(n/(k{}M)\cdot k{}M/B) = O(n/B)$ reads and writes in total, or $O(1/B)$ per insert. \end{proof} \subsubsection{An efficient priority queue with fewer writes} The main idea of Arge's buffer tree-based priority queue~\cite{Arge03} is to store a working set of the $O(lB)$ smallest elements resident in memory. When inserting an element, first add it to the working set, then evict the largest element from the working set (perhaps the one just inserted) and insert it into the buffer tree. To extract the minimum, find it in the working set. If the working set is empty, remove the $\Theta(lB)$ smallest elements from the buffer tree and add them to the working set. In the standard buffer tree, $l=M/B$ and hence operating on the working set is free because it fits entirely in memory.
In our case, however, extra care is necessary to maintain a working set that has size roughly $k$ times larger. Our AEM priority queue follows the same idea except the working set is partitioned into two pieces, the alpha working set and beta working set. The \defn{alpha working set}, which is always resident in memory, contains at most $M/4$ of the smallest elements in the priority queue. The \defn{beta working set} contains at most $2k{}M$ of the next smallest elements in the data structure, stored in $O(k{}M/B)$ blocks. The motivation for having a beta working set is that during \textsc{Delete-Min} operations, emptying elements directly from the buffer tree whenever the alpha working set is empty would be too expensive---having a beta working set to stage larger batches of such elements leads to better amortized bounds. Coping with the interaction between the alpha working set, the beta working set, and the buffer tree is the main complexity of our priority queue. The beta working set does not fit in memory, but we keep a constant number of blocks from the beta working set and the buffer tree (specifically, the last block of the root buffer) in memory. We begin with a high-level description of the priority-queue operations, with details of the beta working set deferred until later. For now, it suffices to know that we keep the maximum key in the beta working set in memory. To insert a new element, first compare its key against the maximums in the alpha and beta working sets. Then insert it into either the alpha working set, the beta working set, or the buffer tree, depending on the key comparisons. If the alpha working set exceeds its maximum capacity of $M/4$ elements, move the largest element to the beta working set. If the beta working set hits its maximum capacity of $2k{}M$ elements, remove the largest $k{}M$ elements and insert them into the buffer tree. To delete the minimum from the priority queue, remove the smallest element from the alpha working set.
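The insert-side routing just described can be sketched as follows (an illustrative sketch with our own class and field names; delete-min and refilling, described next, are omitted, and the buffer tree is stubbed as a plain list):

```python
import heapq

class AsymPQ:
    """Sketch of the insert path of the three-level priority queue.
    alpha: in-memory set of the smallest keys, capacity ~M/4;
    beta: append-only staging list, capacity ~2kM."""

    def __init__(self, alpha_cap, beta_cap):
        self.alpha_cap, self.beta_cap = alpha_cap, beta_cap
        self.alpha = []                  # max-heap via negated keys
        self.beta = []                   # unsorted, append-only
        self.beta_max = float('-inf')    # kept in memory, as in the text
        self.tree = []                   # stand-in for the buffer tree

    def insert(self, x):
        alpha_max = -self.alpha[0] if self.alpha else float('inf')
        if x < alpha_max:
            heapq.heappush(self.alpha, -x)
            if len(self.alpha) > self.alpha_cap:   # alpha overflow:
                self._beta_append(-heapq.heappop(self.alpha))  # demote max
        elif x < self.beta_max:
            self._beta_append(x)
        else:
            self.tree.append(x)          # larger than both working sets

    def _beta_append(self, x):
        self.beta.append(x)
        self.beta_max = max(self.beta_max, x)
        if len(self.beta) > self.beta_cap:         # beta overflow:
            self.beta.sort()             # stands in for the selection sort
            keep = self.beta_cap // 2    # keep the smaller half,
            self.tree.extend(self.beta[keep:])     # evict the largest
            self.beta = self.beta[:keep]
            self.beta_max = self.beta[-1]
```

Note that only the beta-overflow path touches the buffer tree in bulk, which is what allows the per-insert write cost to stay amortized.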
If the alpha working set is empty, extract the $M/4$ smallest elements from the beta working set (details to follow) and move them to the alpha working set. If the beta working set is empty, perform a buffer emptying process on the root-to-leftmost-leaf path in the buffer tree. Then delete the leftmost leaf and move its contents to the beta working set. \myparagraph{The beta working set} The main challenge is in implementing the beta working set. An unsorted list or buffer allows for efficient inserts by appending to the last block. The challenge, however, is to extract the $\Theta(M)$ smallest elements with $O(M/B)$ writes---if $k{}>B$, each element may reside in a separate block, and we thus cannot afford to update those blocks when extracting the elements. Instead, we perform the deletions implicitly. To facilitate implicit deletions, we maintain a list of ordered pairs $(i_1,x_1),(i_2,x_2),(i_3,x_3),\ldots$, where $(i,x)$ indicates that all elements with index at most $i$ and key at most $x$ are invalid. Our algorithm maintains the invariant that for consecutive list elements $(i_j,x_j)$ and $(i_{j+1},x_{j+1})$, we have $i_j < i_{j+1}$ and $x_j > x_{j+1}$ (recall that all keys are distinct). To insert an element into the beta working set, simply append it to the end. The invariant is maintained because its index is larger than that of any pair in the list. To extract the minimum $M/4$ elements, scan from index $0$ to $i_1$ in the beta working set, ignoring any elements with key at most $x_1$. Then scan from $i_1+1$ to $i_2$, ignoring any element with key at most $x_2$. And so on. While scanning, record in memory the $M/4$ smallest valid elements seen so far. When finished, let $x$ be the largest key among the extracted elements and let $i$ be the length of the beta working set. All elements with key at most $x$ have been removed from the full beta working set, so they should be implicitly marked as invalid.
To restore the invariant, truncate the list until the last pair $(i_j,x_j)$ has $x_j > x$, then append $(i,x)$ to the list. Because the size of the beta working set is growing, $i_j < i$. It should be clear that truncation does not discard any information as $(i,x)$ subsumes any of the truncated pairs. Whenever the beta working set grows too large ($2k{}M$ valid elements) or becomes too sparse ($k$ extractions of $M/4$ elements each have occurred), we first rebuild it. Rebuilding scans the elements in order, removing the invalid elements by packing the valid ones densely into blocks. Testing for validity is done as above. When done, the list of ordered pairs to test invalidity is cleared. Finally, when the beta working set grows too large, we extract the largest $k{}M$ elements by sorting it (using the selection sort of Lemma~\ref{lem:selsort}). \myparagraph{Analyzing the priority queue} We begin with some lemmas about the beta working set. \begin{lemma}\label{lem:extract} Extracting the $M/4$ smallest valid elements from the beta working set and storing them in memory costs $O(k{}M/B)$ reads and amortized $O(1)$ writes. \end{lemma} \begin{proof} The extraction involves first performing read-only passes over the beta working set and list of pairs, keeping one block from the working set and one pair in memory at a time. Because the working set is rebuilt after $k$ extractions, the list of pairs can have at most $k$ entries. Even if the list is not I/O efficient, the cost of scanning both is $O(k{}M/B+k) = O(k{}M/B)$ reads. Next the list of pairs indicating invalid elements is updated. Appending one new entry requires $O(1)$ writes. Truncating and deleting any old entries can be charged against their insertions. \end{proof} The proof of the following lemma is similar to the preceding one, with the only difference being that the valid elements must be moved and written as they are read. 
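The implicit-deletion scheme, including the invariant-restoring truncation, can be sketched as follows (illustrative Python with our own names; the in-memory selection of the smallest valid elements is stood in by a sort):

```python
class BetaBuffer:
    """Append-only list with implicit deletions.  `marks` holds pairs
    (i, x): every element with index < i and key <= x is invalid.
    The i's strictly increase and the x's strictly decrease."""

    def __init__(self):
        self.data = []      # keys in insertion order (append-only)
        self.marks = []     # [(i1, x1), (i2, x2), ...]

    def append(self, key):
        self.data.append(key)   # new index exceeds every i: invariant holds

    def _valid(self):
        j = 0
        for idx, key in enumerate(self.data):
            while j < len(self.marks) and self.marks[j][0] <= idx:
                j += 1          # marks before j no longer cover idx
            if j < len(self.marks) and key <= self.marks[j][1]:
                continue        # covered by (i_j, x_j): invalid
            yield key

    def extract_smallest(self, m):
        """Remove and return (up to) the m smallest valid keys."""
        smallest = sorted(self._valid())[:m]   # in-memory pass stands in
        if smallest:
            x, i = smallest[-1], len(self.data)
            while self.marks and self.marks[-1][1] <= x:
                self.marks.pop()               # (i, x) subsumes these pairs
            self.marks.append((i, x))
        return smallest
```

Scanning `data` never rewrites its blocks; only the short `marks` list is updated, which is where the $O(1)$ amortized writes per extraction come from.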
\begin{lemma}\label{lem:rebuild} Rebuilding the beta working set costs $O(k{}M/B)$ reads and writes. \qed \end{lemma} \begin{theorem} Our priority queue, if initially empty, supports $n$ \textsc{Insert} and \textsc{Delete-Min} operations with an amortized cost of $O((k{}/B)(1+\log_{k{}M/B} n))$ reads and $O((1/B)(1+ \log_{k{}M/B} n ))$ writes per operation. \end{theorem} \begin{proof} Inserts are the easier case. Inserting into the alpha working set is free. The amortized cost of inserting directly into the beta working set (a simple append) is $O(1/B)$ reads and writes, assuming the last block stays in memory. The cost of inserting directly into the buffer tree matches the theorem. Occasionally, the beta working set overflows, in which case we rebuild it, sort it, and insert elements into the buffer tree. The rebuild costs $O(k{}M/B)$ reads and writes (Lemma~\ref{lem:rebuild}), the sort costs $O(k{}^2M/B)$ reads and $O(k{}M/B)$ writes (by Lemma~\ref{lem:selsort}), and the $k{}M$ buffer tree inserts cost $O((k{}^2M/B)(1+\log_{k{}M/B}n))$ reads and $O((k{}M/B)(1+\log_{k{}M/B}n))$ writes (by Theorem~\ref{thm:buffertree}). The latter dominates. Amortizing against the $k{}M$ inserts that occur between overflows, the amortized cost per insert matches the theorem statement. Deleting the minimum element from the alpha working set is free. When the alpha working set becomes empty, we extract $M/4$ elements from the beta working set, with a cost of $O(k{}M/B)$ reads and $O(1)$ writes (Lemma~\ref{lem:extract}). This cost may be amortized against the $M/4$ deletes that occur between extractions, for an amortized cost of $O(k{}/B)$ reads and $O(1/M)$ writes per delete-min. Every $k$ extractions of $M/4$ elements, the beta working set is rebuilt, with a cost of $O(k{}M/B)$ reads and writes (Lemma~\ref{lem:rebuild}) or amortized $O(1/B)$ reads and writes per delete-min. Adding these together, we so far have $O(k{}/B)$ reads and $O(1/B)$ writes per delete-min. 
It remains to analyze the cost of refilling the beta working set when it becomes empty. The cost of removing a leaf from the buffer tree is dominated by the cost of emptying buffers on a length-$O(\log_{k{}M/B}n)$ path. Note that the buffers are not full, so we cannot apply Lemma~\ref{lem:flushcost}. But a similar analysis applies. The cost per node is $O(k{}^2M/B + X/B)$ reads and $O(k{}M/B + X/B)$ writes for an $X$-element buffer. As with Arge's version of the priority queue~\cite{Arge03}, the $O(X/B)$ terms can be charged to the insertion of the $X$ elements, so we are left with a cost of $O(k{}^2M/B)$ reads and $O(k{}M/B)$ writes per buffer. Multiplying by $O(1+\log_{k{}M/B}n)$ levels gives a cost of $O((k{}^2M/B)(1+\log_{k{}M/B}n))$ reads and $O((k{}M/B)(1+\log_{k{}M/B}n))$ writes. Because each leaf contains at least $k{}M/4$ elements, we can amortize this cost against at least $k{}M/4$ deletions, giving a cost that matches the theorem. \end{proof} With this priority queue, sorting can be trivially implemented in $O((k{}n/B)(1+ \log_{k{}M/B} n))$ reads and $O((n/B)(1+ \log_{k{}M/B} n))$ writes, matching the bounds of the previous sorting algorithms. \section{Cache-Oblivious Parallel Algorithms}\label{sec:low-depth-algs} In this section we present low-depth cache-oblivious parallel algorithms for sorting and Fast Fourier Transform, with asymmetric read and write costs. Both algorithms (i) have only polylogarithmic depth, (ii) are processor-oblivious (i.e., no explicit mention of processors), (iii) can be cache-oblivious or cache-aware, and (iv) map to low cache complexity on parallel machines with hierarchies of shared caches as well as private caches using the results of Section~\ref{sec:prelims}. We also present a linear-depth, cache-oblivious parallel algorithm for matrix multiplication. All three algorithms use $\Theta(\omega)$ fewer writes than reads. Our algorithms are oblivious to cache size.
However, we still assume that the read/write ratio $\omega{}$ is known, since it is a parameter of the main memory rather than of the caches. \begin{figure*} \caption{The low-depth cache-oblivious sorting algorithm with asymmetric read and write costs, for an input array of size $n$.} \label{fig:cache-obli-sort} \end{figure*} \subsection{Sorting}\label{sec:low-depth-sort} \newcommand{\wcost}{\omega} We show how the low-depth, cache-oblivious sorting algorithm from~\cite{BGS10} can be adapted to the asymmetric case. The original algorithm is based on viewing the input as a $\sqrt{n} \times \sqrt{n}$ array, sorting the rows, partitioning them based on splitters, transposing the partitions, and then sorting the buckets. The original algorithm incurs $O\left(\left(n/B\right)\log_M\left(n\right)\right)$ reads and writes. To reduce the number of writes, our revised version partitions slightly differently and does extra reads to reduce the number of levels of recursion. The algorithm does $O\left(\left(n/B\right) \log_{\omega M}\left(\omega n\right)\right)$ writes, $O\left(\left(\omega n/B\right) \log_{\omega M}\left(\omega n\right)\right)$ reads, and has depth $O\left(\wcost \log^2 \left(n/\omega\right)\right)$ w.h.p. The algorithm uses matrix transpose, prefix sums, and mergesort as subroutines.
Efficient parallel and cache-oblivious versions of these algorithms are described in~\cite{BGS10}. For an input of size $n$, prefix sums has depth $O\left(\wcost \log n\right)$ and requires $O\left(n/B\right)$ reads and writes; merging two arrays of lengths $n$ and $m$ has depth $O\left(\wcost\log\left(n+m\right)\right)$ and requires $O\left(\left(n+m\right)/B\right)$ reads and writes; and mergesort has depth $O\left(\wcost \log^2 n\right)$ and requires $O\left((n/B)\log_2\left(n/M\right)\right)$ reads and writes. Transposing an $n\times m$ matrix has depth $O\left(\wcost \log\left(n+m\right)\right)$ and requires $O\left(nm/B\right)$ reads and writes. Our cache-oblivious sorting algorithm works recursively, with a base case of $n \leq M$, at which point any parallel sorting algorithm with $O\left(n\log n\right)$ reads/writes and $O\left(\wcost \log n\right)$ depth can be applied (e.g.~\cite{Cole1988}). Figure~\ref{fig:cache-obli-sort} illustrates the steps of the algorithm.
Given an input array of size $n$, the algorithm first splits it into $\sqrt{n\omega}$ subarrays of size $\sqrt{n/\omega}$ and recursively sorts each of the subarrays. This step corresponds to step (a) in Figure~\ref{fig:cache-obli-sort}. Then the algorithm determines the splitters by sampling. After the subarrays are sorted, every $\left(\log n\right)$'th element from each row is sampled, and these $n/\log n$ samples are sorted using a cache-oblivious mergesort. Then $\sqrt{n/\omega}-1$ evenly-distributed splitters are picked from the sorted samples to create $\sqrt{n/\omega}$ buckets. The algorithm then determines the boundaries of the buckets in each subarray, which can be implemented by merging the splitters with each row, requiring $O\left(\wcost\log n\right)$ depth and $O\left(n/B\right)$ writes overall. This step is shown as step (b) in Figure~\ref{fig:cache-obli-sort}. Notice that on average the size of each bucket is $O\left(\sqrt{n\omega}\right)$, and the largest bucket has no more than $2\sqrt{n\omega}\log n$ elements. After the subarrays are split into $\sqrt{n/\omega}$ buckets, prefix sums and a matrix transpose can be used to place all keys destined for the same bucket together in contiguous arrays. This process is illustrated as step (c) in Figure~\ref{fig:cache-obli-sort}. This process requires $O\left(n/B\right)$ reads and writes, and $O\left(\wcost \log n\right)$ depth. The next step is new to the asymmetric algorithm and is the part that requires extra reads.
As illustrated in Figure~\ref{fig:cache-obli-sort} (d), $\omega-1$ pivots are chosen from each bucket to generate $\omega$ sub-buckets. We draw $\max\{\omega,\sqrt{\omega n}/\log n\}$ samples, sort them with a mergesort, and pick $\omega-1$ evenly spaced pivots from the sorted sample. The size of each sub-bucket can then be shown to be $O\left(\sqrt{n/\omega}\log n\right)$ w.h.p. using Chernoff bounds. We then scan each bucket for $\omega$ rounds to partition all elements into $\omega$ sub-buckets, and sort each sub-bucket recursively. \begin{theorem}\label{thm:cache-obli-sort} Our cache-oblivious sorting algorithm requires $O\left(\left(\omega n/B\right)\log_{\omega M}{\left(\omega n\right)}\right)$ reads, $O\left(\left(n/B\right)\log_{\omega M}{\left(\omega n\right)}\right)$ writes, and $O\left(\wcost \log^2 \left(n/\omega\right)\right)$ depth w.h.p. \end{theorem} \begin{proof} All the subroutines except for the recursive calls do $O\left(n/B\right)$ writes. The last partitioning process to put elements into sub-buckets takes $O\left(\omega n/B\right)$ reads and the other subroutines require fewer reads.
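The multi-round partition counted above can be sketched as follows (an illustrative Python sketch with our own names; in the paper's setting the number of rounds is $\omega$, one per sub-bucket, so writes stay contiguous at the cost of re-reading the bucket each round):

```python
def partition_rounds(bucket, pivots):
    """Split `bucket` into len(pivots)+1 sub-buckets, scanning the
    bucket once per sub-bucket.  Each round appends sequentially to a
    single output list, so the write pattern is a plain stream even
    though the total read cost grows by the number of rounds."""
    bounds = [float('-inf')] + sorted(pivots) + [float('inf')]
    subs = []
    for r in range(len(bounds) - 1):
        lo, hi = bounds[r], bounds[r + 1]
        # one full read-only pass over the bucket per round
        subs.append([x for x in bucket if lo < x <= hi])
    return subs
```
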
The overall depth is dominated by the mergesort to find the first $\sqrt{n/\omega}$ pivots, requiring $O\left(\omega\log^2 \left(n/\omega\right)\right)$ depth per level of recursion. Hence, the recurrence relations (w.h.p.) for read I/O's ($R$), write I/O's ($W$), and depth ($D$) are: $$R\left(n\right)=O\left({\omega n\over B}\right)+\sqrt{\omega n}\cdot R\left(\sqrt{n\over \omega}\right)+\sum_{i=1}^{\sqrt{\omega n}}R\left(n_i\right)$$ $$W\left(n\right)=O\left({n\over B}\right)+\sqrt{\omega n}\cdot W\left(\sqrt{n\over \omega}\right)+\sum_{i=1}^{\sqrt{\omega n}}W\left(n_i\right)$$ $$D\left(n\right)=O\left(\omega\log^2 {n\over \omega}\right)+\max_{i=1,\cdots,\sqrt{\omega n}}\{D\left(n_i\right)\}$$ where $n_i$ is the size of the $i$'th sub-bucket, and $n_i= O\left(\sqrt{n/\omega}\log n\right)$ and $\sum{n_i}=n$. The base case for the I/O complexity is $R\left(M\right)=W\left(M\right)=O\left(M/B\right)$.
Solving these recurrences proves the theorem. \end{proof} \subsection{Fast Fourier Transform}\label{sec:fft} We now consider a parallel cache-oblivious algorithm for computing the Discrete Fourier Transform (DFT). The algorithm we consider is based on the Cooley-Tukey FFT algorithm~\cite{FFT}, and our description follows that of~\cite{Frigo99}. We assume that $n$ at each level of recursion and $\omega$ are powers of 2. The standard cache-oblivious divide-and-conquer algorithm~\cite{Frigo99} views the input as a $\sqrt{n} \times \sqrt{n}$ matrix, and incurs $O\left(\left(n/B\right)\log_M\left(n\right)\right)$ reads and writes. To reduce the number of writes, we modify the algorithm to the following: \begin{enumerate}\itemsep=0in \item View the input of size $n$ as an $\omega\sqrt{n/\omega} \times \sqrt{n/\omega}$ matrix in row-major order. Transpose the matrix. \item Compute a DFT on each row of the matrix as follows \begin{enumerate} \item View the row as an $\omega \times \sqrt{n/\omega}$ matrix \item For each row $i$ \begin{enumerate} \item Calculate the values of the column DFTs for row $i$ using the brute-force method. This will require $\omega$ reads (one per row) and $1$ write (row $i$) per value. \item Recursively compute the FFT for the row. \end{enumerate} \item Incorporate twiddle factors \item Transpose the $\omega \times \sqrt{n/\omega}$ matrix \end{enumerate} \item Transpose the matrix \item Recursively compute an FFT on each length-$\sqrt{n/\omega}$ row. \item Transpose to the final order. \end{enumerate} The difference from the standard cache-oblivious algorithm is the extra level of nesting in step 2, and the use of a work-inefficient DFT over $\omega$ elements in step 2(b).
The transposes in steps 1, 2(d) and 3 can be done with $O\left(n/B\right)$ reads/writes and $O\left(\omega\log n\right)$ depth. The brute-force calculations in step 2(b)i require a total of $\omega n$ reads (and arithmetic operations) and $n$ writes. This is where we spend an extra factor of $\omega$ in reads in order to reduce the recursion depth. The twiddle factors can all be calculated with $O(n)$ reads and writes. There are a total of $2 \omega \sqrt{n/\omega}$ recursive calls on problems of size $\sqrt{n/\omega}$. Our base case is of size $M$, which uses $M/B$ reads and writes. The number of reads therefore satisfies the following recurrence: \[ R\left(n\right) = \begin{cases} O\left(n/B\right) & \text{if $n \le M$} \\ 2\omega\sqrt{n/\omega}\;R\left(\sqrt{n/\omega}\right) + O\left(\omega{}n/B\right) & \text{otherwise} \end{cases} \] which solves to $R\left(n\right) = O\left(\left(\omega n/B\right)\log_{\omega M}\left(\omega n\right)\right)$, and the number of writes is \[ W\left(n\right) = \begin{cases} O\left(n/B\right) & \text{if $n \le M$} \\ 2\omega\sqrt{n/\omega}\; W\left(\sqrt{n/\omega}\right) + O\left(n/B\right) &
\text{otherwise} \end{cases} \] which solves to $W\left(n\right) = O\left(\left(n/B\right)\log_{\omega M}\left(\omega n\right)\right)$. Since we can do all the rows in parallel for each step, and the brute-force calculation in parallel, the only important dependence is that we have to do step 2 before step 5. This gives a recurrence $D\left(n\right) = 2D\left(\sqrt{n/\omega}\right) + O\left(\omega\log n\right)$ for the depth, which solves to $O\left(\omega\log n\log\log n\right)$. We note that the algorithm as described requires an extra transpose and an extra write in step 2(b)i relative to the standard version. This might negate any advantage from reducing the number of levels; however, we note that these extra costs can likely be removed. In particular, the transposes in steps 2(d) and 3 can be merged by viewing the results as a three-dimensional array and transposing the first and last dimensions (this is what the pair of transposes does). The write in step 2(b)i can be merged with the transpose in step 1 by combining the columnwise DFT with the transpose, and applying it $\omega$ times. \subsection{Matrix Multiplication}\label{sec:matmul} In this section, we consider matrix multiplication in the asymmetric read/write setting.
The standard cubic-work sequential algorithm trivially uses $O\left(n^3\right)$ reads and $\Theta\left(n^2\right)$ writes, one for each entry in the output matrix. For the EM model, the blocked algorithm that divides the matrix into sub-matrices of size $\sqrt{M} \times \sqrt{M}$ uses $O\left(n^3/(B\sqrt{M})\right)$ reads~\cite{Blumofe96,Frigo99}. Because we can keep each $\sqrt{M} \times \sqrt{M}$ sub-matrix of the output matrix in memory until it is completely computed, the number of writes is proportional to the number of blocks in the output, which is $O\left(n^2/B\right)$. This gives the following simple result: \begin{theorem} External-memory matrix multiplication can be done in $O\left(n^3/(B\sqrt{M})\right)$ reads and $O\left(n^2/B\right)$ writes. \end{theorem} We now turn to a divide-and-conquer algorithm for matrix multiplication, which is parallel and cache-oblivious. We assume that we can fit $3M$ elements in cache instead of $M$, which only affects our bounds by constant factors. Note that the standard cache-oblivious divide-and-conquer algorithm~\cite{Blumofe96,Frigo99} recurses on four parallel subproblems of size $n/2 \times n/2$, resulting in $\Theta\left(n^3/(B\sqrt{M})\right)$ reads and writes. To reduce the writes by a factor of $\Theta\left(\omega\right)$, we instead recurse on $\omega^2$ parallel subproblems (blocks) of size $n/\omega \times n/\omega$.
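The one-write-per-output-block idea behind the EM theorem above can be sketched as follows (a plain-Python sketch with our own names; `t` plays the role of $\sqrt{M}$ and is assumed to divide $n$):

```python
def blocked_matmul(A, B, t):
    """Blocked matrix multiply.  Each t x t output tile is accumulated
    in full (i.e., it stays in fast memory) and written out exactly
    once, so writes are O(n^2/B) while reads remain O(n^3/(B*t))."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for bi in range(0, n, t):
        for bj in range(0, n, t):
            tile = [[0] * t for _ in range(t)]   # the in-cache accumulator
            for bk in range(0, n, t):            # sum over the block row/column
                for i in range(t):
                    for j in range(t):
                        s = 0
                        for k in range(t):
                            s += A[bi + i][bk + k] * B[bk + k][bj + j]
                        tile[i][j] += s
            for i in range(t):                   # single write-out of the tile
                for j in range(t):
                    C[bi + i][bj + j] = tile[i][j]
    return C
```
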
On each level of recursion, computing each output block of size $n/\omega \times n/\omega$ requires summing over the $\omega$ products of $n/\omega\times n/\omega$ input matrices. We assume the recursive calls writing to the same target block are processed sequentially, so that all writes (and reads) can be made at the leaves of the recursion to their final locations. If we always partitioned a task into $\omega{}\times \omega{}$ subproblems, the base cases could have side length only slightly larger than $\sqrt{M}$, in which case the write cost would match the overall cost of the standard cache-oblivious algorithm. Instead, in the first round, we pick an integer $b$ uniformly at random from 1 to $\lfloor \log_2\omega{}\rfloor$ and recurse on $2^b \times 2^b$ subproblems. After this round, we run the divide-and-conquer algorithm just described with a branching factor of $\omega{}\times \omega{}$. We show that this extra first step saves a factor of $O(\log \omega{})$ in expectation. We now analyze the cost of this algorithm. The base case is when the problem size $n$ is no more than $\omega\sqrt{M}$. At this point, each of its subproblems of size at most $\sqrt{M} \times \sqrt{M}$ writes into an output block of size at most $M$, which fits in cache. Therefore, since we do the products for each output block sequentially, the result can stay in cache and be written out only once, when all $\omega$ products are done. The number of writes is therefore $n^2/B$ in total for $n\le\omega\sqrt{M}$. For reads, all of the $\omega^3$ subproblems might require reading both inputs, so there are $2\omega{} n^2/B$ reads. The non-base recursive calls do not contribute any significant reads or writes since all reading and writing to the arrays is done at the leaves.
This gives us the following recurrence for the number of writes for an $n \times n$ matrix: \[ W\left(n\right) = \begin{cases} n^2/B & \text{if $n \leq \omega\sqrt{M}$} \\ O(1) + {1\over \lfloor \log_2\omega{}\rfloor}\sum_{k=1}^{\lfloor \log_2\omega{}\rfloor}{(2^k)^3W(n/2^k)} & \text{first (random) round}\\ O(1) + \omega^3W\left(n/\omega\right) & \text{otherwise} \end{cases} \] This solves to $W\left(n\right) = O\left(n^3/( B\sqrt{M}\log\omega)\right)$ in expectation, which is a factor of $\log\omega$ less than for the standard EM or cache-oblivious matrix multiply. The number of reads satisfies: \[ R\left(n\right) = \begin{cases} 2 \omega n^2/B & \text{if $n \leq \omega\sqrt{M}$} \\ O(1) + {1\over \lfloor \log_2\omega{}\rfloor}\sum_{k=1}^{\lfloor \log_2\omega{}\rfloor}{(2^k)^3R(n/2^k)} & \text{first (random) round}\\ O(1) + \omega^3R\left(n/\omega\right) & \text{otherwise} \end{cases} \] This solves to $R\left(n\right) = O\left(n^3\omega/(B\sqrt{M}\log \omega)\right)$ in expectation. Because the products contributing to a block are done sequentially, the depth of the algorithm satisfies the recurrence $D(n) = \omega D(n/\omega) + O(1)$ with base case $D(1) = \omega$, which solves to $O(\omega n)$.
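To see where the expected bounds come from (a sketch, in our own notation): unrolling $W(n)=\omega^3\,W(n/\omega)$ down to base cases of side $n_0$, where $\sqrt{M} < n_0 \le \omega\sqrt{M}$, gives \[ W(n) = \left(\frac{n}{n_0}\right)^{3}\frac{n_0^2}{B} = \frac{n^3}{n_0 B}. \] The randomized first round makes $n_0$ (roughly) uniform over $\{2\sqrt{M}, 4\sqrt{M}, \ldots, \omega\sqrt{M}\}$, so \[ \mathrm{E}\!\left[\frac{1}{n_0}\right] = \frac{1}{\lfloor\log_2\omega\rfloor} \sum_{j=1}^{\lfloor\log_2\omega\rfloor} \frac{1}{2^{j}\sqrt{M}} = O\!\left(\frac{1}{\sqrt{M}\,\log\omega}\right), \] and hence $\mathrm{E}[W(n)] = O\left(n^3/(B\sqrt{M}\log\omega)\right)$; the read bound follows in the same way from the $2\omega n_0^2/B$ base-case reads.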
This gives the following theorem: \begin{theorem} Our cache-oblivious matrix multiplication algorithm requires expected $O\mathopen{}\mathclose\bgroup\originalleft(n^3\omega/(B\sqrt{M}\log\omega)\aftergroup\egroup\originalright)$ reads and $O\mathopen{}\mathclose\bgroup\originalleft(n^3/(B\sqrt{M}\log\omega)\aftergroup\egroup\originalright)$ writes, and has $O\mathopen{}\mathclose\bgroup\originalleft(\omega n\aftergroup\egroup\originalright)$ depth. \end{theorem} Comparing with the standard cache-oblivious matrix multiplication algorithm, the overall read and write cost of new algorithm expects to be reduced by a factor of $O(\log\omega{})$. \newcommand{\wcost}{\omega} We show how the low-depth, cache-oblivious sorting algorithm from~\cite{BGS10} can be adapted to the asymmetric case. The original algorithm is based on viewing the input as a $\sqrt{n} \times \sqrt{n}$ array, sorting the rows, partitioning them based on splitters, transposing the partitions, and then sorting the buckets. The original algorithm incurs $O\mathopen{}\mathclose\bgroup\originalleft(\mathopen{}\mathclose\bgroup\originalleft(n/B\aftergroup\egroup\originalright)\log_M\mathopen{}\mathclose\bgroup\originalleft(n\aftergroup\egroup\originalright)\aftergroup\egroup\originalright)$ reads and writes. To reduce the number of writes, our revised version partitions slightly differently and does extra reads to reduce the number of levels of recursion. 
The algorithm does $O\left((n/B) \log_{\omega M}(\omega n)\right)$ writes, $O\left((\omega n/B) \log_{\omega M}(\omega n)\right)$ reads, and has depth $O\left(\wcost \log^2 (n/\omega)\right)$ w.h.p. The algorithm uses matrix transpose, prefix sums, and mergesort as subroutines. Efficient parallel and cache-oblivious versions of these algorithms are described in~\cite{BGS10}. For an input of size $n$, prefix sums has depth $O\left(\wcost \log n\right)$ and requires $O\left(n/B\right)$ reads and writes; merging two arrays of lengths $n$ and $m$ has depth $O\left(\wcost\log(n+m)\right)$ and requires $O\left((n+m)/B\right)$ reads and writes; and mergesort has depth $O\left(\wcost \log^2 n\right)$ and requires $O\left((n/B)\log_2(n/M)\right)$ reads and writes.
Transposing an $n\times m$ matrix has depth $O\left(\wcost \log(n+m)\right)$ and requires $O\left(nm/B\right)$ reads and writes. Our cache-oblivious sorting algorithm works recursively, with a base case of $n \leq M$, at which point any parallel sorting algorithm with $O\left(n\log n\right)$ reads/writes and $O\left(\wcost \log n\right)$ depth can be applied (e.g.,~\cite{Cole1988}). Figure~\ref{fig:cache-obli-sort} illustrates the steps of the algorithm. Given an input array of size $n$, the algorithm first splits it into $\sqrt{n\omega}$ subarrays of size $\sqrt{n/\omega}$ and recursively sorts each of the subarrays. This corresponds to step (a) in Figure~\ref{fig:cache-obli-sort}. The algorithm then determines the splitters by sampling: after the subarrays are sorted, every $(\log n)$'th element of each row is sampled, and these $n/\log n$ samples are sorted using a cache-oblivious mergesort. Then $\sqrt{n/ \omega}-1$ evenly-distributed splitters are picked from the sorted samples to create $\sqrt{n/\omega}$ buckets. The algorithm then determines the boundaries of the buckets in each subarray, which can be implemented by merging the splitters with each row, requiring $O\left(\wcost\log n\right)$ depth and $O\left(n/B\right)$ writes overall. This is shown as step (b) in Figure~\ref{fig:cache-obli-sort}.
Notice that on average each bucket has size $O\left(\sqrt{n\omega}\right)$, and the largest bucket has no more than $2\sqrt{n\omega}\log n$ elements. After the subarrays are split into $\sqrt{n/\omega}$ buckets, prefix sums and a matrix transpose can be used to place all keys destined for the same bucket together in contiguous arrays, as illustrated in step (c) of Figure~\ref{fig:cache-obli-sort}. This process requires $O\left(n/B\right)$ reads and writes and $O\left(\wcost \log n\right)$ depth. The next step is new to the asymmetric algorithm and is the part that requires extra reads. As illustrated in Figure~\ref{fig:cache-obli-sort}~(d), $\omega-1$ pivots are chosen from each bucket to generate $\omega$ sub-buckets. We take $\max\{\omega,\sqrt{\omega n}/\log n\}$ samples, sort them with a mergesort, and pick $\omega-1$ evenly-spaced pivots from the sorted sample. The size of each sub-bucket can then be shown, using Chernoff bounds, to be $O\left(\sqrt{n/\omega}\log n\right)$ w.h.p. We then scan each bucket for $\omega$ rounds to partition all of its elements into the $\omega$ sub-buckets, and sort each sub-bucket recursively.
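As a concreteness check, the structure of steps (a)–(d) can be sketched as a simplified sequential sample sort. This is an illustrative sketch only: it ignores parallelism, I/O blocking, and the transpose-based bucket gathering, and the parameters `omega` and `base` are arbitrary choices of ours, not the paper's.

```python
import bisect
import math

def sample_sort(a, omega=4, base=64):
    """Simplified, sequential sketch of the bucketed sample sort:
    split into ~sqrt(n*omega) sorted rows, choose splitters from a
    sorted sample, gather range-partitioned buckets, and recurse."""
    n = len(a)
    if n <= base:
        return sorted(a)
    rows = max(1, math.isqrt(n * omega))          # ~sqrt(n*omega) subarrays
    row_len = -(-n // rows)                       # ceil(n / rows)
    sorted_rows = [sorted(a[i:i + row_len]) for i in range(0, n, row_len)]
    step = max(1, int(math.log2(n)))              # sample every (log n)-th element
    sample = sorted(x for r in sorted_rows for x in r[::step])
    nbuckets = max(2, math.isqrt(n // omega))     # ~sqrt(n/omega) buckets
    splitters = [sample[(i * len(sample)) // nbuckets] for i in range(1, nbuckets)]
    buckets = [[] for _ in range(nbuckets)]
    for r in sorted_rows:
        # bucket boundaries within a sorted row via binary search
        bounds = [0] + [bisect.bisect_left(r, s) for s in splitters] + [len(r)]
        for b in range(nbuckets):
            buckets[b].extend(r[bounds[b]:bounds[b + 1]])
    out = []
    for b in buckets:
        if len(b) == n:                           # degenerate split (e.g. all keys equal)
            out.extend(sorted(b))
        else:
            out.extend(sample_sort(b, omega, base))
    return out
```

Because the buckets are range-partitioned by the splitters, concatenating the recursively sorted buckets yields a sorted array; the guard on a degenerate split keeps the recursion from looping on constant inputs.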
\begin{theorem}\label{thm:cache-obli-sort} Our cache-oblivious sorting algorithm requires $O\left((\omega n/B)\log_{\omega M}{(\omega n)}\right)$ reads, $O\left((n/B)\log_{\omega M}{(\omega n)}\right)$ writes, and $O\left(\wcost \log^2 (n/\omega)\right)$ depth w.h.p. \end{theorem} \begin{proof} All the subroutines except for the recursive calls do $O\left(n/B\right)$ writes. The final partitioning step that places elements into sub-buckets takes $O\left(\omega n/B\right)$ reads, and the other subroutines require fewer reads. The overall depth is dominated by the mergesort used to find the $\sqrt{n/\omega}$ splitters, requiring $O\left(\omega\log^2 (n/\omega)\right)$ depth per level of recursion. Hence, the recurrence relations (w.h.p.)
for read I/Os ($R$), write I/Os ($W$), and depth ($D$) are: $$R(n)=O\left({\omega n\over B}\right)+\sqrt{\omega n}\cdot R\left(\sqrt{n\over \omega}\right)+\sum_{i=1}^{\sqrt{\omega n}}R\left(n_i\right)$$ $$W(n)=O\left({n\over B}\right)+\sqrt{\omega n}\cdot W\left(\sqrt{n\over \omega}\right)+\sum_{i=1}^{\sqrt{\omega n}}W\left(n_i\right)$$ $$D(n)=O\left(\omega\log^2 {n\over \omega}\right)+\max_{i=1,\ldots,\sqrt{\omega n}}\{D\left(n_i\right)\}$$ where $n_i$ is the size of the $i$'th sub-bucket, with $n_i= O\left(\sqrt{n/\omega}\log n\right)$ and $\sum_i{n_i}=n$. The base case for the I/O complexity is $R\left(M\right)=W\left(M\right)=O\left(M/B\right)$. Solving these recurrences proves the theorem. \end{proof} We now consider a parallel cache-oblivious algorithm for computing the Discrete Fourier Transform (DFT). The algorithm is based on the Cooley--Tukey FFT algorithm~\cite{FFT}, and our description follows that of~\cite{Frigo99}. We assume that $n$ at each level of recursion and $\omega$ are powers of 2.
The standard cache-oblivious divide-and-conquer algorithm~\cite{Frigo99} views the input as a $\sqrt{n} \times \sqrt{n}$ matrix, and incurs $O\left((n/B)\log_M(n)\right)$ reads and writes. To reduce the number of writes, we modify the algorithm as follows: \begin{enumerate}\itemsep=0in \item View the input of size $n$ as an $\omega\sqrt{n/\omega} \times \sqrt{n/\omega}$ matrix in row-major order. Transpose the matrix. \item Compute a DFT on each row of the matrix as follows: \begin{enumerate} \item View the row as an $\omega \times \sqrt{n/\omega}$ matrix. \item For each row $i$: \begin{enumerate} \item Calculate the values of the column DFTs for row $i$ using the brute-force method. This requires $\omega$ reads (one per row) and $1$ write (to row $i$) per value. \item Recursively compute the FFT for the row. \end{enumerate} \item Incorporate the twiddle factors. \item Transpose the $\omega \times \sqrt{n/\omega}$ matrix. \end{enumerate} \item Transpose the matrix. \item Recursively compute an FFT on each row of length $\sqrt{n/\omega}$. \item Transpose to the final order. \end{enumerate} The difference from the standard cache-oblivious algorithm is the extra level of nesting in step 2, and the use of a work-inefficient DFT over $\omega$ elements in step 2(b). The transposes in steps 1, 2(d), and 3 can be done with $O\left(n/B\right)$ reads/writes and $O\left(\omega\log n\right)$ depth. The brute-force calculations in step 2(b)i require a total of $\omega n$ reads (and arithmetic operations) and $n$ writes. This is where we spend an extra factor of $\omega$ in reads in order to reduce the recursion depth.
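The decomposition underlying these steps is the classic matrix ("four-step") view of Cooley–Tukey, which the sketch below illustrates at small scale. This is an illustrative sketch only: it uses a brute-force DFT for the sub-transforms rather than recursing, and ignores blocking and parallelism; here $n_1$ plays the role that $\omega$ plays above.

```python
import cmath

def naive_dft(x):
    """Direct O(n^2) DFT, used both as the sub-transform and as a reference."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def four_step_dft(x, n1, n2):
    """View x (length n1*n2) as an n1 x n2 matrix in row-major order:
    DFT the columns, apply twiddle factors, DFT the rows, and read the
    result out transposed."""
    n = n1 * n2
    assert len(x) == n
    mat = [x[r * n2:(r + 1) * n2] for r in range(n1)]
    # DFT of each column (the stride-n2 subsequences of x)
    cols = [naive_dft([mat[r][c] for r in range(n1)]) for c in range(n2)]
    # twiddle: entry (k1, c) is scaled by exp(-2*pi*i*k1*c/n)
    tw = [[cols[c][k1] * cmath.exp(-2j * cmath.pi * k1 * c / n)
           for c in range(n2)] for k1 in range(n1)]
    rows = [naive_dft(tw[k1]) for k1 in range(n1)]     # DFT of each row
    # output index k = k2*n1 + k1 (transposed read-out)
    return [rows[k % n1][k // n1] for k in range(n)]
```

Writing the DFT index $j = j_1 n_2 + j_2$ and output index $k = k_2 n_1 + k_1$ and expanding $\omega^{jk}$ shows that the column DFTs, the twiddle factor $e^{-2\pi i\, j_2 k_1/n}$, and the row DFTs together compute the full transform.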
The twiddle factors can all be calculated with $O(n)$ reads and writes. There are a total of $2 \omega \sqrt{n/\omega}$ recursive calls on problems of size $\sqrt{n/\omega}$. Our base case is of size $M$, which uses $M/B$ reads and writes. The number of reads therefore satisfies the following recurrence: \[ R(n) = \begin{cases} O\left(n/B\right) & \text{if $n \le M$} \\ 2\omega\sqrt{n/\omega}\;R\left(\sqrt{n/\omega}\right) + O\left(\omega{}n/B\right) & \text{otherwise} \end{cases} \] which solves to $R(n) = O\left((\omega n/B)\log_{\omega M}(\omega n)\right)$, and the number of writes satisfies \[ W(n) = \begin{cases} O\left(n/B\right) & \text{if $n \le M$} \\ 2\omega\sqrt{n/\omega}\; W\left(\sqrt{n/\omega}\right) + O\left(n/B\right) & \text{otherwise} \end{cases} \] which solves to $W(n) = O\left((n/B)\log_{\omega M}(\omega n)\right)$.
Since we can process all the rows in parallel in each step, and the brute-force calculation in parallel, the only important dependence is that we have to do step 2 before step 5. This gives a recurrence $D(n) = 2D\left(\sqrt{n/\omega}\right) + O\left(\omega\log n\right)$ for the depth, which solves to $O\left(\omega\log n\log\log n\right)$. We note that the algorithm as described requires an extra transpose and an extra write in step 2(b)i relative to the standard version. This might negate any advantage from reducing the number of levels; however, these extra operations can likely be removed. In particular, the transposes in steps 2(d) and 3 can be merged by viewing the results as a three-dimensional array and transposing the first and last dimensions (which is what the pair of transposes accomplishes). The write in step 2(b)i can be merged with the transpose in step 1 by combining the columnwise FFT with the transpose and applying it $\omega$ times. In this section, we consider matrix multiplication in the asymmetric read/write setting. The standard cubic-work sequential algorithm trivially uses $O\left(n^3\right)$ reads and $\Theta\left(n^2\right)$ writes, one for each entry in the output matrix. In the EM model, the blocked algorithm that divides the matrix into sub-matrices of size $\sqrt{M} \times \sqrt{M}$ uses $O\left(n^3/(B\sqrt{M})\right)$ reads~\cite{Blumofe96,Frigo99}.
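The blocked EM algorithm just mentioned can be sketched as follows. This is an illustrative sketch only: `s` stands in for $\sqrt{M}$, and the `acc` buffer models the output block that is held in cache until it is complete, so each output element is written back exactly once.

```python
def blocked_matmul(A, B, s):
    """Blocked matrix multiply with s x s blocks (s ~ sqrt(M)).
    Each output block is accumulated fully before being written back,
    so writes are proportional to the number of output blocks."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for bi in range(0, n, s):
        hi_i = min(bi + s, n)
        for bj in range(0, n, s):
            hi_j = min(bj + s, n)
            # accumulate output block (bi, bj) in the "cache" buffer
            acc = [[0.0] * (hi_j - bj) for _ in range(hi_i - bi)]
            for bk in range(0, n, s):                  # sum over the block row/column
                hi_k = min(bk + s, n)
                for i in range(bi, hi_i):
                    for k in range(bk, hi_k):
                        aik = A[i][k]
                        for j in range(bj, hi_j):
                            acc[i - bi][j - bj] += aik * B[k][j]
            for i in range(bi, hi_i):                  # single write-back of the block
                C[i][bj:hi_j] = acc[i - bi]
    return C
```

Only the final loop touches `C`, which mirrors the $O(n^2/B)$ write bound: one block-sized write-back per output block rather than one per partial product.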
Because we can keep each $\sqrt{M} \times \sqrt{M}$ sub-matrix of the output matrix in memory until it is completely computed, the number of writes is proportional to the number of blocks in the output, which is $O\left(n^2/B\right)$. This gives the following simple result: \begin{theorem} External-memory matrix multiplication can be done in $O\left(n^3/(B\sqrt{M})\right)$ reads and $O\left(n^2/B\right)$ writes. \end{theorem} We now turn to a divide-and-conquer algorithm for matrix multiplication, which is parallel and cache-oblivious. We assume that we can fit $3M$ elements in cache instead of $M$, which only affects our bounds by constant factors. Note that the standard cache-oblivious divide-and-conquer algorithm~\cite{Blumofe96,Frigo99} recurses on four parallel subproblems of size $n/2 \times n/2$, resulting in $\Theta\left(n^3/(B\sqrt{M})\right)$ reads and writes. To reduce the writes by a factor of $\Theta\left(\omega\right)$, we instead recurse on $\omega^2$ parallel subproblems (blocks) of size $n/\omega \times n/\omega$. On each level of recursion, computing each output block of size $n/\omega \times n/\omega$ requires summing the $\omega$ products of $n/\omega\times n/\omega$ input matrices. We assume the recursive calls writing to the same target block are processed sequentially, so that all writes (and reads) can be made at the leaves of the recursion to their final locations. Partitioning every task into $\omega{}\times \omega{}$ subproblems, however, may lead to up to $n\omega{}/\sqrt{M} \times n\omega{}/\sqrt{M}$ base cases, and thus match the overall work of the standard cache-oblivious algorithm.
To avoid this, in the first round we pick an integer $b$ uniformly at random from 1 to $\lfloor \log_2\omega{}\rfloor$ and recurse on $2^b \times 2^b$ subproblems. After this round, we run the divide-and-conquer algorithm just described, with branching factor $\omega{}\times \omega{}$. We show that this extra first step saves a factor of $O(\log \omega{})$ in expectation. We now analyze the cost of this algorithm. The base case is when the problem size $n$ is no more than $\omega\sqrt{M}$. At this point each of its subproblems of size no more than $\sqrt{M} \times \sqrt{M}$ is writing into an output block of size at most $M$, which fits in cache. Therefore, since we do the products for the output blocks sequentially, the result can stay in cache and only be written out once all $\omega$ products are done. The number of writes is therefore $n^2/B$ in total for $n\le\omega\sqrt{M}$. For reads, all of the $\omega^3$ subproblems might require reading both inputs, so there are $2\omega{} n^2/B$ reads. The non-base recursive calls do not contribute any significant reads or writes, since all reading and writing to the arrays is done at the leaves. This gives us the following recurrence for the number of writes for an $n \times n$ matrix: \[ W\left(n\right) = \begin{cases} n^2/B & \text{if $n \leq \omega\sqrt{M}$} \\ O(1) + {1\over \lfloor \log_2\omega{}\rfloor}\sum_{k=1}^{\lfloor \log_2\omega{}\rfloor}{(2^k)^3W(n/2^k)} & \text{in the first round}\\ O(1) + \omega^3W\left(n/\omega\right) & \text{otherwise} \end{cases} \] This solves to $W\left(n\right) = O\left(n^3/( B\sqrt{M}\log\omega)\right)$ in expectation, which is a factor of $\log\omega$ less than for the standard EM or cache-oblivious matrix multiply.
The number of reads satisfies: \[ R\left(n\right) = \begin{cases} 2 \omega n^2/B & \text{if $n \leq \omega\sqrt{M}$} \\ O(1) + {1\over \lfloor \log_2\omega{}\rfloor}\sum_{k=1}^{\lfloor \log_2\omega{}\rfloor}{(2^k)^3R(n/2^k)} & \text{in the first round}\\ O(1) + \omega^3R\left(n/\omega\right) & \text{otherwise} \end{cases} \] This solves to $R\left(n\right) = O\left(n^3\omega/(B\sqrt{M}\log \omega)\right)$ in expectation. Because the products contributing to a block are done sequentially, the depth of the algorithm satisfies the recurrence $D(n) = \omega D(n/\omega) + O(1)$ with base case $D(1) = \omega$, which solves to $O(\omega n)$. This gives the following theorem: \begin{theorem} Our cache-oblivious matrix multiplication algorithm requires expected $O\left(n^3\omega/(B\sqrt{M}\log\omega)\right)$ reads and $O\left(n^3/(B\sqrt{M}\log\omega)\right)$ writes, and has $O\left(\omega n\right)$ depth. \end{theorem} Compared with the standard cache-oblivious matrix multiplication algorithm, the overall read and write cost of the new algorithm is reduced by an expected factor of $O(\log\omega{})$. \section{Conclusion}\label{sec:conclusion} Motivated by the high cost of writing relative to reading in emerging non-volatile memory technologies, we have considered a variety of models accounting for read-write asymmetry, and proposed and analyzed a variety of sorting algorithms in these models.
For the asymmetric RAM and PRAM models, we have shown how to reduce the number of writes from $O(n \log n)$ to $O(n)$ without asymptotically increasing the other costs (reads, parallel depth). For the asymmetric external memory models (including the cache-oblivious model), the reductions in writes are more modest, often increasing the base of a logarithm, and come at the cost of asymptotically more reads. However, these algorithms might still have practical benefit. We also presented a cache-oblivious matrix multiply that asymptotically reduces the overall cost by a factor of $O(\log\omega)$ in expectation. Future work includes proving lower bounds for the asymmetric external memory models, devising write-efficient algorithms for additional problems, and performing experimental studies. \section*{Acknowledgments} This research was supported in part by NSF grants CCF-1218188, CCF-1314633, and CCF-1314590, and by the Intel Science and Technology Center for Cloud Computing. \balance \appendix \section{Appropriate Range of the Branching Factor $k$} \label{app:branchfactor} In our new Asymmetric External Memory mergesort, we want to pick an appropriate branching factor $k$ that ensures better performance than the classic external-memory mergesort.
Based on Theorem~\ref{lem:mergesort}, the I/O complexity of Algorithm~\ref{alg:mergesort} is $$R(n)+\omega{}W(n)=(\omega{}+k+1)\left\lceil \frac{n}{B}\right\rceil\left\lceil\log_{\frac{k{}M}{B}}\left(\frac{n}{B}\right)\right\rceil$$ The I/O complexity of the classic mergesort is $R(n)+\omega{}W(n)=(\omega{}+1)\left\lceil \frac{n}{B}\right\rceil\left\lceil\log_{\frac{M}{B}}\left(\frac{n}{B}\right)\right\rceil$. Assuming $n$ is large enough that we can ignore the ceiling on the number of levels of merging, the following inequality should hold: $$(\omega{}+k+1)\left\lceil \frac{n}{B}\right\rceil\log_{\frac{k{}M}{B}}\left(\frac{n}{B}\right)<(\omega{}+1)\left\lceil \frac{n}{B}\right\rceil\log_{\frac{M}{B}}\left(\frac{n}{B}\right)$$ which solves to $\displaystyle{k \over \log k}<{\omega+1\over \log{M \over B}}$, or $\displaystyle{k \over \log k}<{\omega\over \log{M \over B}}$ if we assume $\omega \gg 1$. If we plug in the parameters $\omega{}$, $M$, and $B$ of real-world hardware, any integer $k$ such that $1\leq k\leq 0.3\omega{}$ satisfies the inequality and decreases the I/O complexity of the mergesort. \end{document}
Compound Poisson process

A compound Poisson process is a continuous-time stochastic process with jumps. The jumps arrive randomly according to a Poisson process, and the size of the jumps is also random, with a specified probability distribution. To be precise, a compound Poisson process, parameterised by a rate $\lambda >0$ and jump size distribution G, is a process $\{\,Y(t):t\geq 0\,\}$ given by $Y(t)=\sum _{i=1}^{N(t)}D_{i}$ where $\{\,N(t):t\geq 0\,\}$ is the counting variable of a Poisson process with rate $\lambda$, and $\{\,D_{i}:i\geq 1\,\}$ are independent and identically distributed random variables, with distribution function G, which are also independent of $\{\,N(t):t\geq 0\,\}$. When the $D_{i}$ are non-negative integer-valued random variables, the compound Poisson process is known as a stuttering Poisson process.

Properties of the compound Poisson process

The expected value of a compound Poisson process can be calculated using a result known as Wald's equation as: $\operatorname {E} (Y(t))=\operatorname {E} (D_{1}+\cdots +D_{N(t)})=\operatorname {E} (N(t))\operatorname {E} (D_{1})=\operatorname {E} (N(t))\operatorname {E} (D)=\lambda t\operatorname {E} (D).$

Making similar use of the law of total variance, the variance can be calculated as: ${\begin{aligned}\operatorname {var} (Y(t))&=\operatorname {E} (\operatorname {var} (Y(t)\mid N(t)))+\operatorname {var} (\operatorname {E} (Y(t)\mid N(t)))\\[5pt]&=\operatorname {E} (N(t)\operatorname {var} (D))+\operatorname {var} (N(t)\operatorname {E} (D))\\[5pt]&=\operatorname {var} (D)\operatorname {E} (N(t))+\operatorname {E} (D)^{2}\operatorname {var} (N(t))\\[5pt]&=\operatorname {var} (D)\lambda t+\operatorname {E} (D)^{2}\lambda t\\[5pt]&=\lambda t(\operatorname {var} (D)+\operatorname {E} (D)^{2})\\[5pt]&=\lambda t\operatorname {E} (D^{2}).\end{aligned}}$

Lastly, using the law of total probability, the moment generating function can be given as follows: $\Pr(Y(t)=i)=\sum _{n}\Pr(Y(t)=i\mid N(t)=n)\Pr(N(t)=n)$ ${\begin{aligned}\operatorname {E} (e^{sY})&=\sum _{i}e^{si}\Pr(Y(t)=i)\\[5pt]&=\sum _{i}e^{si}\sum _{n}\Pr(Y(t)=i\mid N(t)=n)\Pr(N(t)=n)\\[5pt]&=\sum _{n}\Pr(N(t)=n)\sum _{i}e^{si}\Pr(Y(t)=i\mid N(t)=n)\\[5pt]&=\sum _{n}\Pr(N(t)=n)\sum _{i}e^{si}\Pr(D_{1}+D_{2}+\cdots +D_{n}=i)\\[5pt]&=\sum _{n}\Pr(N(t)=n)M_{D}(s)^{n}\\[5pt]&=\sum _{n}\Pr(N(t)=n)e^{n\ln(M_{D}(s))}\\[5pt]&=M_{N(t)}(\ln(M_{D}(s)))\\[5pt]&=e^{\lambda t\left(M_{D}(s)-1\right)}.\end{aligned}}$

Exponentiation of measures

Let N, Y, and D be as above. Let $\mu$ be the probability measure according to which D is distributed, i.e. $\mu (A)=\Pr(D\in A)$. Let $\delta _{0}$ be the trivial probability distribution putting all of the mass at zero. Then the probability distribution of Y(t) is the measure $\exp(\lambda t(\mu -\delta _{0}))$ where the exponential $\exp(\nu)$ of a finite measure $\nu$ on Borel subsets of the real line is defined by $\exp(\nu )=\sum _{n=0}^{\infty }{\nu ^{*n} \over n!}$ and $\nu ^{*n}=\underbrace {\nu *\cdots *\nu } _{n{\text{ factors}}}$ is the $n$-fold convolution of $\nu$, and the series converges weakly.
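The mean and variance formulas above can be checked numerically with a small Monte Carlo simulation. This is an illustrative sketch: the Poisson sampler uses Knuth's multiplication method, which is adequate for small $\lambda t$, and the function names are ours.

```python
import math
import random

def poisson_sample(mean, rng):
    """Knuth's method: count uniforms until their running product drops below e^-mean."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def compound_poisson_mean_var(lam, t, jump, n_paths, seed=0):
    """Monte Carlo estimates of E[Y(t)] and Var[Y(t)] for
    Y(t) = sum_{i=1}^{N(t)} D_i, with N(t) ~ Poisson(lam*t) and D_i ~ jump(rng)."""
    rng = random.Random(seed)
    ys = []
    for _ in range(n_paths):
        n = poisson_sample(lam * t, rng)
        ys.append(sum(jump(rng) for _ in range(n)))
    m = sum(ys) / n_paths
    v = sum((y - m) ** 2 for y in ys) / (n_paths - 1)
    return m, v
```

For exponential jumps with mean 1 and $\lambda t = 4$, the formulas predict $\operatorname{E}(Y(t)) = \lambda t \operatorname{E}(D) = 4$ and $\operatorname{var}(Y(t)) = \lambda t \operatorname{E}(D^2) = 8$, which the simulated estimates should approach.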
See also

• Poisson process
• Poisson distribution
• Compound Poisson distribution
• Non-homogeneous Poisson process
• Campbell's formula for the moment generating function of a compound Poisson process
Wikipedia
Marek and his schoolmates have just finished their studies at the university. They wanted to celebrate it with a game of paintball. After an hour of playing a very strange thing happened – everyone had exactly one bullet left. Marek, being a very curious person, wanted to know whether it's possible that everyone will be hit exactly once provided nobody moves.

You are given a description of the situation during a paintball game when every player has only one bullet. The description of the game consists of pairs of players who can see each other. If a player can see another player, he can fire at him. Your task is to find a target for each player such that everyone will be hit.

The first line of input contains two space separated integers $N$ and $M$, satisfying $2\leq N\leq 1\, 000$ and $0\leq M\leq 5\, 000$, where $N$ is the number of players. Players are numbered $1, 2, \ldots , N$. $M$ lines follow, each line containing two space separated integers $A$ and $B$ ($1\leq A < B\leq N$), denoting that players $A$ and $B$ can see each other. Each pair of players appears at most once in the input.

If there is no assignment of targets such that everyone will be hit, output Impossible. Otherwise output $N$ lines. The $i$-th line should contain the number of the target of the $i$-th player. If there is more than one solution, output any one.
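The task reduces to a perfect matching between shooters and victims: shooter $i$ may target $j$ exactly when the two can see each other, and everyone is hit exactly once iff such a matching covers all $N$ players. A minimal sketch using Kuhn's augmenting-path algorithm (a solution sketch, not the official reference solution; function names are illustrative):

```python
import sys

def find_targets(n, edges):
    """Assign each player a target so that everyone is hit exactly once.

    Bipartite perfect matching between shooters and victims: shooter u may
    be matched to victim v iff u and v can see each other. Returns a
    1-based list of targets, or None if no assignment exists.
    """
    sys.setrecursionlimit(20000)
    adj = [[] for _ in range(n + 1)]
    for a, b in edges:
        adj[a].append(b)  # a can fire at b ...
        adj[b].append(a)  # ... and b can fire at a

    shooter_of = [0] * (n + 1)  # shooter_of[v] = shooter matched to victim v

    def augment(u, seen):
        # Kuhn's augmenting-path step: try to find a victim for shooter u,
        # possibly re-matching previously assigned shooters.
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if shooter_of[v] == 0 or augment(shooter_of[v], seen):
                    shooter_of[v] = u
                    return True
        return False

    for u in range(1, n + 1):
        if not augment(u, [False] * (n + 1)):
            return None  # corresponds to printing "Impossible"

    target = [0] * (n + 1)
    for v in range(1, n + 1):
        target[shooter_of[v]] = v
    return target[1:]
```

With $N\leq 1\,000$ and $M\leq 5\,000$ this $O(N\cdot M)$ approach is comfortably fast.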
\begin{document} \title{\LARGE \bf ROMAX: Certifiably Robust Deep Multiagent Reinforcement Learning via Convex Relaxation } \thispagestyle{empty} \pagestyle{empty} \begin{abstract} In a multirobot system, a number of cyber-physical attacks (e.g., communication hijack, observation perturbations) can challenge the robustness of agents. This robustness issue worsens in multiagent reinforcement learning because the environment is non-stationary: simultaneously learning agents keep changing their policies, which in turn alters the transition and reward functions. In this paper, we propose a minimax MARL approach to infer the worst-case policy update of other agents. As the minimax formulation is computationally intractable to solve, we apply the convex relaxation of neural networks to solve the inner minimization problem. Such convex relaxation enables robustness in interacting with peer agents that may have significantly different behaviors and also achieves a certified bound of the original optimization problem. We evaluate our approach on multiple mixed cooperative-competitive tasks and show that our method outperforms previous state-of-the-art approaches on this topic. \end{abstract} \section{Introduction}\label{sectioin:introduction} Multirobot systems have recently attracted much attention in robotics. Compared to a single-robot approach, a multirobot system provides several unique benefits, including 1) improved efficiency, since a sophisticated problem can be decomposed into simpler sub-problems, distributed across robots, and then solved simultaneously, and 2) an improved mission success rate, because a single robot failure can be addressed by another teammate~\cite{mellouli2007reorganization}. These advantages have resulted in emerging multirobot applications, such as formation control~\cite{alonso2017multi}, cooperative manipulation~\cite{culbertson2021decentralized}, and human-swarm interaction~\cite{vasile2011integrating}. 
Multiagent reinforcement learning (MARL) provides a principled framework for solving problems in which multiple robots interact with one another in a shared environment. However, there remain difficulties in learning intelligent multiagent policies. Amongst these, instability in policy learning is particularly problematic in that agents generally show poor performance when interacting with unseen agents \cite{al-shedivat2018continuous,kim21metamapg}. While there are approaches that stabilize policy learning in MARL (e.g., centralized training and decentralized execution frameworks~\cite{lowe2017multi,foerster2018counterfactual}), agents generally overfit other agents' policies during training, resulting in failure when playing against new strategies not encountered during training. This robustness issue becomes more severe in a competitive setting, where an opponent can intentionally apply cyber-physical attacks (e.g., communication hijack, observation perturbations), fully exploit an agent's brittle policy, and thus dominate a game~\cite{gleave2019adversarial}. \textbf{Our contributions.} To address the robustness problem, we propose a new framework, called \textbf{ROMAX}: \textbf{RO}bust \textbf{MA}RL via convex rela\textbf{X}ation. While the minimax optimization enables learning of robust multiagent policies~\cite{littman1994markov}, solving a general nonconvex-nonconcave minimax optimization problem is computationally intractable~\cite{daskalakis2020complexity}. Assuming that each agent's policy is parameterized by deep neural networks, we develop a computationally efficient approach that can approximately solve the minimax optimization and infer the worst-case actions of other agents via the convex relaxation of neural networks. We note that this convex relaxation has an important benefit in that it can explore the approximate \textit{globally} worst situation while achieving \textit{certified} robustness from the guaranteed bound of the relaxation. 
We empirically evaluate our algorithm on multiple mixed cooperative-competitive tasks and show that ROMAX outperforms baselines by a significant margin, demonstrating the necessity of computing the worst-case scenarios to improve robustness in MARL. \section{Related Works}\label{section:relate_works} \textbf{Centralized training with decentralized execution.} The standard approach for addressing non-stationarity in MARL is to consider information about other agents and reason about the effects of joint actions~\cite{hernandezLealK17survey}. Recent studies on the centralized training with decentralized execution framework, for instance, account for the behaviors of others through a centralized critic~\cite{lowe2017multi,foerster2018counterfactual,yang18mean,wen2018probabilistic,kim20hmat}. While this body of work partially alleviates non-stationarity, converged policies generally overfit the current behaviors of other agents and thus show poor performance when interacting with agents with new behaviors. In contrast, our agents learn robust policies based on minimax optimization by applying convex relaxation. \textbf{Robust MARL.} Our framework is closely related to prior works that apply minimax optimization in multiagent learning settings~\cite{perolat17mean,graumoya18}. Minimax provides a game-theoretical concept that encourages an agent to learn a robust policy by maximizing its performance in a worst-case scenario \cite{littman1994markov,osborne04minimax}. One of the notable studies in this category is \cite{li2019robust}, which computes the worst-case perturbation by taking a single gradient descent step assuming that other agents act adversarially. However, the single-step gradient approximation can only explore the locally worst situation and thus can still result in unstable learning. Our approach aims to address this drawback by computing the approximate globally worst situation based on convex relaxation. 
The work by \cite{lutjens2020certified} applies a similar linear relaxation technique to a single-agent robust RL problem to certify the robustness under uncertainties from the environment. However, in our multiagent settings, the robustness is more challenging to certify due to the concurrent policy learning amongst multiple agents. \textbf{Ensemble training in MARL.} Another relevant approach to learning a robust policy is ensemble training, where each agent interacts with a group of agents instead of a particular agent only \cite{lowe2017multi,Shen_How_2021,Schrittwieser2020MasteringAG}. For example, the population-based training technique, which was originally proposed to find a set of hyperparameters for optimizing a neural network \cite{Jaderberg17pbt}, was applied in MARL by evolving a population of agents \cite{Jaderberg_2019}. This approach showed robust and superhuman-level performance in a competitive game. The literature on self-play, in which an agent plays against random older versions of itself to improve training stability and robustness, can also be classified into this category \cite{bansal2018emergent}. However, maintaining and/or evolving a population is often computationally heavy. Additionally, these methods do not employ minimax optimization, so agents may not be able to cope well with the worst-case scenario. \textbf{Learning aware MARL.} Our framework is also related to prior works that consider the learning of other agents in the environment to address non-stationarity. These works include~\cite{zhang10lookahead}, which attempted to discover the best response adaptation to the anticipated future policy of other agents. Our work is also related to \cite{foerster17lola,foerster2018dice} that shape the learning process of others. 
Another relevant idea explored by~\cite{letcher2018stable} is to interpolate between the frameworks of~\cite{zhang10lookahead} and~\cite{foerster17lola} in a way that guarantees convergence while influencing the opponent's future policy. Recently, \cite{kim21metamapg} addresses non-stationarity by considering both an agent's own non-stationary policy dynamics and the non-stationary policy dynamics of other agents within a meta-learning objective. While these approaches alleviate non-stationarity by considering the others' learning, they do not solve the minimax objective and cannot guarantee robustness when playing against a new opponent. This weakness can be exploited by a carefully trained adversary agent~\cite{gleave2019adversarial}. \textbf{Robustness verification and neural network relaxation.} To verify the robustness of neural networks, it is important to compute the lower and upper bounds of the output neurons under input perturbations. In supervised learning settings, for example, the margin between predicting the ground-truth class and other classes indicates the robustness of neural networks (i.e., measuring the chance of misclassification). However, due to the nonconvexity in neural networks, the work by \cite{katz2017reluplex} proved that finding the true range of a neural network's output is a nonconvex, NP-complete problem. To address this issue, convex relaxation methods have been proposed to efficiently compute the outer approximation (a more conservative estimate) of a neural network's output range. Many prior works are based on the linear relaxation of the nonlinear units in neural networks: FastLin~\cite{weng2018towards}, DeepZ~\cite{singh2018fast}, Neurify~\cite{wang2018efficient}, DeepPoly~\cite{singh2019boosting}, and CROWN~\cite{zhang2018efficient}. There are also other approaches based on semidefinite relaxation~\cite{raghunathan2018semidefinite,dvijotham2020efficient}, which admit tighter bounds but are more computationally expensive. 
See~\cite{salman2019convex} for in-depth surveys on this topic. \section{Background}\label{section:Preliminary} \subsection{Markov game} Interactions between $n$ agents can be represented by a partially observable Markov game~\citep{littman1994markov}, defined as a tuple $\langle \mathcal{I}, \mathcal{S},\{\mathcal{O}_i\}_{i\in \mathcal{I}}, \{\mathcal{A}_i\}_{i\in \mathcal{I}}, \mathcal{T}, \{\mathcal{R}_i\}_{i\in \mathcal{I}}, \gamma \rangle$; $\mathcal{I}\!=\![1,...,n]$ is a set of $n$ agents, $\mathcal{S}$ is the state space, $\{\mathcal{O}_i\}_{i\in \mathcal{I}}$ is the set of observation spaces, $\{\mathcal{A}_i\}_{i\in \mathcal{I}}$ is the set of action spaces, $\mathcal{T}$ is the state transition function, $\{\mathcal{R}_{i}\}_{i\in\mathcal{I}}$ is the set of reward functions, and $\gamma$ is the discount factor. Each agent $i$ chooses an action $a_{i}$ according to its stochastic policy $\pi_{\theta_i}\!: \mathcal{O}_i \times \mathcal{A}_i \to [0, 1]$, where $\theta_i$ denotes agent $i$'s policy parameters. Then, the joint action $a\!=\!\{a_{i},a_{-i}\}$ yields a transition to the next state according to the state transition function $\mathcal{T}\!: \mathcal{S}\times \{\mathcal{A}_i\}_{i\in\mathcal{I}} \to \mathcal{S}$. Note that the notation $-i$ indicates all other agents except agent $i$. Then, agent $i$ obtains a reward as a function of the state and the joint action $r_i\!: \mathcal{S} \times \{\mathcal{A}_i\}_{i\in \mathcal{I}} \to \mathbb{R}$, and receives its private observation according to $o_i\!: \mathcal{S}\to \mathcal{O}_i$. Each agent aims to maximize its own total expected discounted return $R_i\!=\!\mathbb{E}_{\pi}[\sum_{t=1}^T\gamma^t r_{i}^t]$, where $r_{i}^{t}$ denotes $i$'s reward received at timestep $t$, $\pi$ denotes the joint policy, and $T$ denotes the episodic horizon. 
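As a minimal numerical sketch of the discounted return $R_i$ defined above (the reward sequence and discount factor are arbitrary toy values):

```python
# Discounted return R_i = sum_{t=1}^T gamma^t * r_i^t for one agent.
# Note the convention in the text: t starts at 1, so even the first
# reward is discounted by gamma. Toy values, not from any experiment.
gamma = 0.9
rewards = [1.0, 0.0, 2.0]  # r_i^1, r_i^2, r_i^3 (episodic horizon T = 3)

R_i = sum(gamma**t * r for t, r in enumerate(rewards, start=1))
# R_i = 0.9*1.0 + 0.81*0.0 + 0.729*2.0 = 2.358
```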
\subsection{Multiagent deep deterministic policy gradient}\label{section:maddpg} To stabilize learning in MARL, MADDPG~\cite{lowe2017multi} introduced the centralized training and decentralized execution paradigm, in which the centralized critic conditions on the global information and the decentralized actor only depends on the agent's local observation. Specifically, a centralized critic for agent $i$ is defined as $Q_i^{\mu}(o,a_{i},a_{-i})\!=\!\mathbb{E}_{\mu}[R_i|o^1\!=\!o, a^1\!=\!\{a_{i},a_{-i}\}]$, where $o$ and $\mu$ denote the joint observation and policy, respectively. The policy gradient for agent $i$'s deterministic policy $\mu_{\theta_i}$ (abbreviated as $\mu_{i}$) with respect to the expected return $J(\theta_i)\!=\!\mathbb{E}_{\mu}[R_i]$ is: \begin{eqnarray}\label{eq:gradient} \begin{split} &\nabla_{\theta_i}J(\theta_i)\\ &=\mathbb{E}_{o,a\sim \mathcal{D}}[\nabla_{\theta_i}\mu_i(a_i|o_i)\nabla_{a_i}Q_i^{\mu}(o, a_{i}, a_{-i})|_{a_i=\mu_i(o_i)}], \end{split} \end{eqnarray} where $\mathcal{D}$ denotes the replay buffer. The buffer $\mathcal{D}$ stores the tuples $(o,o',a,r)$ where $o'$ is the next joint observation and $r$ is the joint reward. The centralized critic $Q_i^{\mu}$ is updated by minimizing the following loss function: \begin{eqnarray}\label{eq:critic_loss} \begin{split} \mathcal{L}(\theta_i) &= \mathbb{E}_{o,o',a,r\sim \mathcal{D}}[Q_i^{\mu}(o, a_{i},a_{-i}) - y]^2,\\ \text{s.t.}\quad y &= r_i + \gamma Q_i^{\mu'}(o', a'_{i},a'_{-i})|_{a'_j=\mu'_j(o'_j),\forall j\in \mathcal{I}}, \end{split} \end{eqnarray} where $\mu'\!=\!\{\mu_{\theta'_i}\}_{i\in\mathcal{I}}$ denotes the set of target policies. \subsection{Minimax MultiAgent deep deterministic policy gradient}\label{section:m3ddpg} To learn a robust policy in MARL, an agent should account for the changes in the policy of other simultaneously learning agents. 
\cite{li2019robust} proposed M3DDPG, a robust multiagent learning approach based on minimax optimization that assumes other agents are adversarial. Specifically, each agent $i$ in \cite{li2019robust} optimizes the following learning objective: \begin{eqnarray}\label{eq: minimax_a} \max_{\theta_i}\min_{a_{-i}\in \mathcal{B}_{-i}}Q_i^{\mu}(o,a_i, a_{-i})|_{a_i = \mu_i(o_i)}, \end{eqnarray} where $\mathcal{B}_{-i}$ is a compact constraint set of $a_{-i}$ (e.g., an $l_p$ norm ball). Because solving a general nonconvex-nonconcave minimax optimization problem is intractable~\citep{daskalakis2020complexity}, M3DDPG replaces the inner minimization with a one-step gradient descent: \begin{eqnarray}\label{eq:m1} \begin{split} a_{-i}^* &= \arg\min_{a_{-i}}Q_i^{\mu}(o,a_i, a_{-i}) \\ &\approx a_{-i} - \alpha_{-i}\nabla_{a_{-i}}{Q_i^{\mu}}(o,a_i, a_{-i}), \end{split} \end{eqnarray} where $\alpha_{-i} \ge 0$ denotes the learning rate. \section{Approach}\label{section:approach} While the single-step gradient approximation in M3DDPG~\cite{li2019robust} (see Section \ref{section:m3ddpg}) improves robustness, we note that the framework has several limitations: \begin{itemize} \item The single-step gradient approximation can only explore the locally worst situation and thus can still lead to unsatisfactory behavior when testing with new opponents that have drastically different strategies. Applying Equation~\eqref{eq:m1} multiple times for the inner minimization can potentially alleviate this issue, but this results in a double-loop approach in solving Equation~\eqref{eq: minimax_a}, which is computationally prohibitive \cite{nichol18reptile}. \item Moreover, the one-step gradient descent approximation can only compute the upper bound of the inner minimization problem because the original problem cannot be solved to a global optimum. 
Hence, for the outer level, maximizing an upper bound of the inner objective cannot guarantee the maximum of the original objective in Equation \eqref{eq: minimax_a}. In other words, even if the one-step gradient descent approximation fails to find the perturbation that yields the smallest $Q_i^{\mu}$, such a perturbation may still exist. \end{itemize} As we detail in this section, we address these issues by employing convex relaxation of neural networks and solving the inner minimization to explore the approximate \textit{globally} worst situation while achieving \textit{certified} robustness from the guaranteed bound of the convex relaxation. \subsection{Convex relaxation of neural networks}\label{sec:convex-relaxation} We propose to convexify the centralized action-value function in MARL and efficiently solve the inner minimization problem in Equation~\eqref{eq: minimax_a}. Specifically, we assume that $Q^{\mu}_{i}$ is parameterized by a fully connected network with $L$ layers and input $x_0\!=\!(o,a_i, a_{-i})$. Then, $Q_i^{\mu}$ can be expressed by the following form: \begin{eqnarray}\label{eq:Q_MLP} \begin{split} z^l &= W_c^lx^{l-1}, x^l = \sigma(z^l), \forall l=1,..., L-1\\ Q_{i}^{\mu} &= W_c^{L}x^{L-1}, \end{split} \end{eqnarray} where $\sigma(\cdot)$ denotes the nonlinear activation function, and $W_c^l$ is the weight at layer $l$. For clarity, we drop the bias terms without loss of generality. Due to the nonconvexity of the activation function, verifying the robustness property of $Q_{i}^{\mu}$ over a compact set of $x_0$ is difficult~\citep{wong2018provable,weng2018towards}. 
To address this, we employ a convex relaxation technique \cite{wong2018provable} developed for robustness verification of neural networks and apply the following linear convexification to the centralized Q-function by assuming the ReLU activation function: \begin{eqnarray}\label{eq:Q_MLP_relax} \begin{split} z^l &= W_c^lx^{l-1}, x^l \le u^{l} \odot (z^l - l^l) \oslash (u^l - l^l)\\ x^l &\ge 0, x^l \ge z^l, \forall l=1,\ldots, L-1\\ \bar{Q}^{\mu}_{i} &= W_c^{L}x^{L-1}, \end{split} \end{eqnarray} where $l^l$ and $u^l$ are lower and upper bounds for $z^l$, respectively, and $\odot$ and $\oslash$ are the element-wise multiplication and division, respectively. Note that $\bar{Q}_i^{\mu}$ indicates the relaxed version of ${Q}_i^{\mu}$ (i.e., $l^l \le z^l \le u^l$). Thanks to this relaxation, all equations in \eqref{eq:Q_MLP_relax} are convex, so $\min_{a_{-i}}\bar{Q}_i^{\mu}(o,a_i, a_{-i})$ is a linear program and can be solved efficiently. In the evaluation, we empirically show that this new certification module is computationally efficient and does not add much burden on top of a base MARL algorithm. \begin{figure} \caption{Illustration of the convex relaxation approach for solving the inner minimization in Equation~\eqref{eq: regularizer_a_relax}.} \label{fig:convexframework} \end{figure} \subsection{Solving minimax optimization via convex relaxation}\label{subsec:minimax} Here, we employ the convex relaxation technique discussed in \Cref{sec:convex-relaxation} to solve the inner minimization problem approximately. 
Specifically, we propose to replace the inner minimization problem in Equation~\eqref{eq: minimax_a} with the following relaxed objective: \begin{eqnarray}\label{eq: regularizer_a_relax} \begin{split} \bar{Q}_i^{\mu}(o,a_i, a^*_{-i}) &= \min_{a_{-i}\in\mathcal{B}_{-i}}\bar{Q}_i^{\mu}(o,a_i, a_{-i})\\ &\le\min_{a_{-i}\in\mathcal{B}_{-i}}Q_i^{\mu}(o,a_i, a_{-i}), \end{split} \end{eqnarray} where $\bar{Q}_i^{\mu}(o,a_i, a_{-i})$ is the lower bound of $Q_i^{\mu}(o,a_i, a_{-i})$ from the relaxation in Equation~\eqref{eq:Q_MLP_relax} (see~\Cref{fig:convexframework}). $\bar{Q}_i^{\mu}(o,a_i, a_{-i})$ is also a function of $\mu$ and serves as a surrogate for the original nonconvex nonlinear objective. The main advantage of the convex relaxation in Equation~\eqref{eq: regularizer_a_relax} over \eqref{eq:m1} is that the former does not need the step-size hyperparameter $\alpha_{-i}$: the performance of \cite{li2019robust} is highly sensitive to this step size, which is difficult to tune, whereas our convex relaxation problem can be solved efficiently without it. With this lower bound, we can reformulate the outer maximization problem as: \begin{eqnarray}\label{eq: minimax_a_relax} &&\max_{\theta_i} \big[ (1-\kappa_i)Q_i^{\mu}(o,a_i, a_{-i}) + \kappa_i \bar{Q}_i^{\mu}(o,a_i, a^*_{-i})\big]\nonumber\\ &&\text{with}\ \ {a_i = \mu_i(o_i)}, \end{eqnarray} where $0\!\le \kappa_i\!\le 1$ is a weight coefficient for the term that accounts for the policy change of the other agents. Because we maximize a lower bound of the inner minimization problem, the original inner objective is guaranteed to be no smaller than the maximized bound. Such a guarantee provides robustness certificates for agent $i$ as it considers the worst-case scenarios caused by other learning agents. 
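As a sanity check on the inequality in Equation~\eqref{eq: regularizer_a_relax}, the sketch below certifies a lower bound for a tiny ReLU network over an $l_\infty$ ball and compares it against a brute-force grid minimum. For brevity it uses interval bound propagation rather than the linear relaxation of Equation~\eqref{eq:Q_MLP_relax}; interval bounds are coarser but equally sound, and the weights are illustrative rather than a trained critic:

```python
import numpy as np

# A tiny two-layer ReLU "critic" with illustrative (untrained) weights.
W1 = np.array([[1.0, -2.0], [0.5, 1.5]])
W2 = np.array([[1.0, -1.0]])

def q(x):
    return (W2 @ np.maximum(W1 @ x, 0.0)).item()

def ibp_lower_bound(center, eps):
    """Sound lower bound of q over the l_inf ball B(center, eps) via
    interval bound propagation (coarser than the LP relaxation, but
    with the same key property: it never exceeds the true minimum)."""
    # Affine layer through interval arithmetic.
    z_mid = W1 @ center
    z_rad = np.abs(W1) @ (eps * np.ones_like(center))
    l1, u1 = z_mid - z_rad, z_mid + z_rad
    # ReLU is monotone, so bounds pass through elementwise.
    l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)
    # Output layer: take the worst endpoint per sign of each weight.
    Wp, Wn = np.maximum(W2, 0.0), np.minimum(W2, 0.0)
    return (Wp @ l1 + Wn @ u1).item()

center, eps = np.array([0.3, -0.2]), 0.5
lb = ibp_lower_bound(center, eps)

# Brute-force minimum over a dense grid inside the ball: the certified
# bound lb never exceeds it, mirroring the inequality in the text.
grid = np.linspace(-eps, eps, 101)
true_min = min(q(center + np.array([dx, dy])) for dx in grid for dy in grid)
```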
By setting $\kappa_i\!\ne\!1$, we do not entirely use the relaxed inner objective (i.e., $\bar{Q}_i^{\mu}(o,a_i, a^*_{-i})$) as the objective of the outer maximization problem for the sake of training stability, as the relaxation gap may be large, especially early in training. Instead, a combination of the original objective and its relaxed one is used as the objective for the outer maximization problem, as shown in Equation~\eqref{eq: minimax_a_relax}. Because this inner minimization needs to be solved whenever the policy is updated, the convex relaxation problem in Equation~\eqref{eq: regularizer_a_relax} should be efficient enough with a tight bound. Therefore, choosing among the many candidate convex relaxation methods involves a trade-off between tightness and efficiency; we refer to the appendix for details. \begin{algorithm}[t] \caption{Robust MARL via convex relaxation (ROMAX)} \label{alg:minimax} \small \begin{algorithmic}[1] \State \textbf{Require}: batch size $S$, actor learning rate $\alpha_a$, critic learning rate $\alpha_c$, target update $\tau$, random process $\mathcal{N}$, episode length $T$ \State Initialize replay buffer $\mathcal{D}$ \For {Episode=$1\ldots$} \State Initialize environment and get initial observations $o$ \For {$t=1...{T}$} \State For each agent, select action $a_i=\mu_{\theta_i}(o_i) + \mathcal{N}_t$ \State Execute joint action $a$ and receive $r$ and $o'$ \State Store $(o,a,r,o')$ into $\mathcal{D}$, set $o\leftarrow o'$ \For {Each agent $i\in\mathcal{I}$} \State Get $S$ samples $(o,a,r,o')$ from $\mathcal{D}$ \State Solve inner optimization via relaxation in \eqref{eq:critic_loss_minimax} \State Update critic via loss function in \eqref{eq:critic_loss_minimax} with $\alpha_c$ \State Solve inner optimization via relaxation in \eqref{eq:gradient_minimax} \State Update actor via policy gradient in \eqref{eq:gradient_minimax} with $\alpha_a$ \EndFor \State Update target network $\theta'_i = \tau \theta_i + (1-\tau)\theta'_i$ \EndFor \EndFor 
\end{algorithmic} \end{algorithm} \subsection{Integrating with MARL algorithm} Our framework based on convex relaxation in Section \ref{subsec:minimax} can be readily integrated into general MARL frameworks. We implement our method based on MADDPG (see Section \ref{section:maddpg}). Integrating the minimax formulation and the convex relaxation in Equation~\eqref{eq: minimax_a_relax} together with the actor update in Equation~\eqref{eq:gradient} yields: \begin{eqnarray}\label{eq:gradient_minimax} \begin{split} \nabla_{\theta_i}J(\theta_i) &= \mathbb{E}_{o,a\sim \mathcal{D}}\Big[\nabla_{\theta_i}\mu_i(o_i)\nabla_{a_i}\Big(\kappa_i \bar{Q}_i^{\mu}(o,a_i, a^*_{-i}) \\ &\ \ \ \ \ + (1-\kappa_i)Q_i^{\mu}(o,a_i, a_{-i})\Big)\Big], \\ a_i &= \mu_i(o_i),\quad a^*_{-i} =\arg \min_{a_{-i}\in\mathcal{B}_{-i}}\bar{Q}_i^{\mu}(o,a_i, a_{-i}), \end{split} \end{eqnarray} where $\mathcal{B}_{-i} = \mathcal{B}_{-i}((a_j=\mu_j(o_j),\forall j\ne i), \epsilon)$ is a $l_p$ ball centered at $(a_j=\mu_j(o_j),\forall j\ne i)$ with a radius $\epsilon$. Then, the critic is updated by: \begin{eqnarray}\label{eq:critic_loss_minimax} \begin{split} \mathcal{L}(\theta_i) &= \mathbb{E}_{o,o',a,r\sim \mathcal{D}}[Q_i^{\mu}(o, a_i,a_{-i}) - y]^2,\\ y &= r_i + \gamma \Big((1-\kappa_i) Q_i^{\mu'}(o', a'_i,a'_{-i}) \\ &\ \ \ \ \ \ \ + \kappa_i \bar{Q}_i^{\mu'}(o',a'_i, a'^*_{-i})\Big)\\ a'_i&=\mu'_i(o'_i), \quad a'^*_{-i} = \arg \min_{a'_{-i}\in\mathcal{B}'_{-i}}\bar{Q}_i^{\mu'}(o',a'_i, a'_{-i}), \end{split} \end{eqnarray} where $\mathcal{B}'_{-i} = \mathcal{B}'_{-i}((a'_j=\mu'_j(o'_j),\forall j\ne i), \epsilon')$ is a $l_p$ ball centered at $(a'_j=\mu'_j(o'_j),\forall j\ne i)$ with a radius $\epsilon'$. We summarize our algorithm in Algorithm \ref{alg:minimax}. 
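As a concrete numerical sketch of the mixed bootstrap target $y$ in Equation~\eqref{eq:critic_loss_minimax} (all scalars are arbitrary toy values, not outputs of a trained critic):

```python
# Mixed TD target from the critic update: the bootstrap blends the
# ordinary target critic with its relaxed lower bound (toy scalars).
r_i, gamma, kappa = 1.0, 0.9, 0.3
q_next = 2.0       # Q^{mu'}(o', a'_i, a'_-i) under the target policies
q_bar_next = 1.5   # relaxed lower bound evaluated at the worst a'^*_{-i}

y = r_i + gamma * ((1 - kappa) * q_next + kappa * q_bar_next)
# Since q_bar_next <= q_next, the mixed target is a conservative
# (smaller) bootstrap than the plain MADDPG target r_i + gamma*q_next.
```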
\begin{figure} \caption{Illustration of the considered tasks: Predator-prey (Left) and physical deception (Right); Reproduced from~\cite{li2019robust}.} \label{fig:illustration_tasks} \end{figure} \section{Experiments}\label{section:results} \subsection{Evaluation domains and baselines} We evaluate our approach in mixed cooperative-competitive tasks from the multiagent particle benchmark \cite{lowe2017multi}. In these tasks, there are $n_c$ cooperative agents, $n_a$ adversary agents, and $n_L$ landmarks in a 2D continuous space. We focus on tasks that include adversarial agents such that agents need to have diverse strategies to outperform opponents and thus robustness is an important factor. Below are some descriptions of considered tasks with illustration in Figure \ref{fig:illustration_tasks}. \begin{table*}[t] \begin{center} \begin{tabular}{c||c|c|c||c} \hline\\[-1em] \backslashbox{\red{Adv}}{\blue{Agent}} & MADDPG & \makecell{M3DDPG} & ROMAX & ${R}_{\text{\red{Adv}}}$ \\ \\[-1em]\hline\hline \makecell{MADDPG} & $\Big(\makecell{\red{-0.017}{\scriptscriptstyle \pm 0.012}, \\ \blue{-0.550}{\scriptscriptstyle \pm 0.017 }}\Big)$ &$\Big(\makecell{\red{0.160}{\scriptscriptstyle \pm 0.045}, \\ \blue{-0.502}{\scriptscriptstyle \pm 0.053}}\Big)$ &$\Big(\makecell{\red{0.031}{\scriptscriptstyle \pm 0.020}, \\ \blue{-0.406}{\scriptscriptstyle \pm 0.025}}\Big)$ & $0.174 \scriptscriptstyle \pm 0.080 $ \\ \hline \makecell{M3DDPG} & $\Big(\makecell{\red{0.307}{\scriptscriptstyle \pm 0.043},\\ \blue{-0.718}{\scriptscriptstyle \pm 0.051}}\Big)$ & $\Big(\makecell{\red{0.250}{\scriptscriptstyle \pm 0.048},\\ \blue{-0.609}{\scriptscriptstyle \pm 0.060}}\Big)$ &$\Big(\makecell{\red{-0.043}{\scriptscriptstyle \pm 0.031},\\ \blue{-0.290}{\scriptscriptstyle \pm 0.042}}\Big)$ & $0.514\scriptscriptstyle \pm 0.158$ \\ \hline ROMAX & $\Big(\makecell{\red{0.560}{\scriptscriptstyle \pm 0.032},\\ \blue{-1.093}{\scriptscriptstyle \pm 0.037}}\Big)$ & 
$\Big(\makecell{\red{0.428}{\scriptscriptstyle \pm 0.055},\\ \blue{-0.936}{\scriptscriptstyle \pm 0.057}}\Big)$ &$\Big(\makecell{\red{0.132}{\scriptscriptstyle \pm 0.020},\\ \blue{-0.477}{\scriptscriptstyle \pm 0.026}}\Big)$ & $1.12\scriptscriptstyle \pm 0.183$\\ \hline ${R}_{\text{\blue{Agent}}}$ & $-2.361\scriptscriptstyle \pm 0.230$ & $-2.047\scriptscriptstyle \pm 0.193$ & $-1.173\scriptscriptstyle \pm 0.083$ & \\ \hline \Xhline{2\arrayrulewidth} \Xhline{2\arrayrulewidth} & MADDPG & \makecell{M3DDPG} & ROMAX & \\ \hline \makecell{${R}_{\text{overall}}$ \\ $ = {R}_{\text{\red{Adv}}} + {R}_{\text{\blue{Agent}}}$} & $-2.187$ & $-1.533$& \bm{$-0.053$} & \\ \hline \end{tabular} \end{center} \caption{ Evaluation in the predator-prey task. Predator and prey correspond to adversary (Adv for short) and agent in the table, respectively. Each pair is evaluated for $250$ episodes, i.e., $10$ episodes for each of the $5\!\times\!5\!=\!25$ pairs of random seeds. $(\red{\bullet},\blue{\bullet})$ in each cell denotes the mean/standard error of the reward per step in the episode of the adversaries and agents, respectively. The higher the return is, the better the policy is. For each column, different adversaries compete against the same agent, and a higher adversary reward indicates better performance against the same agent; row-wise for the agents. 
In the last row, we summarize the overall robustness results for playing both teams via the metric ${R}_{\text{overall}}$.} \label{table:predator} \end{table*} \begin{table*}[t] \begin{center} \begin{tabular}{c||c|c|c||c} \hline\\[-1em] \backslashbox{\red{Adv}}{\blue{Agent}} & MADDPG & \makecell{M3DDPG} & ROMAX & ${R}_{\text{\red{Adv}}}$ \\ \\[-1em]\hline\hline \makecell{MADDPG} & $\Big(\makecell{\red{-0.795}{\scriptscriptstyle \pm 0.017}, \\ \blue{0.482}{\scriptscriptstyle \pm 0.005}}\Big)$ &$\Big(\makecell{\red{-0.689}{\scriptscriptstyle \pm 0.031}, \\ \blue{0.248}{\scriptscriptstyle \pm 0.020}}\Big)$ &$\Big(\makecell{\red{-0.814}{\scriptscriptstyle \pm 0.032}, \\ \blue{0.338}{\scriptscriptstyle \pm 0.0199}}\Big)$ & $-2.298\scriptscriptstyle \pm 0.061$\\ \hline \makecell{M3DDPG} & $\Big(\makecell{\red{-0.742}{\scriptscriptstyle \pm 0.029},\\ \blue{0.225}{\scriptscriptstyle \pm 0.021}}\Big)$ & $\Big(\makecell{\red{-0.819}{\scriptscriptstyle \pm 0.018},\\ \blue{0.467}{\scriptscriptstyle \pm 0.004}}\Big)$ &$\Big(\makecell{\red{-0.839}{\scriptscriptstyle \pm 0.037},\\ \blue{0.271}{\scriptscriptstyle \pm 0.020}}\Big)$ & $-2.4\scriptscriptstyle \pm 0.050$ \\ \hline ROMAX & $\Big(\makecell{\red{-0.572}{\scriptscriptstyle \pm 0.0282},\\ \blue{0.128}{\scriptscriptstyle \pm 0.019}}\Big)$ & $\Big(\makecell{\red{-0.613}{\scriptscriptstyle \pm 0.033},\\ \blue{0.133}{\scriptscriptstyle \pm 0.0193}}\Big)$ &$\Big(\makecell{\red{-0.512}{\scriptscriptstyle \pm 0.010},\\ \blue{0.283}{\scriptscriptstyle \pm 0.003}}\Big)$ & $-1.697\scriptscriptstyle \pm 0.048$\\ \hline ${R}_{\text{\blue{Agent}}}$ & $0.835\scriptscriptstyle \pm 0.150 $ & $0.848\scriptscriptstyle \pm 0.139$ & {$0.892\scriptscriptstyle \pm 0.033$} & \\ \hline \Xhline{2\arrayrulewidth} \Xhline{2\arrayrulewidth} & MADDPG & \makecell{M3DDPG} & ROMAX & \\ \hline \makecell{${R}_{\text{overall}}$ \\ $ = {R}_{\text{\red{Adv}}} + {R}_{\text{\blue{Agent}}}$} & $-1.463$ & $-1.552$& \bm{$-0.805$}&\\ \hline \end{tabular} 
\end{center} \caption{Evaluation in the physical deception task. The evaluation settings and metrics shown in this table are the same as those in Table 1.} \label{table:deception} \end{table*} \begin{itemize} \item \textbf{Predator-prey.} $n_a\!=\!3$ slower cooperative predators aim to catch the $n_c\!=\!1$ faster prey. $n_L\!=\!2$ landmarks are unmovable and can impede the way of all of the agents. Once there is a collision between predators and the prey, the former get rewarded while the latter gets penalized. \item \textbf{Physical deception.} There are $n_a\!=\!1$ adversary and $n_c\!=\!2$ agents together with $n_L\!=\!2$ landmarks in the environment. The adversary aims to occupy a target landmark without knowing which one of the two landmarks is the target. As a result, agents must split and cover all landmarks to deceive the adversary. \end{itemize} \textbf{Transfer to real robot learning.} We note that these tasks closely coincide with real-world robotic missions. For the predator-prey, multiple robots can be deployed to chase an intelligent moving target (e.g., an intruder in a market). For physical deception, we can deploy robots to protect assets of interest with intelligent behaviors to deceive opponents. The fidelity of the models and perception required in simulation can be achieved in the real world via sensors such as cameras, velocity meters, and LiDAR. Sim-to-real transfer is known to be difficult because the behaviors of other agents deployed in the real-world environment may differ significantly from those in simulation (e.g., due to varying transition dynamics). This is exactly what this work aims to address: the certified and improved robustness will enhance the resilience and applicability of multiagent algorithms in sim-to-real transfer. Lastly, the learned policy can be easily transferred on-board, and generated actions can be further executed by a lower-level controller if necessary. 
\textbf{Baselines.} We compare ROMAX to M3DDPG~\cite{li2019robust}, a robust MARL algorithm that also applies the minimax formulation but solves the inner optimization approximately via one-step gradient descent. We also compare our algorithm to MADDPG~\cite{lowe2017multi}, which uses the centralized critic but does not solve minimax. Implementation details and hyperparameters are specified in the appendix. \subsection{Results} \textbf{Question 1:} \textit{How much does ROMAX improve the robustness of trained policies?} To answer this question and test robustness, each policy from one team is evaluated against a diverse set of policies from the other team. Specifically, the adversaries' policies trained by one algorithm under each random seed are evaluated against the agents' policies trained by all of the \textit{other} algorithms under \textit{all} random seeds; vice versa for the agents' policies. As Tables~\ref{table:predator} and~\ref{table:deception} demonstrate, for both tasks, ROMAX can train more robust policies for both teams in a competitive game. For each adversary, when competing against the same set of diverse agents, our adversary gets the highest return; see the $R_{\text{\red{Adv}}}$ columns in the tables. A similar conclusion can be drawn for the agents given the $R_{\text{\blue{Agent}}}$ rows in the tables. These results demonstrate that, via computing the approximate global worst-case situation, policies can generalize and perform well when tested against unseen peer agents' policies. We also note that M3DDPG is outperformed by MADDPG in Table~\ref{table:deception} (see the overall robustness results). This might be due to the sensitive step-size parameter of M3DDPG in Equation~\eqref{eq:m1}. This observation implies that a step size tuned for one task may not generalize to another and also shows the advantage of ROMAX.
{Regarding computation efficiency, we empirically observe that the ratio between the wall-clock time per iteration of ROMAX (with certification) and that of MADDPG (without certification) is close to 1 (i.e., $1.08$, averaged among multiple seeds). This validates that our certification module is computationally efficient.} \textbf{Question 2:} \textit{How much can disruptive policies exploit a fixed robust policy?} \begin{figure} \caption{ The performance of the fixed agent (prey) during the training of disruptive adversaries (predators) with $3$ seeds. Higher return implies a more robust policy. } \label{fig:predator_prey} \end{figure} To answer this question, we construct a disruptive policy in the predator-prey task by 1) training both teams with each algorithm until convergence, 2) fixing the prey policy, and 3) training new adversary (predator) policies with MADDPG that attempt to exploit the fixed prey trained by each method. Figure~\ref{fig:predator_prey} shows the robustness results of the fixed prey trained with each algorithm. As the disruptive adversaries' training proceeds, the fixed prey's return decreases as expected. However, ROMAX achieves the highest return compared to the other methods, validating the robustness advantage of our approach. We observe that M3DDPG and MADDPG perform similarly in this analysis, possibly due to the sensitivity of M3DDPG to its step size. \section{Conclusion}\label{section:conclusion} In this paper, we propose a robust reinforcement learning algorithm for multirobot systems. To robustify learning, we consider the learning of other agents based on the worst-case scenario criterion, which inherently leads to a minimax formulation. As the minimax formulation is computationally expensive to solve, convex relaxation of neural networks is applied to solve the inner minimization problem.
By convex relaxation, agents can account for peer agents that possibly have drastically different behaviors, and a certified bound on the original optimization problem can be obtained. We believe this is the first work that integrates robustness verification into MARL. Our algorithm outperforms existing robust MARL algorithms in mixed cooperative-competitive tasks. There are a few important directions for future work. First, we would like to develop tight but efficient convex relaxation-based methods for neural network robustness verification. Moreover, there are several real-world robustness applications, including observation perturbations, actuation faults, malicious/stealthy attacks, and communication delays, on which we would like to test our approach. Lastly, developing principled and general learning methods with theoretical guarantees (e.g., convergence analysis) will be a meaningful direction. \appendix \section{Appendix}\label{section:appendix} \subsection{Repositories}\label{section:specs} The multiagent particle environments we used in simulation are from \url{https://github.com/openai/multiagent-particle-envs}. We use the implementation of the base algorithm MADDPG from \url{https://github.com/shariqiqbal2810/maddpg-pytorch}. Note that with the relaxations in Equation \eqref{eq:Q_MLP_relax}, $\bar{Q}_i^{\mu}(o,a_i, a^*_{-i})$ is no longer an explicit function of its inputs. The framework {auto\_LiRPA} (\url{https://github.com/KaidiXu/auto\_LiRPA},~\cite{xu2020automatic}) is therefore used to compute $\bar{Q}_i^{\mu}(o,a_i, a^*_{-i})$ efficiently and automatically. \subsection{Choice of convex relaxation methods} There are many convex relaxation-based methods for robustness verification of neural networks, from which we need to choose one for Equation \eqref{eq: regularizer_a_relax}.
When there is a trade-off among candidate convex relaxation methods, we can take $\bar{Q}_i^{\mu}(o,a_i, a^*_{-i})$ as a convex combination of the bounds from different methods~\citep{zhang2019towards}. For example, Interval Bound Propagation (IBP,~\citep{gowal2018effectiveness}) and CROWN-IBP~\citep{zhang2019towards} have their respective strengths and shortcomings in terms of bound tightness, sensitivity to hyper-parameters, and computational cost as training proceeds. As a result, we can have: \begin{eqnarray}\label{eq:two_bounds} \begin{split} \bar{Q}_i^{\mu}(o,a_i, a^*_{-i}) &= \beta\bar{Q}_{i,\text{IBP}}^{\mu}(o,a_i, a^*_{-i}) \\ &+(1-\beta)\bar{Q}_{i,\text{CROWN-IBP}}^{\mu}(o,a_i, a^*_{-i}), \end{split} \end{eqnarray} where $\beta\in [0, 1]$ is a tunable parameter that can vary as the training iteration index increases. Since both $\bar{Q}_{i,\text{IBP}}^{\mu}(o,a_i, a^*_{-i})$ and $\bar{Q}_{i,\text{CROWN-IBP}}^{\mu}(o,a_i, a^*_{-i})$ are lower bounds of $Q_i^{\mu}(o,a_i, a_{-i})$, so is their convex combination $\bar{Q}_i^{\mu}(o,a_i, a^*_{-i})$. Hence, the property of certified robustness is preserved. \subsection{Hyperparameters} Some key hyperparameters are shown in Table \ref{table:Hyperparameters}. \begin{table}[hbt!] \begin{center} \begin{tabular}{c|c||c|c} \hline\\[-1em] Episode length & $25$ & batch size & $1024$ \\ NN hidden dim & $64$ & $\tau$ & $0.01$\\ learning rate & $0.01$ & $\epsilon_{\max}$ & $0.1$ \\ $\beta_{\min}$& $0.9$ & $\gamma$ & $0.99$ \\ \hline \end{tabular} \end{center} \caption{Hyperparameter choices in the implementation.} \label{table:Hyperparameters} \end{table} \balance \end{document}
BMC Medical Research Methodology
A preference-based item response theory model to measure health: concept and mathematics of the multi-attribute preference response model
Catharina G. M. Groothuis-Oudshoorn1 (ORCID: orcid.org/0000-0002-4875-5379), Edwin R. van den Heuvel2 & Paul F. M. Krabbe3
BMC Medical Research Methodology volume 18, Article number: 62 (2018)
A new patient-reported health measurement model has been developed to quantify descriptions of health states. Known as the multi-attribute preference response (MAPR) model, it is based on item response theory. The response task in the MAPR is for a patient to judge whether hypothetical health-state descriptions are better or worse than his/her own health status. In its simplest form, the MAPR is a Rasch model in which, for each respondent, the value of his/her own health status and the values of the hypothetical comparator health states are estimated on the same unidimensional health scale. These values reflect the quality or severity of the health states. Alternatively, the respondents are offered health-state descriptions that are based on a classification system (e.g., multi-attribute) with a fixed number of health attributes, each with a limited number of levels. In the latter variant, the weights of the levels of the attributes in the descriptive system, which represents the range of the health states, are estimated. The results of a small empirical study are presented to illustrate the procedures of the MAPR model, and possible extensions of the model are discussed. The small study that we conducted to illustrate the procedure and results of our proposed method to measure the quality of health states and patients' own health status showed confirmatory results. This paper introduces the typical MAPR model and shows how it extends the basic Rasch model with a regression function for the attributes of the health-state classification system.
Health is a sociocultural construct encompassing a wide range of phenomena, so it is not surprising that various actors define it differently. Traditionally, physicians have been guided by a biomedical model and have thus understood health predominantly as a condition that falls within acceptable biological norms. Nowadays, there is an increased awareness of the impact of health and health care on the quality of human life. The conventional clinical health-status construct is now often extended to psychological and even social factors, thereby making subjective measures such as (perceived) health status or 'quality of life' necessary — and rightly so, because the ultimate goal of all health interventions is to improve a patient's perceived health condition. The use of these subjective measures has proliferated ever since the World Health Organization published its definition of health in 1946 [1]. There are several ways to express health. We can compile a 'snapshot' of a patient's current health condition from an 'image bank' comprised of health states. These health states consist of discrete health attributes (e.g., domains, dimensions, items) each with a number of levels. When combined, they represent a description of a person's health status or health-related quality of life (HRQoL) [2]. Subsequently, such health-state descriptions can be measured (valued) by assigning meaningful numbers (values) to an individual's health state. 'Meaningful' is here defined as values that reflect the patients' health status in relationship to other health states. This is different from subjective measures (e.g., visual analogue scale) that reflect the perception of how individuals experience their health status in relationship to their own internal standards. 
It is convenient to express individuals' health in single metric values, as these can be used in health outcomes research, for clinical monitoring of the health status of patient groups, and in particular, in disease modeling studies and economic evaluations. To obtain health-state values (variously called preferences, utilities, indices, or weights), the health-state descriptions must be quantified in terms of seriousness or quality. Differences between health-state values are assumed to correspond to increments of quality differences between these states, which implies that the values are on an interval-level scale [2]. Most conventional methods of measurement (or valuation) stem from health economics (e.g., standard gamble, time trade-off) and are susceptible to many disturbing factors such as adaptation, time preference, context, reference point, and other biases [3,4,5]. To control for adaptation, which occurs in most of these conventional methods (especially for chronically ill patients), all economic valuation methods use hypothetical health states that are assessed by a sample of (unaffected) members of the general population. However, it is reasonable to assume that healthy people are not adequately informed or lack the imagination to appropriately judge the impact of health states, particularly severe ones [6, 7]. A new way to quantify health states was recently introduced. This measurement method, the multi-attribute preference response (MAPR) model, is based on the Rasch model (an item response theory model) [8]. The MAPR model more or less mimics the situation of a patient with a certain health condition lying in a wardroom where the other occupants have (related) complaints and symptoms. This patient is asked to compare his own health state to that of his roommates by indicating whether his own state is better or worse.
The conventional preference-based measurement methods usually yield an opinion on health states from healthy controls, while the result of the MAPR is an internal positioning of a patient's health status with respect to other health states. The response mechanism of the MAPR model is less susceptible to various biases that conventional methods are prone to. Moreover, the MAPR is the first generic health preference model that is fully based on patient perception and reporting; as such it is a genuine patient-reported outcome measure. Apart from being grounded in a renowned measurement theory, the MAPR response tasks are attractive and easy to perform in a self-completion setting. This article introduces and explains the MAPR model conceptually and mathematically. The first section looks into the background of its measurement mechanism, namely the Rasch model, and expands on its operation in a health setting. The second section describes the MAPR model; the third works through its estimation procedures. Finally, the results of a small empirical study are presented to illustrate the procedures of the MAPR model and possible extensions of the model are discussed. Measurement mechanism A probabilistic measurement model was invented by the Danish mathematician Georg Rasch. While primarily employed to assess educational attainment, it is increasingly used for other purposes [9]. Its original setting was the field of reading skills, where it was intended for use with dichotomous response data (e.g., correct/wrong). Nowadays, the Rasch model or the closely related one-parameter logistic model (OPLM) is considered a variant of the class of item response theory (IRT) models [9, 10]. 
The Rasch model is built around the idea that the probability of a correct response to an item is modeled as a logistic function of the difference between the difficulty of an item (parameterized by β) and the characteristics of a person (e.g., ability parameter θ): $$ \pi =P\left(+|\beta, \theta\ \right)=\frac{1}{1+{e}^{\beta -\theta }}=\frac{e^{\theta -\beta }}{1+{e}^{\theta -\beta }}. $$ The Rasch model poses three stringent requirements. The first is unidimensionality: a unique one-dimensional latent variable explains the response to the items. The second is monotonicity: the probability of a positive response to an item is a non-decreasing function of the latent variable. And the third is local independence: for any given individual, the item responses are independent conditional on where individuals are on the underlying latent scale. Under the Rasch model, a Guttman scale is the most likely response pattern for a person when items are ordered from least difficult to most difficult [11]. This means that if someone responds correctly to an item, then that person should succeed on all easier items; conversely, if one responds incorrectly, then he/she should fail on all items that are more difficult (Fig. 1). Unlike the Guttman scale, the Rasch model is a probabilistic model. In the latter, the probability that any person will succeed on an easier item will always be greater than the probability of success on a more difficult item. The Guttman scale is the deterministic limiting case of the Rasch model. Schematic illustration of the Guttman/Rasch data structure. 
Representation of the raw data (top) and after sorting of the columns (health states) and the rows (patients) in order to arrive at the hierarchical Guttman/Rasch data structure (a check indicates that this health state is preferred over the next health state, a cross indicates a misfit) Health context In the context of health measurement, assuming the Rasch model implies that the more positive the difference between the value (the perceived quality) of the health status of a patient (θ) and the value of another health state (β) to be judged, the higher the probability that the patient will indicate that his/her current health status is better than the presented health state. Conversely, patients in very poor health will consider many other health states as better than their own. Using the Rasch model, one can estimate the health status of individual patients (i.e., their ability, in Rasch terminology) and the value of the hypothetical health states (i.e., the difficulty parameters of the items) on the same latent scale. In short, patients are asked to respond to hypothetical health states by comparing these with their own health status. For example, "Is this health state better than your own health state?" In the Rasch model, patients compare their own health status with a few prescribed hypothetical health states. These comparator health states can span the whole continuum from bad to mild (as done in this article in our small empirical study), but they can also denote health states that are closely positioned on the latent scale to the actual health status of the individual patient. Such comparator health states may be based on holistic descriptions or objects. Holistic refers to unstructured verbal descriptions or objects such as people's faces or skin in photos. In general, holistic objects are often extremely easy to compare and judge. However, features (attributes) by which to describe the object specifically are often absent.
Alternatively, health descriptions may be derived from a classification system with multiple attributes, whereby each attribute has a limited number of levels (Fig. 2). The latter approach enables the investigator to predict values for health states that are not part of the empirical study (see below). Example of a response task under the multi-attribute preference response (MAPR) model for a multi-attribute health-state description (state '33221') based on the EQ-5D-3L instrument (3-level version) Let θ_p be the (unknown) health status of person p (p = 1, ..., P). Suppose that β_i is the (unknown) value of health state i (i = 1, …, I) as measured on a latent scale. Imposing the logistic function of the difference between a person's health status and the values of the comparator health states on the probability that a person prefers his/her own health state over the comparator description leads to the Rasch model. More formally, let Y_ip be a random variable with a value of one if a person prefers his/her health status over the hypothetical health state and zero otherwise. In this way it is assumed that the health status θ_p of person p is on the same latent scale as the health states i with β_i and that a person will most likely prefer his/her own health status over health state i if θ_p exceeds β_i. Under the Rasch model we assume that $$ {\pi}_{ip}=P\left({Y}_{ip}=1\mid {\theta}_p,{\beta}_i\right)=\frac{e^{\theta_p-{\beta}_i}}{1+{e}^{\theta_p-{\beta}_i}} = \frac{1}{1+{e}^{\beta_i-{\theta}_p}}, \qquad (1) $$ or equivalently in the logit form $$ {\eta}_{ip}={\theta}_p-{\beta}_i, \qquad (2) $$ where η = log(π/(1 − π)) is the logit link function. This means that if a person's health status θ_p is equal in quality to the hypothetical comparator health state, so there is no preference difference for either state, the probability of choosing the one over the other is fifty-fifty.
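The response probability in (1) can be illustrated with a short numerical sketch (in Python; the values of θ and β below are invented for illustration only):

```python
import math

def rasch_prob(theta: float, beta: float) -> float:
    """P(Y_ip = 1): a patient with health status theta judges his/her own
    health to be better than a comparator state with value beta."""
    return 1.0 / (1.0 + math.exp(beta - theta))

# Equal quality (theta == beta): the choice is fifty-fifty.
print(rasch_prob(0.5, 0.5))    # 0.5
# The further the patient's status lies above the comparator value,
# the more likely the patient prefers his/her own health.
print(rasch_prob(2.0, -1.0))   # ~0.95
```

Note that only the difference θ − β matters, so the latent scale has an arbitrary origin; estimation procedures therefore fix it with an identification constraint.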
Also, the further apart the person's health state is from the comparator, the larger the probability that the better state is preferred and chosen. In the following, model (1) will be called the holistic MAPR model. The holistic MAPR model, like the original Rasch, is a descriptive model. It describes the individual patient's health state (e.g., by localizing patients on the health scale) and the value of the judged comparator states without explaining either of these by characteristics of the patients or the health states. The holistic MAPR is both feasible and attractive in many clinical situations where characteristics cannot be easily discerned, such as body and skin deformations. Comparing and assessing pictures or movies may then be more appropriate. A crucial requirement is that the respondents should be located along the whole health scale from very severe to almost perfect health; otherwise, the model cannot be sufficiently estimated. The typical response task of the MAPR model precludes responses from a sample of the general population. The latter are predominantly healthy and therefore do not provide the information needed to estimate the model. MAPR model Several simple classification systems have been developed to capture the major features of health such that they can be used to describe health states. Each system transposes those features into a certain number of health attributes. The health state can then be measured with a discrete response scale for each attribute at a certain number of levels. For example, the classification system of the preference-based EQ-5D-3 L instrument consists of five health attributes: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression, with a value of 1 (best), 2, or 3 (worst) for each attribute [12]. In this way an EQ-5D-3 L health state can be represented by five digits, with 11111 denoting perfect health and 33333 the worst possible condition. 
The three-level version of the EQ-5D-3L system defines 3^5 = 243 possible partially ordered different health states. The SF-6D health-state classification contains six attributes, namely physical functioning, role limitation, social functioning, pain, mental health, and vitality. With response categories ranging from four to six levels, the SF-6D can describe 18,000 different health states. Some other classification systems are the Health Utilities Index version 3 (HUI-3), 15D, Assessment of Quality of Life (AQoL), and the Quality of Well-Being scale (QWB) [13,14,15,16]. Formal representation Assume that we now have a classification system wherein a health state is represented as a vector x_i = (x_i1, ..., x_iJ) with discrete levels on each of the J attributes. The number of levels in the jth attribute is denoted by n_j, so on attribute j the possible values are 1, 2, …, n_j. In this way the vector (1, 1, …, 1) represents perfect health and (n_1, n_2, …, n_J) the worst state. Suppose that the value β_i of health state x_i can be described as a function β_i = f(x_i) to reflect the partial ordering of the health states. In the literature several functions have been proposed to model the value of health states as a function of a set of health attributes. For instance, the simple additive linear model assumes that linearity is present in each attribute and that the value drops by the same amount, for example when moving from level 1 to 2 or from level 2 to 3. A less restrictive and more realistic model can be obtained by taking each attribute as a categorical variable in the regression model: $$ {\beta}_i=f\left({\boldsymbol{x}}_{\boldsymbol{i}}\right)=\sum \limits_{j=1}^J\sum \limits_{k=1}^{n_j}{\alpha}_{jk}{d}_{jk}\left({x}_{ij}\right), \qquad (3) $$ where d_jk(x_ij) is a dummy variable with d_jk(x_ij) = 1 if x_ij = k and zero otherwise. The contribution to the value β_i of health state x_i of a change in attribute j from level 1 to k is parameterized by α_jk.
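The dummy-variable parameterization in (3) can be sketched as follows for an EQ-5D-3L-like system with five attributes of three levels each. The level weights α_jk below are invented purely for illustration; with the restriction α_j1 = 0, state 11111 is the reference with value zero:

```python
# Hypothetical level weights alpha[j][k] (alpha[j][1] = 0 for identification).
# These numbers are made up for illustration only.
alpha = {
    1: {1: 0.0, 2: -0.3, 3: -0.9},  # mobility
    2: {1: 0.0, 2: -0.2, 3: -0.7},  # self-care
    3: {1: 0.0, 2: -0.2, 3: -0.6},  # usual activities
    4: {1: 0.0, 2: -0.4, 3: -1.1},  # pain/discomfort
    5: {1: 0.0, 2: -0.3, 3: -0.8},  # anxiety/depression
}

def state_value(x):
    """beta_i = sum_j alpha[j][x_ij]; x is a tuple of attribute levels."""
    return sum(alpha[j][level] for j, level in enumerate(x, start=1))

print(state_value((1, 1, 1, 1, 1)))  # 0.0 (perfect health, by the restriction)
print(state_value((3, 3, 2, 2, 1)))  # -2.2 (value of state '33221' under these weights)
```

Because the value of any of the 3^5 = 243 states is a sum of the same 10 free weights, values can be predicted for states never shown to respondents.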
Notice that the regression equation in (3) has no intercept as this parameter is redundant. Furthermore, additional restrictions on the α_jk are required for enforcing the partial ordering on the β_i and for identifying the parameters. Substituting linear expression (3) for β_i in the logistic expression (2) gives $$ {\eta}_{ip}={\theta}_p-\sum \limits_{j=1}^J\sum \limits_{k=1}^{n_j}{\alpha}_{jk}{d}_{jk}\left({x}_{ij}\right). \qquad (4) $$ The parameterization of the value of health states is not limited to the main effects of the health attributes, as interactions between health attributes can be incorporated in (4) by adding products of (dummies of) health attributes. For identification purposes, the number of parameters should be less than the number of health states that the respondents are asked to compare. In the IRT literature this type of (item/health explanatory) model is called the linear logistic test model (LLTM). It was originally proposed by Scheiblechner [17] and later formalized by Fischer [10, 18,19,20]. The LLTM differs from the Rasch model in that the influence of the quality/severity of the comparator health states is reduced to a linear combination of a fixed number of health-state attributes or interactions between those attributes, with fewer parameters than hypothetical health states. The effects of the attributes and their levels on the health states are estimated instead of the holistic health-state parameters themselves (formula (1)). Being more restrictive than the Rasch model, it enables one to predict values for the complete set of health states that can be constructed for a specific classification system, so predictions can also be made for health states that are not evaluated in the response study. Suppose we have a sample of n patients who compared the same m health states β_i (i = 1, ..., m) with their own health status.
By substituting the parameterization of the items in terms of their attributes as formulated in (3) into formula (1), the holistic MAPR model, we can write the probability of response of patient p on health state i as: $$ {\pi}_{ip}=P\left({Y}_{ip}=1\mid {\theta}_p,{\beta}_i=f\left({\boldsymbol{x}}_i\right)\right)=\frac{1}{1+{e}^{-{\theta}_p+\sum_{j=1}^{J}\sum_{k=1}^{n_j}{\alpha}_{jk}{d}_{jk}\left({x}_{ij}\right)}}. \qquad (5) $$ Model (5) will be denoted as the MAPR model. Estimation of the health-state parameters of the MAPR model now boils down to estimation of the parameters α_jk. In that way, the value of a health state is reflected in the characteristics of the health states as parameterized with the variables d_jk(x_ij). Adaptive MAPR model A more adaptive approach is possible. Patients are thereby asked to complete a multi-attribute classification (e.g., EQ-5D-3L) in advance to classify their own health status, denoted \( \tilde{\boldsymbol{x}}_p=\left(\tilde{x}_{p1},\dots,\tilde{x}_{pJ}\right) \). Then, to perform the MAPR response task, they are confronted with a set of (individualized) hypothetical (comparator) health states that were selected in light of the classification of the patients' own health state from the first task (Fig. 3). So, in this case patients are shown different subsets of health states, depending on \( \tilde{\boldsymbol{x}}_p \). In principle, this approach allows more precise estimation of the position of the patients' health status. It also precludes selecting a restricted set of predetermined comparator states to be judged. However, it complicates the analysis of the data, as the subset of presented health states differs between the respondents and depends on the person's own health state, which is restricted to \( {\theta}_p=f\left(\tilde{\boldsymbol{x}}_p\right) \).
This adaptive operation of the MAPR model is similar to computerized adaptive testing (CAT) as used for standard IRT models. The difference is that for standard IRT models a routine on a central server determines, from a large item bank of candidate items, the next item offered to an individual respondent. For the MAPR model a simple routine as part of a mobile application (www.healthsnapp.info) determines the comparator states (comprising multiple attributes/items) to be assessed by individual patients. MAPR measurement mechanism Estimation of the Rasch model When assessing health states holistically (i.e., no parameters for the levels of the attributes) as in traditional item response theory, it is assumed that the responses to health states are independent of one another, which gives rise to the following likelihood: $$ L\left(\theta, \beta \mid {Y}_{ip}={y}_{ip}\right)=\prod \limits_{p=1}^P\prod \limits_{i=1}^I{\pi_{ip}}^{y_{ip}}{\left(1-{\pi}_{ip}\right)}^{1-{y}_{ip}} $$ The parameters of the standard Rasch model can easily be estimated by several methods, e.g., full maximum likelihood estimation, conditional maximum likelihood, and marginal maximum likelihood. All of these are based on maximum likelihood estimation or Bayesian estimation, and several procedures have been described in the literature [21]. We now describe the conditional maximum likelihood (CML) estimation. Let \( {R}_p=\sum \limits_{i=1}^I{Y}_{ip} \) be the number of health states that a patient p has compared to his own and considered worse. This number is a sufficient statistic for estimating the patient's own health state θ_p. Thus, the conditional likelihood of the responses is independent of θ_p if we condition on R_p. This leads to the CML estimation equations, after maximizing the likelihood: $$ \sum \limits_{p=1}^P{Y}_{ip}=\sum \limits_{p=1}^P P\left({Y}_{ip}=1\mid {R}_p={r}_p,{\beta}_i\right)\kern1em \mathrm{for}\ i=1,\dots,I.
$$ P(Y_ip = 1 | R_p = r_p, β_i) is the probability that the patient's health status is better than health state i, given the number of health states found to be worse than the patient's health state. These I − 1 equations can be solved using a Newton-Raphson procedure leading to consistent point estimates for the health-state parameters β_i. An estimate of the patient's own health state θ_p can be obtained with a maximum likelihood estimation procedure. In this second step, the conditional maximum likelihood estimates of β_i are assumed to be fixed and are substituted in the estimation Eq. (6). In this way the uncertainty associated with these estimates is not accounted for. One way to incorporate this uncertainty could be to use a Bayesian estimation method. In that case a sample from the posterior distributions of the item and person parameters can be used instead of plugging in only the point estimates themselves [22]. The variance of the ML estimates equals: $$ \mathrm{Var}\left({\widehat{\theta}}_p\right)=\frac{1}{I\left({\widehat{\theta}}_p\right)}=\frac{1}{\sum_{i=1}^I{P}_i\left({\widehat{\theta}}_p\right)\left(1-{P}_i\left({\widehat{\theta}}_p\right)\right)} $$ Note that the uncertainty of the estimated β's is not incorporated in this variance. The maximum of the function f(x) = x(1 − x) is 0.25 for x = 0.5. One can thus see that individual health status can be estimated more precisely when patients have to compare health states that are close to their own state. For the one-parameter logistic model (OPLM), the parameter estimates obtained using CML and marginal maximum likelihood (MML) are usually close. The advantage of CML over the MML procedure is that no a priori assumptions have to be made about a person's health-state distribution. When this a priori distribution is misspecified, the MML estimates may be biased. It is expected that the distribution of persons' health states is not normal but typically skewed to the right [23].
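The second step, estimating a patient's health status θ_p by maximum likelihood with the β_i held fixed, can be sketched as follows. This is only an illustrative Newton-Raphson sketch under assumed β values, not the authors' implementation; note that for perfect or zero scores no finite maximum exists:

```python
import math

def p(theta: float, beta: float) -> float:
    """Rasch probability that the patient prefers own health over a state of value beta."""
    return 1.0 / (1.0 + math.exp(beta - theta))

def estimate_theta(y, betas, theta=0.0, tol=1e-8, max_iter=100):
    """ML estimate of theta_p with the comparator values betas held fixed
    (as after the CML step); y[i] = 1 if the patient preferred his/her own
    health over state i.  Diverges for perfect or zero scores (no finite MLE)."""
    for _ in range(max_iter):
        probs = [p(theta, b) for b in betas]
        score = sum(yi - pi for yi, pi in zip(y, probs))  # d logL / d theta
        info = sum(pi * (1.0 - pi) for pi in probs)       # Fisher information I(theta)
        step = score / info                               # Newton-Raphson step
        theta += step
        if abs(step) < tol:
            break
    probs = [p(theta, b) for b in betas]
    variance = 1.0 / sum(pi * (1.0 - pi) for pi in probs)  # Var = 1 / I(theta_hat)
    return theta, variance

# Five hypothetical comparator states; the patient judged the three worst
# states (betas -2, -1, 0) as worse than his/her own health.
theta_hat, var_hat = estimate_theta([1, 1, 1, 0, 0], [-2.0, -1.0, 0.0, 1.0, 2.0])
```

Since each term P_i(1 − P_i) in the information peaks at 0.25 when P_i = 0.5, the variance is smallest when the comparator states lie close to the patient's own position, which is exactly what the adaptive MAPR variant exploits.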
On the other hand, it has to be underlined that CML estimation also has some pitfalls, such as the fact that individuals with perfect or zero scores do not provide any information, and missing observations can lead to biases when data are not missing completely at random. Whether a Rasch model fits the data, thereby yielding a unidimensional scale, can be tested with Andersen's likelihood ratio test [24]. Note that obeying a Rasch model is a sufficient but not a necessary condition of unidimensionality. Estimation of the MAPR model Estimation of the LLTM model is similar to estimation of the Rasch model. Both procedures are based on the fact that the number of worse health states per person is a minimal sufficient statistic for θ_p. As a consequence, the parameters α_jk can be estimated without knowledge of the patient's health status (known as person-free item assessment). Instead, finding the values of β_i that maximize the conditional likelihood of the LLTM model boils down to finding the values of α_jk. Both the existence and uniqueness of the CML estimates depend on whether the data matrix is well conditioned. A response matrix is said to be well conditioned if in every possible partition of the health states into two non-empty subsets some patients have given a response of one on some health state in the first set and a response of zero on some health state in the second set [25, 26]. The fit of the MAPR model (LLTM) can be compared with the fit of the Rasch model by using a likelihood ratio test. The deviance (−2 times the difference in log-likelihood) of the two nested models is approximately χ²-distributed with df equal to the difference between the numbers of parameters in the two models [18, 27]. When this test is significant, there is evidence that the health states are not sufficiently described by the characteristics of the health states as parameterized with the variables d_jk(x_ij).
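This likelihood ratio comparison can be sketched numerically. The log-likelihood values below are invented for illustration; the parameter counts follow the empirical study described below (17 comparator states, so 17 − 1 = 16 free Rasch state parameters, versus 5 attributes × 2 free level weights = 10 MAPR parameters):

```python
def lr_test_statistic(loglik_rasch: float, loglik_mapr: float) -> float:
    """Deviance between the nested MAPR (LLTM) and Rasch models;
    the Rasch model always fits at least as well, so the result is >= 0."""
    return -2.0 * (loglik_mapr - loglik_rasch)

df = 16 - 10                # difference in number of free parameters
CHI2_CRIT_6_005 = 12.592    # 5% critical value of the chi-square(6) distribution

# Hypothetical log-likelihoods, for illustration only:
deviance = lr_test_statistic(-950.0, -954.5)   # 9.0
print(deviance < CHI2_CRIT_6_005)  # True: no significant misfit of the MAPR model
```

When the deviance stays below the critical value, the more parsimonious MAPR model is retained and its α_jk can be used to predict values for all states in the classification system.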
In case there is no statistically significant difference between the Rasch model and the MAPR model, the latter can be used to describe the values of health states. Different formulations of LLTM models can also be compared by performing a likelihood ratio test, as long as these models are nested. The aim of the empirical study was to present first exploratory results in testing the MAPR model. To do so, we used data from a previously published study that aimed to explore discrepancies in values for health states between the general population and patients experiencing a specific illness [7]. For this study we used only the data of the patients (n = 75). Two patient groups from the Radboud University Nijmegen Medical Center (Netherlands) participated in that study, which was approved by the Central Committee on Research Involving Human Subjects (region Arnhem-Nijmegen) [7]. One group included patients who had been diagnosed with cancer 4–6 weeks before they participated in the study. The other group consisted of chronically ill patients living with the symptoms of rheumatoid arthritis (RA) for at least 3 years. The study protocol was administered face-to-face by a trained interviewer at the homes of the patients. This initial sample was extended by including patients with a cerebrovascular accident (CVA) or inflammatory bowel disease (IBD) from the hospital Medisch Spectrum Twente (n = 35) and patients with liver disease or paraplegia from the University Medical Center Groningen (n = 53). The Medical Ethics Review Committee Twente (METC/14124) and the Medical Ethics Review Committee UMCG (METC 2015/496) declared that this latter part of the research did not fall under the Medical Research Involving Human Subjects Act. In the initial study (Radboud) the judgmental task consisted of ranking 17 EQ-5D-3L health states, supplemented with the patient's own EQ-5D-3L description, 'dead', and state '11111'. 
Each patient ranked the same 20 health states by putting the card with the 'best' health state on top and the 'worst' at the bottom. Additionally, the patients unknowingly assessed their own health status in the judgmental task, as their own EQ-5D-3L health-state description had been incorporated in the set; they did not assess the health states of the other participants. The task in the other two studies was slightly different (patients did not assess their own health status), but this is unlikely to affect the results of the empirical study as described in this article. Respondents in the latter two studies were asked to compare the same 17 EQ-5D-3L health states from the Radboud study with their own health (not explicitly represented in terms of the EQ-5D-3L description) and to indicate whether each EQ-5D-3L health state was worse or better than their own health status. In all three studies, the EQ-5D-3L health states were presented in random order to control for potential biases due to presentation order or respondent fatigue. Analysis of the empirical study First, we fit the Rasch model to the ranking data. Next, we analyze the following (MAPR) model for the value of health state β i : $$ {\beta}_i=\sum \limits_{j=1}^5\sum \limits_{k=1}^3{\alpha}_{jk}{d}_{jk}\left({x}_{ij}\right), $$ a model with only main effects for all attributes (with dummy variables). To ensure identification of the parameters α jk , an additional restriction has to be put on these parameters; in this case we choose α jk = 0 for k = 1, j = 1, …, 5. Goodness of fit for the holistic MAPR (i.e., the Rasch) model is tested with the Andersen LR test [28]. Then, MAPR model (8) and the Rasch (i.e., the holistic MAPR) model are compared (LR test, correlation coefficient). Next, for every patient the predicted value of his or her health state following from the estimated MAPR model (8) is calculated based on the patient's own EQ-5D-3L description. 
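Computing a health-state value from model (8) with dummy coding and the identification constraint α j1 = 0 can be sketched as follows. The α weights below are made-up numbers for illustration only, not the estimates from this study.

```python
def health_state_value(state, alpha):
    """MAPR/LLTM decomposition: beta_i = sum_j sum_k alpha[j][k] * d_jk(x_ij),
    where the dummy d_jk(x_ij) = 1 iff attribute j is at level k, and the
    identification constraint alpha[j][1] = 0 holds for every attribute j."""
    beta = 0.0
    for j, level in enumerate(state):      # five EQ-5D-3L attributes
        beta += alpha[j].get(level, 0.0)   # level 1 is omitted -> contributes 0
    return beta

# Hypothetical weights for illustration (NOT the fitted estimates from Table 3):
alpha = [
    {2: 0.9, 3: 1.8},  # mobility
    {2: 1.1, 3: 2.0},  # self-care
    {2: 0.7, 3: 1.5},  # usual activities
    {2: 1.0, 3: 2.2},  # pain/discomfort
    {2: 1.0, 3: 2.3},  # anxiety/depression (mood)
]
best = health_state_value((1, 1, 1, 1, 1), alpha)   # full health -> 0.0
mild = health_state_value((2, 1, 1, 1, 1), alpha)   # some mobility problems
```

Because the model is additive in the attribute levels, any EQ-5D-3L state — including a patient's own description — receives a value on the same latent scale.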
For every health state shown to the patient, it is determined whether the patient's estimated health-status value outperforms (i.e., is preferred by the patient over) the estimated value of the shown comparator health state. These predicted preferences are then compared with the observed preferences using a kappa coefficient as a measure of agreement. A kappa larger than 0.75 is considered excellent and between 0.4 and 0.75 fair to good [29]. The eRm package in R was used to estimate the MAPR models (LLTM model) and the Rasch model [30]. In total 163 patients were interviewed for this study. Of these, 48 were cancer patients (34 colorectal cancer, 14 breast cancer) and 42 had a liver-related disease or transplant. The number of participating RA patients was 27. The mean age differs across the participating hospitals, with the oldest patients coming from the Radboud Medical Center (Table 1). Overall, some or major problems were reported most often for pain (60.1%) and least often for self-care (17.8%). Major or severe problems for self-care and mood were reported only by patients with liver-related disease or transplant or by paraplegic patients (Table 2). As the distribution of the health states across the study sites shows, the UMCG had more patients with a severe health condition, but there was a reasonable spread over the whole HRQoL continuum for the three hospitals (Additional file 1). Table 1 Characteristics and evaluation assessment of the study population (n = 163) Table 2 Marginal distribution of patients' own classification of their health status based on the five attributes, each with three levels, of the EQ-5D-3L instrument (n = 163) The Guttman scalogram reveals that not all health states and persons are perfectly ordered (Fig. 4); this can be seen from the green dots between the red ones, which indicate misfit. 
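The agreement calculation works as follows. This is a minimal sketch of Cohen's kappa for a 2 × 2 table of observed versus predicted preferences; the counts used in the example are invented for illustration, not the study data.

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table (rows: observed preference,
    columns: predicted preference): kappa = (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is chance agreement."""
    n = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(2)) / n
    row_tot = [sum(table[i]) for i in range(2)]
    col_tot = [table[0][j] + table[1][j] for j in range(2)]
    p_e = sum(row_tot[i] * col_tot[i] for i in range(2)) / (n * n)
    return (p_o - p_e) / (1.0 - p_e)

# Illustrative counts: 35/50 raw agreement, chance agreement 0.5 -> kappa ~ 0.4
kappa = cohens_kappa([[20, 5], [10, 15]])
```

Kappa corrects the raw agreement percentage for the agreement expected by chance, which is why a high raw agreement (such as the 88.2% reported later) can correspond to a lower kappa.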
Given the small number of health states in relation to the small number of patients, this study showed that the Rasch (holistic MAPR) model does not hold on statistical grounds. However, after deleting from the analysis health states that were rather severe and therefore almost uniformly judged as worse than the patients' own health conditions (states 32211, 33323, 32223, 11133, 32313, 22222, 33333, and 23232), the holistic model does hold: an Andersen LR test gave a statistic of 7.21 with 8 dfs (p = 0.514). The order of the health states based on their sum score is similar to the order based on the estimates of the Rasch model. This result is as expected, since the sum score is a sufficient statistic for the Rasch model. The Person-Item Map shows the distribution of patients' own health status (the upper histogram) compared to the assessed health states (the lower histogram) (Fig. 5). This figure shows that more than half of the judged comparator health states were assessed as worse than the patient's health status. Guttman scalogram (green dots between red ones show misfit) The estimated values based on the holistic MAPR model on the latent scale of the items (below: small red bars) are given next to the histogram of the person-parameter distribution (above) The estimated regression coefficients for MAPR model (8) reveal logical differences at all levels (Table 3). Some problems with self-care have the highest impact, followed by some problems with mood, pain, mobility, and usual activities. Severe problems with mood and pain have more impact than the other attributes. Estimates of the health states under the MAPR model (8) give almost the same order as for the Rasch model (Table 4). For the MAPR model, the pairs (11211, 21111), (11131, 11113), and (23232, 32223) have a different order, and the estimated value of health state 33323 is much smaller than in the Rasch model. Table 3 Parameter estimates (se) of MAPR model (Eq. 
8) for the levels 2 and 3 of the five health attributes of the EQ-5D-3L instrument Table 4 Comparison of sum score, health-state estimates based on the Rasch (holistic MAPR) model, and the MAPR model (LLTM). The absolute differences in outcome between the Rasch model (holistic MAPR) and the LLTM (MAPR) model are due to scaling and should be ignored When comparing the conditional likelihoods of the Rasch model and the MAPR model, we found a statistically significant difference (LR statistic = 87.9; df = 6; p < 0.001). This means that the goodness of fit of the MAPR model is lower than that of the Rasch (holistic MAPR) model. However, the correlation between the item parameters as estimated with the Rasch model and the item parameters of the MAPR model is 0.93, so even the more restrictive MAPR model performs well in explaining the item parameters. In 88.2% of the comparisons, the observed preferences agree with the predicted preferences based on the MAPR model. The kappa coefficient equals 0.71 (CI: 0.68–0.74), which is considered fair to good. This article presents a novel approach to measuring health: the multi-attribute preference response model (MAPR). It was developed to quantify health states and patients' own health status on the same unidimensional scale. The response mechanism of this model is insensitive to various biases (e.g., time preference, risk aversion, indifference procedure) that arise with conventional methods (i.e., standard gamble, time trade-off) used to derive values for health states. Moreover, this is the first generic health preference-based model that fully reflects perception and reporting by patients. Besides being grounded in measurement theory, the response tasks are attractive and easy to perform in a self-completion setting. The small study that we conducted to illustrate the procedure and results of our proposed method to measure the quality of health states and patients' own health status showed encouraging results. 
Although the sample size of our empirical study was very modest for performing an item response theory analysis as done here, the estimated regression weights showed a clear logical structure. Values for the small set of health states included in this study could be computed and showed a valid order of the health states as well as interpretable distances between them, compared with results from previous large studies based on conventional measurement methods. Note that in this small study a fixed set of only 17 health states was used, and therefore we could not include interaction terms in the regression equation. In the fully operational MAPR model patients will not be confronted with a fixed set of states, but with a smaller set of health-state descriptions that are similar to their own health status. This will lead to more efficient and robust estimation of the parameters. The MAPR model largely eliminates unwanted mechanisms affecting valuations of health states. Prominent among these is adaptation. Health-state values derived by conventional methods are typically higher when elicited from patients, particularly those with chronic illness or disability, than from non-patients who only imagine themselves in hypothetical health conditions. Adaptation is manifest in almost all standard methods of health measurement, particularly in multi-domain instruments, which are often based on Likert scales as developed within the setting of classical test theory [31]. Moreover, conventional methods for valuing health states stemming from economics (e.g., standard gamble, time trade-off) are complex and require abstract reasoning skills. These drawbacks can now be averted. Measurement with the MAPR model is based on a discrimination principle: a patient's own health status serves as a reference against other (comparator) states. 
This indirect approach to deriving values for health states differs from the conventional valuation techniques used by health economists. These valuation techniques (e.g., standard gamble, time trade-off) request a direct and absolute score (monadic measurement). Because the response task in the MAPR model is simply a preference (rank order) between a patient's own health status (which serves as a reference standard) and a (closely) related hypothetical health state, the assessment is less likely to be affected by 'subjective' motives and easier to accomplish. Patients do not quantify their own health status; they only compare and rank it. This mode of measurement largely prevents biases such as adaptation and coping. From a theoretical and a practical point of view, the MAPR is more attractive than the existing valuation methods, particularly because both the judgmental task and the analysis are executed within one unifying framework. A downside of the MAPR model is that it produces only relative positions of health states. For application in DALYs and QALYs, however, MAPR-derived values need to be rescaled around the position where states are considered to become worse than dead (position of dead = 0) [32, 33]. In conventional valuation methods, 'dead' is not only an element of the task itself but also introduces many methodological and practical problems. Separate exercises are needed to localize the position of 'dead' for the MAPR model. Recent studies suggest promising solutions: separate studies can be conducted to localize the juncture where health states are considered worse than dead [34, 35]. Such additional studies are probably best based on input from a sample of the general population rather than from patients. When applying any conventional Rasch model to derive metric measures, it is assumed that the underlying phenomenon can be represented on a unidimensional scale. 
However, this crucial assumption may be questionable when quantifying a subjective phenomenon such as health, a construct with a rather broad scope. Nevertheless, the overall assumption is that health outcomes such as health status, health-related quality of life, and well-being are unidimensional concepts. Of course, this is true of all data only to some extent. As many researchers have convincingly argued, unidimensionality does not imply only one factor or dimension. Rather, it implies the presence of a dominant dimension and possibly of minor dimensions that do not affect the dominant one; the unidimensionality of the model is therefore a reflection of the assumed unidimensionality of the majority of assessments we use [36]. Our health model is comparable to widely accepted models of intelligence. Typically, cognitive abilities are represented as a three-level hierarchy with numerous narrow factors at the bottom, a handful of broad, more general factors at the intermediate level, and at the apex a single factor, the g factor, which stands for the variance common to all cognitive tasks [37]. The MAPR model can even be extended to offer respondents a large set of candidate attributes (far more than the traditional four to nine attributes in existing instruments). An individual patient could then select those most relevant to his or her assessment. By breaking the fixed-set mold, this MAPR variant leads to a truly patient-centered preference-based health measurement approach. An extended MAPR model would most likely require many thousands of responses from patients. Some solutions for this problem have already been introduced [22, 38, 39]. An alternative method to quantify health states may be the discrete choice (DC) model [40, 41]. It is based on (paired) comparisons of two or more hypothetical health states (and not on a person's own health status). 
In that sense, the difference between the MAPR model and the DC model may seem minor, but it is in fact significant: the DC model only scales health states, not respondents (patients). Several elements related to the MAPR model must be investigated empirically to confirm the assumptions underpinning it and to explore its potential limitations. In particular, the data should show a Guttman structure. As data for the MAPR model are collected in patient groups, suboptimal response data may result. This may be due to problems with interpreting the health attributes and their levels, to taking cognitive shortcuts, and to other factors. This new patient-reported health IRT model can be used as a coherent measurement method and has a profound connection to measurement theories. Apart from developing instruments that can be used in medical settings, the MAPR model can also be used to develop instruments to measure health outcomes in health care. Operated by dedicated data-collection technology with interactive routines, data capture based on this new measurement method becomes simple, and even certain distinct patient populations can be easily approached (e.g., children, elderly). In principle, the MAPR model may be suitable to measure other unidimensional phenomena such as well-being, capabilities, and other subjective attributes that are essentially based on quality [42]. For example, suppose that two attributes take values 1 to 3, so that x i = (x i1 , x i2 ) ∈ {1, 2, 3} × {1, 2, 3}. Since x 2 = (1, 2) is worse than x 1 = (1, 1), we obtain an ordering in the alphas: β 1 < β 2 , which implies β 1 = α 1,1 + α 2,1 < α 1,1 + α 2,2 = β 2 , and thus \( \alpha_{j,k_1} < \alpha_{j,k_2} \) when k 1 < k 2 .
CML: Conditional Maximum Likelihood
CVA: Cerebrovascular Accident
DC: Discrete Choice
HRQoL: Health-Related Quality of Life
IBD: Inflammatory Bowel Disease
IRT: Item Response Theory
LLTM: Linear Logistic Test Model
MAPR: Multi-Attribute Preference Response
MML: Marginal Maximum Likelihood
OPLM: One-Parameter Logistic Model
RA: Rheumatoid Arthritis
WHO: World Health Organization
WHO. Preamble to the Constitution of the World Health Organization as adopted by the International Health Conference, New York, 19-22 June, 1946. Geneva, Switzerland. http://whqlibdoc.who.int/hist/official_records/constitution.pdf. Krabbe PFM. The Measurement of Health and Health Status: Concepts, Methods and Applications from a Multidisciplinary Perspective. San Diego: Elsevier/Academic Press; 2016. Nord E, Enge AU, Gundersen V. QALYs: is the value of treatment proportional to the size of the health gain? Health Econ. 2010;19:596–607. Salomon JA. Techniques for valuing health states. Encyclopedia of Health Economics. 2014;2:454–8. Attema AE, Edelaar-Peters Y, Versteegh MM, Stolk EA. Time trade-off: one methodology, different methods. Eur J Health Econ. 2013;14:53–64. Brazier JE, Dixon S, Ratcliffe J. The role of patient preferences in cost-effectiveness analysis: a conflict of values. PharmacoEconomics. 2009;27 https://0-doi-org.brum.beds.ac.uk/10.2165/11314840-000000000-00000. Krabbe PFM, Tromp N, Ruers TJM, van Riel PLCM. Are patients' judgments of health status really different from the general population? Health Qual Life Outcomes. 2011;9:31. Krabbe PFM. A generalized measurement model to quantify health: the multi-attribute preference response model. PLoS One. 2013;8 Rasch G. Probabilistic models for some intelligence and attainment tests. Expanded edition with foreword and afterword by B.D. Wright. Chicago: University of Chicago Press; 1980. Fischer GH. Einführung in die Theorie Psychologischer Tests. Bern: Verlag Hans Huber; 1974. Andrich D. An elaboration of Guttman scaling with Rasch models for measurement. Sociol Methodol. 1985;15:33–80. Dolan P. Modeling valuation for EuroQol health states. Med Care. 1997;35:1095–108. Sintonen H. The 15D instrument of health-related quality of life: properties and applications. Ann Med. 2001;33:328–36. Richardson J, Sinha K, Iezzi A, Khan MA. 
Modelling the utility of health states with the Assessment of Quality of Life (AQoL) 8D instrument: overview and utility scoring algorithm. Centre for Health Economics; 2011. Anderson JP, Kaplan RM, Berry CC, Bush JW, Rumbaut RG. Interday reliability of function assessment for a health status measure: the Quality of Well-Being Scale. Med Care. 1989;27:1076–84. Feeny D, Furlong W, Torrance GW, Goldsmith CH, Zhu Z, DePauw S, et al. Multiattribute and single-attribute utility functions for the Health Utilities Index Mark 3 system. Med Care. 2002;40:113–28. Scheiblechner H. Das Lernen und Lösen komplexer Denkaufgaben. Zeitschrift für Exp und Angew Psychol. 1972;19:476–506. Fischer GH. Linear logistic test model as an instrument in educational research. Acta Psychol. 1973;37:359–74. Fischer GH. Logistic latent trait models with linear constraints. Psychometrika. 1983;48:3–26. de Boeck P, Wilson M. Explanatory item response models: a generalized linear and nonlinear approach. New York: Springer-Verlag. Scheerens J, Glas C, Thomas SM. Educational evaluation, assessment and monitoring: a systematic approach. New York: Taylor & Francis; 2003. Fox JP. Bayesian item response modeling: theory and applications. New York: Springer; 2010. Parkin D, Devlin NJ, Foo Y. What determines the shape of an EQ-5D index distribution? Med Decis Mak. 2016;36:941–51. Andersen EB. Sufficient statistics and latent trait models. Psychometrika. 1977;24:69–81. Fischer GH. On the existence and uniqueness of maximum-likelihood estimates in the Rasch model. Psychometrika. 1981;46:59–77. Fischer GH. Unidimensional linear logistic Rasch models. In: Hambleton RK, van der Linden WJ, editors. Handbook of modern item response theory. New York: Springer-Verlag; 1997. p. 225–43. Fischer GH. Linear logistic test models. In: Encyclopedia of social measurement; 2005. p. 505–14. Andersen EB. A goodness of fit test for the Rasch model. Psychometrika. 1973;38:123–40. Fleiss JL. 
Statistical methods for rates and proportions. 2nd ed. New York: John Wiley; 1981. Mair P, Hatzinger R, Maier MJ. eRm: extended Rasch modeling [computer software]. R package version 0.15-4. http://cran.r-project.org/package=eRm Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. 5th ed. Oxford: Oxford University Press; 2015. Weinstein M, Torrance G, McGuire A. QALYs: the basics. Value Health. 2009;12 Suppl:55–59. Fanshel S, Bush J. A health-status index and its application to health-services outcomes. Oper Res. 1970;18:1021–66. Scalone L, Stalmeier PFM, Milani S, Krabbe PFM. Values for health states with different life durations. Eur J Health Econ. 2015;16:917–25. van Hoorn RA, Donders ART, Oppe M, Stalmeier PM. The better than dead method: feasibility and interpretation of a valuation study. PharmacoEconomics. 2014;32:789–99. Panayides P, Robinson C, Tymms P. Rasch measurement: a response to Goldstein. Br Educ Res J. 2015;41:180–2. Neisser U, Boodoo G, Bouchard TJ, Boykin AW, Brody N, Ceci SJ, et al. Intelligence: knowns and unknowns. Am Psychol. 1996;51:77–101. Albert JH. Bayesian estimation of normal ogive item response curves using Gibbs sampling. J Educ Stat. 1992;17:251–69. Chrzan K. Using partial profile choice experiments to handle large numbers of attributes. Int J Mark Res. 2010;52:827–40. Bansback N, Brazier JE, Tsuchiya A, Anis A. Using a discrete choice experiment to estimate health state utility values. J Health Econ. 2012;31:306–18. Krabbe PFM, Devlin NJ, Stolk EA, Shah KK, Oppe M, van Hout B, et al. Multinational evidence on the feasibility and consistency of a discrete choice model in quantifying EQ-5D-5L health states. Med Care. 2014;52:935–43. Barofsky I. Quality: its definition and measurement as applied to the medically ill. New York: Springer; 2012. 
We want to thank Jiske Verburg, Martine de Zoete, Kees Brandsema, Katja Morsink, Colin Koel, Gerda Drent, and Noor Tromp for their dedicated effort in collecting data from the patients. The anonymized dataset is available from the corresponding author. Department of Health Technology and Services Research, Faculty of Behavioural, Management and Social Sciences, Technical Medical Centre, University of Twente, PO Box 217, 7500 AE, Enschede, The Netherlands Catharina G. M. Groothuis-Oudshoorn Department of Mathematics and Computer Science, Eindhoven University of Technology, PO Box 513, 5600 MB, Eindhoven, The Netherlands Edwin R. van den Heuvel University Medical Center Groningen, Department of Epidemiology, University of Groningen, PO Box 30.001, 9700 RB, Groningen, The Netherlands Paul F. M. Krabbe CGO participated in the design of the study, performed the statistical analysis, interpreted the results of the analysis and drafted the manuscript; EH advised on the mathematics and the statistical analysis, interpreted the results of the analysis, and drafted the manuscript; PK participated in the design of the study, interpreted the results of the analysis and drafted the manuscript. All authors read and approved the final manuscript. Correspondence to Catharina G. M. Groothuis-Oudshoorn. The data collection among cancer and RA patients was approved by the Central Committee on Research Involving Human Subjects (region Arnhem-Nijmegen), the Medical Ethics Review Committee Twente (METC/14124), and the Medical Ethics Review Committee UMCG (METC 2015/496). They issued a waiver for this study, indicating that the pertinent Dutch legislation (the Medical Research Involving Human Subjects Act) did not apply to this non-interventional study. Formal informed consent was therefore not mandatory. 
After sending an information letter about the study and the research task, consent to participate was obtained from all patients by a research nurse. Consent to participate was given verbally. All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare that they have no financial relationships with any organizations that might have an interest in the submitted work in the previous 3 years, and no competing interests or other relationships or activities that could appear to have influenced the submitted work. As an extension of the IRT-based measurement model presented in this paper, additional measurement models, tools and instruments are developed by PFMK as part of academic/commercial activities. Additional file 1: Classification of the patients in the three studies of their own health status on the EQ-5D-3L. (DOCX 21 kb) Groothuis-Oudshoorn, C.G.M., van den Heuvel, E.R. & Krabbe, P.F.M. A preference-based item response theory model to measure health: concept and mathematics of the multi-attribute preference response model. BMC Med Res Methodol 18, 62 (2018). doi:10.1186/s12874-018-0516-8 Keywords: Health-related quality of life; Linear logistic test model; Patient-reported measurement; Rasch model; Data analysis, statistics and modelling
Various far-field hydrological responses during the 2015 Gorkha earthquake at two distant wells Xudong Huang1 & Yu Zhang ORCID: orcid.org/0000-0002-8528-77461,2 Aquifer hydraulic parameters can change during earthquakes. Continuous monitoring of the response of water level to seismic waves or solid Earth tides provides an opportunity to document how earthquakes influence hydrological properties. Here, we use data from two groundwater wells, Dian-22 (D22) and Lijiang (LJ), in the southeast Tibet Plateau in response to the 2015 Mw 7.8 Gorkha earthquake to illustrate hydrological implications. The coherences of water level and seismic waves before and after the far-field earthquake show systematic variations, which may confirm the influence of coseismic dynamic shaking at high frequencies (f > 8 cpd). The tidal response of water levels in these wells shows abrupt coseismic changes in both phase shift and amplitude ratio after the earthquake, which may be interpreted as the switching-on of vertical permeability in the semiconfined aquifer at the D22 well, or enhanced leakage in the unconfined aquifer at the LJ well. Using continuous short-term transmissivity monitoring, we show a possible coseismic response lasting about 10 days, followed by rapid healing, in response to the causal earthquake. Thus, the dynamic shaking during the Gorkha earthquake may have caused the short-term aquifer responses by reopening of preexisting vertical fractures and later healing, at epicentral distances of about 1500 km. It has been widely reported that earthquakes can cause various hydrological responses, such as changes in groundwater level (Roeloffs 1998; Brodsky et al. 2003), increases in stream flow (Muir-Wood and King 1993; Manga 2001; Wang et al. 2004a; Wang and Manga 2015), water-temperature variations (Mogi et al. 1989; Shi et al. 2007; Wang et al. 2013; Shi and Wang 2014; Zeng et al. 2015; Ma 2016) and chemical composition release (Claesson et al. 2004; Skelton et al. 
2014; Zeng et al. 2015), which are commonly interpreted as induced underground water transport (e.g., Claesson et al. 2007; Wang et al. 2013). Different mechanisms have been proposed to explain these coseismic transport-enhancement phenomena, including changes in aquifer permeability or storativity (Rojstaczer et al. 1995; Wang et al. 2004a; Manga et al. 2012; Zhang et al. 2019a, b), static-stress and dynamic-stress induced pore pressure diffusion (Muir-Wood and King 1993; Jónsson et al. 2003), consolidation/liquefaction of saturated sediments (Manga 2001; Manga et al. 2003), rupturing and unclogging of fractures (Sibson and Rowland 2003; Wang et al. 2004b), and water-recharging tanks (Kagabua et al. 2020). In phenomenological analyses of this underground flow, a coseismic change in permeability is often adopted, which is usually attributed to horizontal flow in observation wells (Elkhoury et al. 2006; Xue et al. 2013; Shi et al. 2015), since the permeability of a layered groundwater system in the horizontal direction is normally larger than that in the vertical direction (Liao et al. 2015). Recent studies show that near-field and intermediate-field earthquakes may also breach the aquitards between aquifers and increase vertical permeability (Wang et al. 2004a, 2018; Shi and Wang 2014, 2016; Zhang et al. 2019a, b), which can explain some unexpected near-field aquifer hydrological responses. Groundwater-level monitoring plays an important part in the earthquake prediction program in China, and a nationwide groundwater monitoring well network has been constructed for this purpose (Shi et al. 2015). In this work, we examine the water-level responses at the Dian-22 (D22) and Lijiang (LJ) wells at the southeast margin of the Tibet Plateau due to the 2015 Mw 7.8 Nepal Gorkha earthquake (Fig. 1). The distant earthquake (> 1000 km) induced distinct coseismic water-level responses in these two adjacent wells on successive days. 
The response at the D22 well may be caused by switched-on coseismic vertical diffusion, while that at the LJ well may be caused by a coseismic enhancement in vertical aquifer leakage. This indicates that seismic waves may induce vertical groundwater flow even in the far field. We interpret the earthquake-induced changes using moving time-window coherence with regard to seismic and atmospheric excitations, to identify the time-dependent hydrological responses in the preseismic, coseismic and postseismic phases. The tidal responses of groundwater in these wells are then interpreted in terms of transmissivity variations obtained by continuous tidal analysis using different groundwater flow models. Furthermore, we infer that the unclogging and reclogging of preexisting fractures by seismic shaking during the earthquake, together with local site amplification, may favor this effect on shallow hydrological systems at epicentral distances > 1000 km. Map showing the locations of the D22 well and LJ well (red solid triangles). 'Beach ball' shows the epicenter and focal mechanism of the 2015 Mw 7.8 Gorkha earthquake (Avouac et al. 2015). Black lines denote the location of surface faults on the Chinese mainland (Deng et al. 2006). The blue and purple lines indicate the variation amplitude and duration of water-level responses to the Gorkha earthquake, respectively Observations and data The April 2015 Nepal Gorkha earthquake ruptured the Main Himalayan Thrust (MHT) over ~ 140 km east–west and ~ 50 km across strike, 80 km WNW of Kathmandu, with a hypocentral depth of ~ 15 km. The focal mechanism indicated thrusting on a sub-horizontal fault dipping at ~ 10° north; the rupture nucleated near the brittle–ductile transition and propagated east along the MHT but did not reach the surface (see Fig. 1; Avouac et al. 2015; Elliott et al. 2016). We have collected the water-level data from two distant wells with distinct coseismic responses to the earthquake, located at the south boundary of the Tibet Plateau (also see Fig. 
1). The D22 well is located near the center of the Luxi Basin. According to drilling data, the 200.17 m deep well is cased with pipe to 95.57 m, with a screened interval between 95.57 and 200.17 m. It is drilled with a diameter of 168 mm to 6.1 m, and with a diameter of 146 mm from 6.1 to 56.84 m. The diameter between 56.84 and 95.57 m is 127 mm, and at greater depths the well has no casing (see details in Fig. 2a and Table 1). The LJ well is situated at the northern end of the Honghe (HH) fault zone, at the intersection of the Lijiang–Jianchuan (LJ-JS) fault and the Zhongdian–Dali (ZD-DL) fault (Feng et al. 2004). The well is drilled to 310 m and cased with pipe to 240 m, with a screened interval between 240 and 310 m. Within 20 m depth, the well diameter is 194 mm; the diameter narrows to 166 mm down to a depth of 80 m. At depths of 80–240 m the diameter is 127 mm, and the remainder of the well has a diameter of 108 mm (see details in Fig. 2b and Table 1). Wellbore structure and lithology of the D22 well (a) and LJ well (b) Table 1 Observation wells to Gorkha earthquake The water level is measured by an LN-3A digital water-level gauge (Institute of Seismic Science, China Earthquake Administration), which converts digital pressure readings into groundwater level. The sampling interval is 1 min with a resolution of 1 mm, and the absolute accuracy is 0.2%. Seismic waves at the wells are recorded by CTS-1 broadband seismometers with a sampling interval of 0.01 s. Seismic-wave and water-level data collected from February 8 to August 7, 2015 are analyzed. Coseismic response identifications Preseismic, coseismic and postseismic water-level changes caused by the Gorkha earthquake are clearly identified in the seismic-wave and water-level data (Figs. 3 and 4). Compared with the long-term water-level oscillation, the water levels show obvious coseismic impulses followed by short-term sustained postseismic variations, referred to as the duration times (see Fig. 1).
The approximate durations of the short-term sustained coseismic water-level changes (the recovery times) are measured as the time for the water level to decay to 5% of the maximum coseismic response after removal of the tidal response. The amplitudes of the coseismic water-level changes are also shown in Table 1 and Fig. 1. We calculate the difference between the levels within 2 h after the origin time of the Gorkha earthquake to obtain the amplitude of the oscillation in each well (Ma and Huang 2017). Vertical velocity seismograms at the D22 (a) and LJ (b) wells; the unit is nm/s. The red dotted lines represent the earthquake origin time. PGV is the peak ground velocity of the seismic wave. RW is the arrival time of the Rayleigh wave Water levels at the D22 (a) and LJ (b). The red dotted lines represent the earthquake origin time. RW is the arrival time of the Rayleigh wave. TR is the tidal response of the water level On comparing the seismic waves (Fig. 3) with the water levels (Fig. 4), it is easy to distinguish the coseismic water-level responses to the Gorkha earthquake in the D22 well and LJ well, with maximum amplitudes of 0.76 m and 0.16 m, respectively. The duration times of the coseismic water-level changes are 109 min and 97 min. Careful comparison of the seismic-wave and water-level time windows shows that the coseismic water-level response mainly corresponds to the Rayleigh surface-wave arrival times (see Figs. 3 and 4). These results suggest that the oscillation amplitudes and subsequent durations of the coseismic water-level changes are associated with teleseismic surface-wave oscillations. Coseismic coherency analysis To estimate how the seismic waves influence the water level, we first calculated the ordinary coherence functions \({\gamma }_{xy}^{2}\) among the water level, vertical velocity seismograms and barometric pressure for each station (Lai et al. 2013).
The ordinary coherence function is defined as $${\gamma }_{xy}^{2}=\frac{{\left|{G}_{xy}(\omega )\right|}^{2}}{{G}_{xx}\left(\omega \right){G}_{yy}(\omega )},$$ where \({G}_{xx}(\omega )\) and \({G}_{yy}(\omega )\) are the power spectra of the two signals, respectively, and \({G}_{xy}(\omega )\) is the cross-power spectrum between them. In the calculation, we use 3-day data segments around the earthquake, from April 23, 2015 to April 28, 2015. These data are continuous and stable. The time window and overlap are 1024 min and 512 min, respectively. We calculate the coherence before, around and after the earthquake origin time (Figs. 5 and 6). Previous studies show that barometric pressure and the Earth tide are two persistent factors affecting water-level change (e.g., Lai et al. 2013; Zhang et al. 2019a, b). Because barometric pressure is one driver of well water-level change, the coseismic influence of seismic waves on the water level may be contaminated by it. We must therefore exclude the possibility that barometric pressure influences the seismic-wave signal before attributing the coseismic water-level change to seismic waves. The transfer efficiency of the semidiurnal tide to the water level is greatest around 1–8 cpd. In the low-frequency band (f < 1 cpd), barometric pressure also shows high transfer efficiency for most wells, with ordinary coherence functions up to 0.9. In the high-frequency band (f > 8 cpd), both transfer efficiencies to the water level decrease (Lai et al. 2013). However, in the D22 well we observe high efficiencies from barometric pressure, which may be caused by the in-well barometer. The coseismic shaking breaks this high coherence in the high-frequency band (Fig. 5c, second panel), while the coherence between water level and velocity is enhanced relative to the preseismic and postseismic stages (see Fig. 5a, second panel).
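The ordinary coherence function above can be estimated with Welch-averaged spectra. The following is a minimal sketch (entirely synthetic 1-min records with illustrative amplitudes, not the actual well data) using `scipy.signal.coherence` with the 1024-min window and 512-min overlap quoted above:

```python
import numpy as np
from scipy.signal import coherence

# Synthetic 6-day records at 1-min sampling (as for the LN-3A gauge).
fs = 1440.0                      # samples per day, so frequencies come out in cpd
t = np.arange(6 * 1440) / fs     # time in days
rng = np.random.default_rng(0)

# Water level: a semidiurnal (M2-like, ~1.93 cpd) tide plus noise.
water = 0.05 * np.sin(2 * np.pi * 1.9323 * t) + 0.01 * rng.standard_normal(t.size)
# Barometric pressure sharing the same tidal line (coherent near 2 cpd).
baro = 5.0 * np.sin(2 * np.pi * 1.9323 * t) + 1.0 * rng.standard_normal(t.size)

# 1024-min window with 512-min overlap, as in the text.
f, gamma2 = coherence(water, baro, fs=fs, nperseg=1024, noverlap=512)

# Coherence should peak near the shared semidiurnal frequency.
band = (f > 1) & (f < 8)
print(f"peak coherence in 1-8 cpd band: {gamma2[band].max():.2f}")
```

With a shared tidal line well above the noise floor, the coherence in the 1–8 cpd band approaches unity, mirroring the high semidiurnal transfer efficiencies discussed above.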
At the postseismic stage, the low-frequency (f < 8 cpd) transfer efficiencies from water level and barometric pressure to velocity increase (up to 0.7) in the semidiurnal tide band, which may represent in-phase responses of the semidiurnal tides and postseismic hydrological recovery. The coseismic coherence of barometric pressure and vertical velocity shows a slight decrease at high frequencies. In the low-frequency band, the increased coherence may represent the tide transfer efficiencies in both time series. In the LJ well, we observe a similar water-level response to the velocity excited by the seismic waves at the different stages (Fig. 6a). The barometric transfer efficiencies to the water level, mainly concentrated at 1–8 cpd, do not show an obvious coseismic change (Fig. 6c), as previously noted (Lai et al. 2013). It is noted that the coherence between the velocity and barometric pressure shows a similar coseismic decrease in the high-frequency band (f > 8 cpd) to that between the water level and barometric pressure in the D22 well (compare Figs. 5c and 6b). Open-well barometric and seismic observations may account for this correspondence. The relatively high coherence outside the coseismic stage may come from wind noise. In short, the coseismic shaking of the Gorkha earthquake acted directly in the high-frequency band (f > 8 cpd) and further switched the low-frequency semidiurnal tide transfer efficiencies (1–8 cpd), which illustrates the far-field hydrological responses to the great earthquake. Gorkha earthquake preseismic, coseismic and postseismic coherences between water level and vertical velocity (a), barometric pressure and vertical velocity (b), as well as water level and barometric pressure (c) in the D22 well. WL, VV and BP are abbreviations of water level, vertical velocity and barometric pressure, respectively.
Gorkha earthquake preseismic, coseismic and postseismic coherences between water level and vertical velocity (a), barometric pressure and vertical velocity (b), as well as water level and barometric pressure (c) in the LJ well. WL, VV and BP are abbreviations of water level, vertical velocity and barometric pressure, respectively Aquifer tidal response analysis Among the tidal constituents, the O1 and M2 tides have large amplitudes and low barometric contamination. Therefore, they are the constituents most widely used in tidal analysis (e.g., Zhang et al. 2016; Wang et al. 2019). The M2 tide (12.42 h period) is preferred for its higher accuracy (Hsieh et al. 1987; Rojstaczer and Agnew 1989; Doan et al. 2006), and we use it in the following tidal analysis. The amplitude and phase responses of the water level to the M2 tide can reflect aquifer storativity and permeability around the well (Hsieh et al. 1987; Elkhoury et al. 2006; Doan et al. 2006; Xue et al. 2013). In a confined system, high permeability usually causes small phase lags, whereas low permeability results in large phase lags. The amplitude response primarily measures specific storage (Zhang et al. 2016). According to Cooper et al. (1965), the steady fluctuation of the water level in a well occurs at the same frequency as the harmonic pressure-head disturbance in the aquifer, but with a different amplitude and a phase shift. Hsieh et al.
(1987) described the pressure-head disturbance and water-level response as $${h}_{f}={h}_{0}\text{exp}\left(i\omega t\right),$$ $$x={x}_{0}\text{exp}(i\omega t),$$ where \({h}_{f}\) denotes the fluctuating pressure head in the aquifer, \({h}_{0}\) is the complex amplitude of the pressure-head fluctuation, \(x\) is the water level relative to the static position, \({x}_{0}\) is the complex amplitude of the water-level fluctuation, and t is time. \(\omega =2\pi /\tau\) is the angular frequency of the fluctuation, where \(\tau\) is the period of the fluctuation. The ratio of the water-level amplitude to the pressure-head amplitude defines the amplitude response A as $$A=\left|{x}_{0}/{h}_{0}\right|.$$ The phase shift is defined as $$\eta =\text{arg}({x}_{0}/{h}_{0}),$$ where arg() is the argument of a complex number. Figures 7 and 8 show the M2 tidal amplitude and phase responses of the water level in the D22 and LJ wells, respectively. The blue error bars indicate the root-mean-square error (RMSE) of the tidal analysis. The bottom panels in figures (a) and (b) magnify the period shaded gray in the top panels. The M2 tidal signal is decomposed using the Baytap-G software based on Bayesian statistics (Tamura et al. 1991). Steps and spikes caused by instrument malfunctions or maintenance work are removed before the analysis. The barometric correction is performed with local barometric data. We use the data from February 8 to August 7, 2015, when there were no other large earthquakes. A 10-day time window is selected to determine the amplitude and phase responses, since Elkhoury et al. (2006) have shown that a 240-h (10-day) time window can sufficiently differentiate the M2 and S2 tidal constituents. We have applied the Baytap-G software with different time windows to two different synthetic datasets (see Additional file 1: Text S1–S2 in the supporting information).
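A full Baytap-G decomposition is beyond a short example, but the amplitude response \(A=\left|{x}_{0}/{h}_{0}\right|\) and phase shift \(\eta\) can be illustrated by least-squares fitting of cosine and sine terms at the M2 frequency over a 10-day window. The signals, the 0.8 amplitude ratio, and the +10° phase lead below are synthetic stand-ins, not well data:

```python
import numpy as np

def harmonic_amplitude(t_days, series, freq_cpd):
    """Complex amplitude of one tidal constituent by least-squares fit of
    cos/sin terms at the given frequency (a simple stand-in for a full
    Baytap-G style decomposition)."""
    w = 2 * np.pi * freq_cpd
    G = np.column_stack([np.cos(w * t_days), np.sin(w * t_days),
                         np.ones_like(t_days)])          # plus a constant offset
    c, s, _ = np.linalg.lstsq(G, series, rcond=None)[0]
    # c*cos(wt) + s*sin(wt) = Re[(c - i s) exp(i w t)]
    return c - 1j * s

M2 = 1.9323                                   # cpd
t = np.arange(10 * 1440) / 1440.0             # a 10-day window, 1-min sampling
strain = np.cos(2 * np.pi * M2 * t)           # synthetic tidal strain (unit amplitude)
# Water level responding with amplitude ratio 0.8 and a +10 deg phase lead.
water = 0.8 * np.cos(2 * np.pi * M2 * t + np.deg2rad(10.0))

x0 = harmonic_amplitude(t, water, M2)
h0 = harmonic_amplitude(t, strain, M2)
A = np.abs(x0 / h0)                           # amplitude response
eta = np.degrees(np.angle(x0 / h0))           # phase shift, degrees
print(f"A = {A:.3f}, eta = {eta:.1f} deg")
```

The fit recovers the injected amplitude ratio and positive (leading) phase, the same quantities tracked in Figs. 7 and 8.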
The phase-shift estimations have shown that a 10-day time window is sufficient to separate the M2 and S2 tides and detect the phase changes (see Additional file 1: Figures S1–S4). A previous study (Shi et al. 2013) and our tests validate that the Baytap-G software with this fine time-window selection can be applied appropriately to the M2 tide analysis. In our data processing, the time label for each window is the last of its 10 days, because the coseismic hydrological response is a causal signal in which the past influences the present (Menke and Menke 2016). In this way the coseismic changes are reflected causally in time; otherwise, there would be a delay in registering the coseismic variations in this small-time-step analysis. We find that during the coseismic days the responses show significant variations that break the long-term seasonal background trend. The transient decrease in amplitude ratio and increase in phase shift caused by the Gorkha earthquake can be clearly distinguished and identified in both wells, consistent with the time-dependent coherence analysis. The near-zero phase before the earthquake becomes positive afterward for the D22 well (from 0° to 10°), while the positive phase before the earthquake becomes larger for the LJ well (from 7° to 9°) and recovers in the postseismic phase. When the day of the coseismic water-level response enters the end of the time window, the change appears in the fitted amplitude and phase shift. The variations in amplitude ratio and phase shift grow as the successive post-earthquake days enter the window; when the coseismic day moves out of the beginning of the window, the tidal response decays back to ordinary values. A complete succession of causal impacts may thus induce the strong tidal responses in both amplitude and phase shift in the wells.
Amplitude ratio (a) and phase response (b) of the D22 well at the M2 frequency as a function of time. The amplitude response is the amplitude ratio between Earth tide and water level. The dotted line shows the origin time of the 2015 Gorkha earthquake (Beijing time). Blue error bars indicate the error of the tidal analysis. The bottom panels in figures a and b are magnifications of the period shaded gray in the top panels, respectively Amplitude ratio (a) and phase response (b) of the LJ well at the M2 frequency as a function of time. The amplitude response is the amplitude ratio between Earth tide and water level. The dotted line shows the origin time of the 2015 Gorkha earthquake (Beijing time). Blue error bars indicate the error of the tidal analysis. The bottom panels in figures a and b are magnifications of the period shaded gray in the top panels, respectively Mechanisms for the water-level variations during the earthquake Many studies have applied the tidal response of the water level to study permeability changes of well-aquifer systems (Elkhoury et al. 2006; Xue et al. 2013; Lai et al. 2014; Shi and Wang 2014; Yan et al. 2014; Shi et al. 2015; Zhang et al. 2019a, b). The most common method for estimating permeability change from tidal analysis was demonstrated by Hsieh et al. (1987). For a homogeneous, isotropic, laterally extensive, and confined aquifer, the phase shifts between Earth tides and water level are assumed to be caused by the time required for water to flow into and out of the well, related to the aquifer properties. In this case, the drainage effect of the groundwater level is ignored and the resulting phase shift should always be negative, owing to the time required for water to flow horizontally. The near-zero phase shift in the D22 well sometimes takes negative values, which can be interpreted by this model (Elkhoury et al. 2006).
For the coseismic positive phase shifts observed in the D22 well and in the LJ well, which mean the water level precedes the tidal strain, Hsieh's model no longer suffices. Roeloffs (1996) presented another model in which vertical diffusion of pore pressure to the water table can cause the water level to change in advance. Figure 7 shows that in the D22 well the coseismic phase response becomes positive and gradually restores to near zero after the earthquake. Thus, these phase responses are a combination of the normal phase lag caused by the wellbore storage effect (confined aquifer) and the coseismic phase lead caused by pore-pressure diffusion (imperfectly confined aquifer). These observed phase responses can be considered a measurement of permeability for either horizontal or vertical fluid flow (e.g., Lai et al. 2013, 2014). Estimation of the aquifer property changes The coseismic phase shift of the D22 well, which varies within 0°–10°, could be interpreted as switched-on vertical diffusion (Shi and Wang 2016). Nevertheless, the preseismic and postseismic situations need to be identified. Based on Hsieh's horizontal flow model, the transmissivity T of the aquifer (the rate of water transmission through a unit width of aquifer under a unit hydraulic gradient) can be estimated as (Doan et al. 2006): $$A=({E}^{2}+{F}^{2}{)}^{-1/2},$$ $$\eta =-{\text{tan}}^{-1}(F/E),$$ $$E = 1 - \frac{{\omega r_{c}^{2} }}{2T}\left[ {\Psi Ker\left( \alpha \right) + \Phi Kei\left( \alpha \right)} \right] \approx 1 - \frac{{\omega r_{c}^{2} }}{2T}Kei(\alpha) ,$$ $$F=\frac{\omega {r}_{c}^{2}}{2T}[\Phi Ker(\alpha )-\Psi Kei(\alpha )]\approx \frac{\omega {r}_{c}^{2}}{2T}Ker(\alpha ),$$ $$\alpha =(\omega S/T{)}^{1/2}{r}_{w},$$ where S is the dimensionless storage coefficient, Ker and Kei are the zero-order Kelvin functions, \({r}_{w}\) is the radius of the well, \({r}_{c}\) is the inner radius of the casing, and ω is the frequency of the tide. \({r}_{w}\) and \({r}_{c}\) can be obtained from drilling data (Fig.
2a). \(\Phi =\frac{-\left[Ke{r}_{1}\left(\alpha \right)+Ke{i}_{1}\left(\alpha \right)\right]}{{2}^{1/2}\alpha \left[Ke{r}_{1}^{2}\left(\alpha \right)+Ke{i}_{1}^{2}\left(\alpha \right)\right]}\) and \(\Psi =\frac{-\left[Ke{r}_{1}\left(\alpha \right)-Ke{i}_{1}\left(\alpha \right)\right]}{{2}^{1/2}\alpha \left[Ke{r}_{1}^{2}\left(\alpha \right)+Ke{i}_{1}^{2}\left(\alpha \right)\right]}\), where \(Ke{r}_{1}\) and \(Ke{i}_{1}\) are the first-order Kelvin functions. Finally, with the low phase shifts before and after the earthquake, the horizontal transmissivity Th of the confined aquifer at the D22 well is estimated to be above 10–4 m2/s (Zhang et al. 2019b). In a vertical pore-pressure diffusion model with a positive phase shift, by contrast, the pressure spreads toward the free surface, where the tidal amplitude vanishes (Roeloffs 1996). At tidal frequencies the skin depth may exceed the depth of the confining layer beneath which the well pressure is measured, and the diffusion is described by (Shi and Wang 2016): $$D\frac{{\partial }^{2}p}{\partial {z}^{2}}+B{K}_{u}\frac{\partial \varepsilon }{\partial t}=\frac{\partial p}{\partial t},$$ $$p\left(z\to \infty \right)=B{K}_{u}\varepsilon ,$$ $$p\left(z=0\right)=0,$$ where the solution can be written as $$p\left(z,\omega \right)=B{K}_{u}\varepsilon \left(1-{e}^{-\left(1+i\right)z/\sqrt{2D/\omega }}\right),$$ representing the pore-pressure fluctuation at depth z. B is Skempton's coefficient, \({K}_{u}\) is the bulk modulus of the saturated rock under undrained conditions, ɛ is the change in the volumetric strain, and D is the hydraulic diffusivity. Using the relation for the vertical transmissivity Tv = DS, Fig. 9 shows that the coseismically switched-on vertical transmissivity Tv in the D22 well is above 3 \(\times\) 10–6 m2/s.
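The horizontal-flow expressions above can be evaluated numerically with SciPy's zero-order Kelvin functions, using the simplified small-\(\alpha\) forms of E and F. The well radius and storativity below are illustrative assumptions, not the calibrated D22 values:

```python
import numpy as np
from scipy.special import ker, kei

# Assumed well geometry and storage (illustrative only):
r_w = r_c = 0.0635                    # m, 127-mm-diameter casing
S = 1e-4                              # dimensionless storativity
omega = 2 * np.pi / (12.42 * 3600)    # M2 angular frequency, rad/s

def hsieh_response(T):
    """Amplitude response A and phase shift eta (deg) of the water level to
    the M2 tide for horizontal flow, small-alpha form of Hsieh et al. (1987)."""
    alpha = np.sqrt(omega * S / T) * r_w
    c = omega * r_c**2 / (2 * T)
    E = 1 - c * kei(alpha)
    F = c * ker(alpha)
    A = 1 / np.hypot(E, F)
    eta = -np.degrees(np.arctan2(F, E))
    return A, eta

for T in (1e-4, 1e-6):                # horizontal transmissivity, m^2/s
    A, eta = hsieh_response(T)
    print(f"T = {T:.0e} m^2/s -> A = {A:.3f}, eta = {eta:.1f} deg")
```

With these assumed parameters, Th above 10–4 m2/s yields a near-zero (slightly negative) phase shift, while a much lower transmissivity yields a large negative phase lag, the behavior described in the text.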
Considering that the coherence between water level and barometric pressure is nearly 0.8 both before and after the Gorkha earthquake, without a distinct decline, these results provide evidence that the aquifer was confined before and after the earthquake; the tidal response is also insensitive to changes in the horizontal transmissivity while Th > 10–4 m2/s in this well. The coseismic turn to a positive phase may mean that the seismic waves instantly induced vertical diffusion, which also broke the barometric efficiency in the high-frequency band (see Fig. 5c, second panel). Time-dependent transmissivity of the D22 well based on M2 tide analysis. The dotted line shows the origin time of the 2015 Gorkha earthquake. Tv is calculated from the vertical pore-pressure diffusion model The phase shifts of the LJ well are all positive, which may indicate leakage to the water table, as in the unconfined aquifer model. Figure 10 shows the coseismic enhancement of vertical transmissivity around the Gorkha earthquake. Although the hydrogeological background of a layered aquifer–aquitard system may be debated, a surface hydrological estimation has justified the tide-insensitive high horizontal transmissivity in the LJ well (Liao and Wang 2018). Therefore, the vertical pore-pressure diffusion model is applicable to interpret the coseismic response in the LJ well. Time-dependent transmissivity of the LJ well based on M2 tide analysis. The dotted line shows the origin time of the 2015 Gorkha earthquake. Tv is calculated from the vertical pore-pressure diffusion model For the D22 well, the coseismic positive phase shift from 0° to 10° (Fig. 7) indicates vertical diffusion in a semiconfined well-aquifer system during the earthquake. We use the data from February 8 to August 7, 2015 to analyze the tidal response over a long period and identify the coseismic water-level responses. This indicates that the aquifer, well confined before the Gorkha earthquake, became semiconfined after the arrival of the earthquake shaking.
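Under the vertical pore-pressure diffusion model, the phase of \(p(z,\omega )\) relative to the tidal strain is \(\text{arg}\left(1-{e}^{-\left(1+i\right)z/\sqrt{2D/\omega }}\right)\), a phase lead. The sketch below inverts an assumed +10° coseismic lead for D and then Tv = DS; the sensing depth is taken as the D22 casing depth and the storativity is an assumed value, so the resulting Tv is only an order-of-magnitude illustration:

```python
import numpy as np
from scipy.optimize import brentq

omega = 2 * np.pi / (12.42 * 3600)   # M2 angular frequency, rad/s
z = 95.57                            # m, top of the open interval in D22 (casing depth)
S = 1e-4                             # assumed storativity

def phase_lead(D):
    """Phase (deg) of pore pressure relative to tidal strain for vertical
    diffusion, from p(z) proportional to 1 - exp(-(1+i) z / sqrt(2D/omega))."""
    delta = np.sqrt(2 * D / omega)   # diffusive skin depth, m
    resp = 1 - np.exp(-(1 + 1j) * z / delta)
    return np.degrees(np.angle(resp))

# Invert an observed +10 deg coseismic phase lead for the diffusivity D,
# then convert to vertical transmissivity Tv = D * S.
D = brentq(lambda d: phase_lead(d) - 10.0, 1e-4, 10.0)
Tv = D * S
print(f"D = {D:.3f} m^2/s, Tv = {Tv:.2e} m^2/s")
```

With these assumptions the inferred Tv comes out at the 10–5 m2/s order, comparable to the switched-on values reported for the D22 well in Fig. 9.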
These shifts recover by the 11th day after the earthquake, plausibly related to the time window, which indicates a relatively short-term hydrological process. Such short-term variations are captured by the small time steps but may be missed by long-term tidal analysis. This switch may have been caused by a reopening of vertical fractures (Liao et al. 2015), which would reseal over time (Fig. 7). Therefore, the change of aquifer type lasted 10 days and then recovered abruptly (Fig. 9). Note that the 10-day change represents the vertical reopening, and the subsequent recovery the accompanying reclogging. The abrupt recovery can be interpreted as follows: the high-frequency seismic waves dislodged tiny mineral particles clogging narrow pore throats in possible vertical fractures in the marlstone, after which the minerals were promptly redeposited by the vertical water diffusion once the high-frequency shaking ceased. For the LJ well, the coseismic increase in positive phase shift and decrease in amplitude ratio of the response to the M2 tide interrupt the seasonal downward trend in phase shift and upward trend in amplitude ratio in 2015 (Liao and Wang 2018). Given the timing of the Gorkha earthquake, they can be identified as a causal impact of the earthquake (Fig. 8), consistent with the expected behavior of unconfined aquifers. We can also infer that the Gorkha earthquake had only a short-lived influence on the continuous seasonal vertical leakage at the LJ well, which remained unconfined during the earthquake. Continuous monitoring of the vertical transmissivity (Tv) shows a distinct enhancement on the 8th day after the earthquake that lasted 3 days (see Fig. 10). Full recovery to the preseismic state likewise occurred by the 11th day after the earthquake.
The enhancement of the vertical transmissivity following the Gorkha earthquake may be caused by further unclogging of preexisting fractures, aligned with the local groundwater flow perturbation after the seismic shaking, and is likewise followed by a prompt recovery to the ordinary state. Many studies have documented earthquake-induced permeability switching and enhancement in the near field (epicentral distance < 1 rupture length) and intermediate field (Rojstaczer et al. 1995; Manga and Rowland 2009; Manga et al. 2012; Shi and Wang 2014, 2016; Wang and Manga 2015; Wang et al. 2018). Postseismic healing or reclogging of fractures will restore the permeability (Xue et al. 2013; Shi and Wang 2015). The epicentral distances of the two wells from the Gorkha earthquake are similar and lie in the far field (several rupture lengths, ~ 1500 km; see Table 1). Using the empirical seismic energy estimation (Wang 2007): $$\text{log}\left(r\right)=0.48M-0.33\,\text{log}\,e\left(r\right)-1.4,$$ where r is the epicentral distance in kilometers, M is the earthquake magnitude, and e is the seismic energy density (in Jm−3). Using this relation, derived in southern California, the seismic energy density was \(3.4\times {10}^{-3}\) Jm−3 at the D22 well and \(2.8\times {10}^{-3}\) Jm−3 at the LJ well, which is not consistent with the peak ground velocity (PGV) estimated from the seismograms (Fig. 3): the PGV at the LJ well was two orders of magnitude greater. However, its epicentral distance is longer (1530.9 km), and the magnitude and duration of its coseismic water-level response were smaller (Fig. 1 and Table 1), which may account for the correspondingly weaker water-level response. We also calculated the static strain caused by the Gorkha earthquake based on the Okada model (Lin and Stein 2004; Toda et al. 2005; Zhang et al. 2015), which turns out to be of order 10−12 (positive for dilatation).
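The empirical relation above can be solved for the seismic energy density e(r). A quick numerical check (the 1530.9 km LJ distance is from the text; the ~1500 km used for D22 is an illustrative round number, since the exact distance is in Table 1):

```python
import numpy as np

def energy_density(M, r_km):
    """Seismic energy density e (J/m^3) inverted from the empirical relation
    log r = 0.48 M - 0.33 log e(r) - 1.4 (Wang 2007), with r in km."""
    log_e = (0.48 * M - 1.4 - np.log10(r_km)) / 0.33
    return 10.0 ** log_e

# Gorkha earthquake, Mw 7.8; the D22 distance is an illustrative ~1500 km.
for name, r in (("D22", 1500.0), ("LJ", 1530.9)):
    print(f"{name}: e = {energy_density(7.8, r):.1e} J/m^3")
```

Both distances give energy densities of order 10−3 Jm−3, consistent with the values quoted above and with the ~10−3 Jm−3 triggering threshold discussed below.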
This static strain is too small to cause a water-level rise, indicating that the effects of static strain on the coseismic water-level change in the far field are negligible. The water-level changes can therefore only be caused by the dynamic strain carried by the seismic waves. We infer that local site amplification may produce these anomalous seismic responses, which may in turn differentiate the hydrological responses under the different hydrogeological backgrounds (Liao and Wang 2018). The main aquifer lithology of the LJ well is dolomitic limestone, which acts as an aquitard but is more prone to fracture development than the marlstone of the D22 well (e.g., Mavko et al. 2003). By the rough estimate of seismic energy at the two wells, about \({10}^{-3}\) Jm−3 is capable of triggering distinct hydrological responses (Wang and Manga 2015). Therefore, it is reasonable to deduce that whether the vertical fractures become active and then fully heal at these two far-field wells can be attributed to the high-frequency seismic waves. An aquifer-type change may occur at the D22 well; the unclogging and reclogging may represent the movement and accumulation of tiny colloidal mineral particles in narrow pore throats induced by the modest seismic energy of the earthquake (see Fig. 3). Comparing Figs. 9 and 10, we see a rapid healing process over the roughly 10-day period after the Gorkha earthquake. At the D22 well, the switched vertical transmissivity changed from \(0\) m2/s to around \(1\times {10}^{-5}\) m2/s after the earthquake and then decreased again to \(0\) m2/s upon total vertical healing. The distinct vertical transmissivity changes induced by the earthquake in the LJ well lasted 10 days. The persistent vertical transmissivity changed from about \(1.3\times {10}^{-5}\) m2/s to 1.7 \(\times {10}^{-5}\) m2/s, but returned to \(1.2\times {10}^{-5}\) m2/s, as in the preseismic stage.
This resealing may have been caused by partial blocking of the preexisting fractures opened by the Gorkha earthquake, indicating that the local hydrogeological conditions (e.g., permeability, aquifer lithology, and fracture aperture) are important to the recovery process. The coherence of water level and seismic waves indicates that the coseismic hydrogeological responses were induced by high-frequency (f > 8 cpd) ground oscillations carrying high dynamic seismic energy. The recoveries took only 10 days, which also means the high-frequency impact did not produce a persistent change in the aquifer system, unlike that reported after the 2011 Tohoku earthquake (see Zhang et al. 2019b). Only with short-time-step observations can this short-term change be discovered. The wells are distant (about 1500 km) from the epicenter, and both are affected by the geological evolution of the Tibetan Plateau. The various far-field hydrogeological responses, especially a coseismic aquifer-type change to semiconfined, should be considered a significant factor in earthquake fluid monitoring in seismically active areas exposed to great earthquakes. The data used in this study can be obtained from the National Earthquake Data Center of the China Earthquake Administration (http://earthquake.cn/). Avouac JP, Meng LS, Wei SJ, Wang T, Ampuero JP (2015) Lower edge of locked Main Himalayan Thrust unzipped by the 2015 Gorkha earthquake. Nat Geosci 8(9):708–711. https://doi.org/10.1038/Ngeo2518 Brodsky EE, Roeloffs E, Woodcock D, Gall I, Manga M (2003) A mechanism for sustained groundwater pressure changes induced by distant earthquakes. J Geophys Res 108(B8):503–518. https://doi.org/10.1029/2002jb002321 Claesson L, Skelton A, Graham C, Dietl C, Morth M, Torssander P, Kockum I (2004) Hydrogeochemical changes before and after a major earthquake. Geology 32(8):641–644.
https://doi.org/10.1130/G20542.1 Claesson L, Skelton A, Graham C, Mörth CM (2007) The timescale and mechanisms of fault sealing and water-rock interaction after an earthquake. Geofluids 7(4):427–440. https://doi.org/10.1111/j.1468-8123.2007.00197.x Cooper HH, Bredehoeft JD, Papadopulos IS, Bennett RR (1965) The response of well-aquifer systems to seismic waves. J Geophys Res 70(16):3915–3926. https://doi.org/10.1029/JZ070i016p03915 Deng QD, Zhang PZ, Ran YK (2006) Distribution of active faults in China (1:4000000). Science Press, Beijing Doan ML, Brodsky EE, Prioul R, Signer C (2006) Tidal analysis of borehole pressure—a tutorial. Schlumberger Research Report Elkhoury JE, Brodsky EE, Agnew DC (2006) Seismic waves increase permeability. Nature 441(7097):1135–1138. https://doi.org/10.1038/nature04798 Elliott JR, Jolivet R, Gonzalez PJ, Avouac JP, Hollingsworth J, Searle MP, Stevens VL (2016) Himalayan megathrust geometry and relation to topography revealed by the Gorkha earthquake. Nat Geosci 9(2):174–180. https://doi.org/10.1038/Ngeo2623 Feng WP, Zhang JF, Tian YF, Gong LX (2004) Analysis of the characteristic about the Honghe active fault zone based on the ETM plus remote sensing images. Int Geosci Remote Sensing Symp 5:2985–2987 Hsieh PA, Bredehoeft JD, Farr JM (1987) Determination of aquifer transmissivity from earth tide analysis. Water Resour Res 23(10):1824–1832. https://doi.org/10.1029/WR023i010p01824 Jónsson S, Segall P, Pedersen R, Björnsson G (2003) Post-earthquake ground movements correlated to pore-pressure transients. Nature 424(6945):179–183. https://doi.org/10.1038/nature01776 Kagabu M, Ide K, Hosono T, Nakagawa K, Shimada J (2020) Describing coseismic groundwater level rise using tank model in volcanic aquifers, Kumamoto, southern Japan. J Hydrol 582:124464. 
https://doi.org/10.1016/j.jhydrol.2019.124464 Lai GJ, Ge HK, Wang WL (2013) Transfer functions of the well-aquifer systems response to atmospheric loading and Earth tide from low to high-frequency band. J Geophys Res 118(5):1904–1924. https://doi.org/10.1002/jgrb.50165 Lai GJ, Ge HK, Xue L, Brodsky EE, Huang FQ, Wang WL (2014) Tidal response variation and recovery following the Wenchuan earthquake from water level data of multiple wells in the nearfield. Tectonophysics 619:115–122. https://doi.org/10.1016/j.tecto.2013.08.039 Liao X, Wang CY (2018) Seasonal permeability change of the shallow crust inferred from deep well monitoring. Geophys Res Lett 45(20):11130–11136. https://doi.org/10.1029/2018gl080161 Liao X, Wang CY, Liu CP (2015) Disruption of groundwater systems by earthquakes. Geophys Res Lett 42(22):9758–9763. https://doi.org/10.1002/2015gl066394 Lin J, Stein RS (2004) Stress triggering in thrust and subduction earthquakes and stress interaction between the southern San Andreas and nearby thrust and strike-slip faults. J Geophys Res 109(B2):B02303. https://doi.org/10.1029/2003jb002607 Ma Y (2016) Earthquake-related temperature changes in two neighboring hot springs at Xiangcheng, China. Geofluids 16(3):434–439. https://doi.org/10.1111/gfl.12161 Ma YC, Huang FQ (2017) Coseismic water level changes induced by two distant earthquakes in multiple wells of the Chinese mainland. Tectonophysics 694:57–68. https://doi.org/10.1016/j.tecto.2016.11.040 Manga M (2001) Origin of postseismic streamflow changes inferred from baseflow recession and magnitude-distance relations. Geophys Res Lett 28(10):2133–2136. https://doi.org/10.1029/2000gl012481 Manga M, Rowland JC (2009) Response of Alum Rock springs to the October 30, 2007 Alum Rock earthquake and implications for the origin of increased discharge after earthquakes. Geofluids 9(3):237–250.
https://doi.org/10.1111/j.1468-8123.2009.00250.x Manga M, Brodsky EE, Boone M (2003) Response of streamflow to multiple earthquakes. Geophys Res Lett 30(5):1214. https://doi.org/10.1029/2002gl016618 Manga M, Beresnev I, Brodsky EE, Elkhoury JE, Elsworth D, Ingebritsen SE, Mays DC, Wang CY (2012) Changes in permeability caused by transient stresses: field observations, experiments, and mechanisms. Rev Geophys 50(2):81–88. https://doi.org/10.1029/2011rg000382 Mavko G, Mukerji T, Dvorkin J (2003) The rock physics handbook. Cambridge University Press, Cambridge Menke W, Menke J (2016) Environmental data analysis with Matlab, 2nd edn. Academic Press, New York Mogi K, Mochizuki H, Kurokawa Y (1989) Temperature-changes in an Artesian Spring at Usami in the Izu Peninsula (Japan) and their relation to earthquakes. Tectonophysics 159(1–2):95–108. https://doi.org/10.1016/0040-1951(89)90172-8 Muir-wood R, King GCP (1993) Hydrological signatures of earthquake strain. J Geophys Res 98(B12):22035–22068. https://doi.org/10.1029/93jb02219 Roeloffs E (1996) Poroelastic techniques in the study of earthquake-related hydrologic phenomena. Adv Geophys 37:135–195. https://doi.org/10.1016/S0065-2687(08)60270-8 Roeloffs EA (1998) Persistent water level changes in a well near Parkfield, California, due to local and distant earthquakes. J Geophys Res 103(B1):869–889. https://doi.org/10.1029/97jb02335 Rojstaczer S, Agnew DC (1989) The influence of formation material properties on the response of water levels in wells to earth tides and atmospheric loading. J Geophys Res 94(B9):12403–12411. https://doi.org/10.1029/JB094iB09p12403 Rojstaczer S, Wolf S, Michel R (1995) Permeability enhancement in the shallow crust as a cause of earthquake-induced hydrological changes. Nature 373(6511):237–239. https://doi.org/10.1038/373237a0 Shi ZM, Wang GC (2014) Hydrological response to multiple large distant earthquakes in the Mile well. China J Geophys Res 119(11):2448–2459. 
https://doi.org/10.1002/2014jf003184 Shi ZM, Wang GC (2015) Sustained groundwater level changes and permeability variation in a fault zone following the 12 May 2008, Mw 7.9 Wenchuan earthquake. Hydrol Process 29(12):2659–2667. https://doi.org/10.1002/hyp.10387 Shi ZM, Wang GC (2016) Aquifers switched from confined to semiconfined by earthquakes. Geophys Res Lett 43(21):11166–11172. https://doi.org/10.1002/2016gl070937 Shi Y, Cao J, Ma L, Yin B (2007) Tele-seismic coseismic well temperature changes and their interpretation. Acta Seismol Sin 29(3):280–289. https://doi.org/10.1007/s11589-007-0280-z Shi ZM, Wang GC, Liu CL, Mei JC, Wang JW, Fang HN (2013) Coseismic response of groundwater level in the three gorges well network and its relationship to aquifer parameters. Chinese Sci Bull 58(25):3080–3087. https://doi.org/10.1007/s11434-013-5910-3 Shi ZM, Wang GC, Manga M, Wang CY (2015) Mechanism of co-seismic water level change following four great earthquakes—insights from co-seismic responses throughout the Chinese mainland. Earth Planet Sc Lett 430:66–74. https://doi.org/10.1016/j.epsl.2015.08.012 Sibson RH, Rowland JV (2003) Stress, fluid pressure and structural permeability in seismogenic crust, North Island. New Zealand Geophys J Int 154(2):584–594. https://doi.org/10.1046/j.1365-246X.2003.01965.x Skelton A, Andrén M, Kristmannsdóttir H, Stockmann G, Mörth CM, Sveinbjörnsdóttir Á, Jónsson S, Sturkell E, Gudrúnardóttir HR, Hjartarson H, Siegmund H, Kockum I (2014) Changes in groundwater chemistry before two consecutive earthquakes in Iceland. Nat Geosci 7(10):752–756. https://doi.org/10.1038/Ngeo2250 Tamura Y, Sato T, Ooe M, Ishiguro M (1991) A procedure for tidal analysis with a Bayesian information criterion. Geophys J Int 104(3):507–516. https://doi.org/10.1111/j.1365-246X.1991.tb05697.x Toda S, Stein RS, Richards-Dinger K, Bozkurt SB (2005) Forecasting the evolution of seismicity in southern California: animations built on earthquake stress transfer. 
J Geophys Res 110(B5):B05S16. https://doi.org/10.1029/2004jb003415 Wang CY (2007) Liquefaction beyond the near field. Seismol Res Lett 78(5):512–517. https://doi.org/10.1785/gssrl.78.5.512 Wang CY, Manga M (2015) New streams and springs after the 2014 Mw6.0 South Napa earthquake. Nat Commun 6:7597. https://doi.org/10.1038/ncomms8597 Wang CY, Wang CH, Manga M (2004b) Coseismic release of water from mountains: evidence from the 1999 (M-W=7.5) Chi-Chi, Taiwan, earthquake. Geology 32(9):769–772. https://doi.org/10.1130/G20753.1 Wang CY, Manga M, Dreger D, Wong A (2004a) Streamflow increase due to rupturing of hydrothermal reservoirs: evidence from the 2003 San Simeon, California. Earthquake Geophys Res Lett 31(10):L10502. https://doi.org/10.1029/2004gl020124 Wang CY, Wang LP, Manga M, Wang CH, Chen CH (2013) Basin-scale transport of heat and fluid induced by earthquakes. Geophys Res Lett 40(15):3893–3897. https://doi.org/10.1002/grl.50738 Wang CY, Doan ML, Xue L, Barbour AJ (2018) Tidal response of groundwater in a leaky aquifer-application to Oklahoma. Water Resour Res 54(10):8019–8033. https://doi.org/10.1029/2018wr022793 Wang CY, Zhu AY, Liao X, Manga M, Wang LP (2019) Capillary effects on groundwater response to earth tides. Water Resour Res 55(8):6886–6895. https://doi.org/10.1029/2019wr025166 Xue L, Li HB, Brodsky EE, Xu ZQ, Kano Y, Wang H, Mori JJ, Si JL, Pei JL, Zhang W, Yang G, Sun ZM, Huang Y (2013) Continuous permeability measurements record healing inside the Wenchuan Earthquake Fault Zone. Science 340(6140):1555–1559. https://doi.org/10.1126/science.1237237 Yan R, Woith H, Wang RJ (2014) Groundwater level changes induced by the 2011 Tohoku earthquake in China mainland. Geophys J Int 199(1):533–548. https://doi.org/10.1093/gji/ggu19 Zeng XP, Lin YF, Chen WS, Bai ZQ, Liu JY, Chen CH (2015) Multiple seismo-anomalies associated with the M6.1 Ludian earthquake on August 3, 2014. J Asian Earth Sci 114:352–361. 
We thank the China Earthquake Administration for providing the data used in this study. This research is financially supported by the National Natural Science Foundation of China (NSFC) under Grant No. 41774119, the special Fundamental Research Funds for the Central Universities (2042017kf0228) and Wuhan University Experiment Technology Project Funding (WHU-2019-SYJS-12).
School of Geodesy and Geomatics, Wuhan University, Wuhan, 430079, China
Xudong Huang & Yu Zhang
Key Laboratory of Geospace Environment and Geodesy, Ministry of Education, Wuhan, 430079, China
Yu Zhang
XH and YZ carried out most of the geologic investigation, data collection, and data pre-treatment. XH finished the calculation and analysis of coherence and tidal responses and calculated the transmissivities. YZ finished the coseismic hydrological analysis. XH and YZ conceived and coordinated the study, drafted and approved the manuscript. All authors read and approved the final manuscript.
Correspondence to Yu Zhang.
40623_2021_1441_MOESM1_ESM.docx
Additional file 1:
Text S1. Two 1-hour-sampling, 60-day synthetic data series, without and with an impulse.
Text S2. Phase-shift estimation using the Baytap-G software.
Figure S1. The 1-hour-sampling, 60-day synthetic data with a phase change in the M2 period on the 31st day. The red dotted line represents the time when the phase change occurs.
Figure S2. The 1-hour-sampling, 60-day synthetic data with a phase change in the M2 period and an impulse on the 31st day. The red dotted line represents the time when the phase change and the impulse occur.
Figure S3. The phase shift with error bars of the 60-day synthetic data without (a) and with the impulse (b) to the M2 tide using the Baytap-G software with different time windows. The highlighted error bars indicate the error of the tidal analysis. The black dotted lines represent the time when the phase change or the impulse occurs. The vertically downward red arrows point to the last time windows that contain the phase change.
Figure S4. The phase shift with error bars of the 60-day synthetic data without (a) and with the impulse (b) to the S2 tide using the Baytap-G software with different time windows. The highlighted error bars indicate the error of the tidal analysis. The black dotted lines represent the time when the phase change or the impulse occurs.
Huang, X., Zhang, Y. Various far-field hydrological responses during 2015 Gorkha earthquake at two distant wells. Earth Planets Space 73, 119 (2021). https://doi.org/10.1186/s40623-021-01441-0
Keywords: Tidal response; Transmissivity
CommonCrawl
\begin{definition}[Definition:Contradictory/Statements] Two statements $p$ and $q$ are said to be '''contradictory''' {{iff}}: :whenever $p$ is true, $q$ is false. and: :whenever $p$ is false, $q$ is true. \end{definition}
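On the usual reading that contradictory statements take opposite truth values under every interpretation, the two clauses combine into a single biconditional. This restatement is added here for illustration and is not part of the ProofWiki source:

```latex
% p and q are contradictory iff p <-> not q holds under every valuation,
% i.e. the only admissible truth-value pairs are (p,q) = (T,F) and (F,T):
p \text{ and } q \text{ are contradictory} \quad\iff\quad \vDash\, p \leftrightarrow \lnot q
```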
ProofWiki
In the rectangle below, line segment $MN$ separates the rectangle into $2$ sections. What is the largest number of sections into which the rectangle can be separated when $4$ line segments (including $MN$) are drawn through the rectangle? [asy] size(3cm,3cm); pair A,B,C,D,M,N; A=(0,0); B=(1.5,0); C=(1.5,1); D=(0,1); draw (A--B--C--D--A); M=(0.8,0); N=(1.2,1); draw(M--N); label("M",M,S); label("N",N,NNE); [/asy] $\underline{\text{Method 1}}$ Make a diagram and draw $4$ lines so that they intersect each other as shown. The number of different sections is $\boxed{11}.$ [asy] draw((0,0)--(6,0)--(6,4)--(0,4)--(0,0)); draw((2,0)--(4,4)); label("N",(4,4),N); label("M",(2,0),S); draw((4,0)--(2,4)); draw((5.5,4)--(0,1.5)); draw((0,3)--(5,0)); [/asy] $\underline{\text{Method 2}}$ Make a table. The original rectangle without lines added is considered to be one section. $$ \begin{array}{|c|c|c|c|c|c|} \hline \text{Total number of lines added} & 0 & 1 & 2 & 3 & 4 \\ \hline \text{Total number of sections} & 1 & 2 & 4 & 7 & ?\\ \hline \end{array} $$ Look for a pattern. Observe that the $1^\text{st}$ added line results in increasing the preceding total of sections by $1,$ the $2^\text{nd}$ added line increases the preceding total of sections by $2,$ the $3^\text{rd}$ added line increases the preceding total of sections by $3.$ It seems that the $4^\text{th}$ added line will increase the preceding total of sections by $4$ and that there will be $7+4$ or $11$ sections. Examine the $4^\text{th}$ line in the diagram below. When the $4^\text{th}$ line intersects the first of the $3$ interior lines, it creates a new section. This happens each time the $4^\text{th}$ line crosses an interior line. When the $4^\text{th}$ line finally ends at a point on the rectangle, it creates a $4^\text{th}$ new section. Thus the $4^\text{th}$ line creates a total of $4$ new sections. 
The answer to the given problem is $\boxed{11}.$ (If a 5th line were added, it would increase the preceding total of sections by 5.) [asy] draw((0,0)--(6,0)--(6,4)--(0,4)--(0,0)); draw((2,0)--(4,4)); label("4",(4,4),N); draw((4,0)--(2,4)); label("$3$",(2,4),NE); draw((5.5,4)--(0,1.5)); label("$1$",(0,1.5),W); draw((0,3)--(5,0)); label("$2$",(0,3), NW); [/asy]
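The recurrence used in Method 2 (the $k^\text{th}$ added line can cross each of the $k-1$ earlier lines, so it creates at most $k$ new sections) is easy to check with a short script. This code is an illustration added here, not part of the original solution:

```python
def max_sections(num_lines):
    """Maximum number of sections a rectangle is cut into by num_lines
    straight line segments drawn across it. Each new line can cross every
    previous line once, adding (crossings + 1) new sections."""
    sections = 1  # the undivided rectangle
    for k in range(1, num_lines + 1):
        sections += k  # the k-th line adds at most k sections
    return sections

# Reproduces the table in Method 2:
print([max_sections(n) for n in range(5)])  # -> [1, 2, 4, 7, 11]
```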
Math Dataset
% Preamble reconstructed so the excerpt below compiles; the original source
% begins at \begin{document}. The \abs and \vect definitions are assumptions,
% not part of the published corrigendum.
\documentclass{article}
\usepackage{amsmath,amssymb,mathtools}
\usepackage{dsfont}
\newcommand{\abs}[1]{\left\lvert #1\right\rvert}
\newcommand{\vect}[1]{\boldsymbol{#1}}
\begin{document} \title{Corrigendum to `Holomorphic Function Spaces on Homogeneous Siegel Domains'} \author{Mattia Calzi \\ Dipartimento di Matematica,\\ Universit\`a degli Studi di Milano,\\ Via C. Saldini 50, 20133 Milano, Italy\\ {\tt [email protected]} \and Marco M.\ Peloso\footnote{The authors are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). The authors were partially funded by the INdAM-GNAMPA Project CUP\_E55F22000270001.} \\ Dipartimento di Matematica,\\ Universit\`a degli Studi di Milano,\\ Via C. Saldini 50, 20133 Milano, Italy\\ {\tt [email protected]}} \maketitle \begin{tabular}{|p{0.9cm}|p{0.9cm}|p{6cm}|p{6cm}|} \hline PAGE & LINE & ERRATA & CORRIGE\\ \hline 6 & 17 & $(\Im w)^{s'-1}$ & $(\Im w)^{2 s'-1}$\\ 6 & 21 & $2s '>s+2$ & $2 s'>s$\\ 6 & 24 & $2 s'>s+1+ 1/\min(1,p)$ & $2s'> s+(1/p-1)_+$\\ 11 & 9 & investigate & investigated\\ 18 & 22 & $i(\zeta^*i \zeta')-i(\zeta'^*i \zeta)$ & $i(\zeta^*i \zeta'-\zeta'^*i \zeta)$\\ 42 & 25 & & delete this line\\ 42 & 27 & $(t,x)\mapsto t x t^*$ and $(t,x)\mapsto t^* x t$ & $(t,x)\mapsto (t x)t^*-x (t^* t^*)+ (x t^*) t^*$ and $(t,x)\mapsto t^*( x t)- (t^* t^*) x+ t^*( t^* x) $\\ 46 & 13 & & add the clarification `(where the product in the last two terms is the ordinary product of matrices)'\\ 56 & -8,-7 & $t_j$ & $e_j$ \\ 58 & -10& $r\to +\infty$ & $\rho\to +\infty$\\ 58 & -6,-4 & $\lim\limits_{r\to+\infty}$ & $\lim\limits_{\rho\to +\infty}$\\ 64 & 4 & $\pi^{(r-1)}(tx t^*)=\pi^{(r-1)}(t) \pi^{(r-1)}(x) \pi^{(r-1)}(t)^*$ & $\pi^{(r-1)}(t\cdot x)=\pi^{(r-1)}(t)\cdot \pi^{(r-1)}(x)$ \\ 64 & 14 & there is $g\in GL(E)$ such that $t\cdot \Phi$ & and for every $t'\in T_+$ such that $\pi^{(r-1)}(t')=t$ there is $g\in GL(E)$ such that $t'\cdot \Phi$\\ 65 & -13 & there are constants $R,C>0$ & for every $R>0$ there is a constant $C>0$\\ 67 & 2 & $ =\abs{t x t^*}\leqslant C_1^2\abs{t} \abs{x} \abs{t}=C_1^2 
\abs{x} \abs{t}^2$ & $\leqslant 3 C_1^2 \abs{x} \abs{t}^2$\\ 77 & 13 & (resp.\ \dots); & or \dots;\\ 77 & 12 & & add the assumption `$\vect s' \in\vect b+\vect d-\frac 1 2 \vect m-(\mathds{R}_+^*)^r$'\\ 77 & -8,-7 & & delete this sentence\\ 83 & -9 & $f$ & $f_{h'}$\\ 83 & -7 & $B(e_\Omega$ & $B_\Omega(e_\Omega$\\ 83 & -3 & $f$ & $f_{h'}$\\ 86 & -2 & $h'\coloneqq \Im z'-\Phi(\zeta')$ & $h'\coloneqq \Im z'-\Phi(\zeta')=t'\cdot e_\Omega$, with $t'\in T_+$\\ 86 & -1 & $h'^{-1}\cdot h$ & $t'^{-1}\cdot h$\\ 88 & -8 & $C_9$ & $C_8$\\ 90 & 14, 18, 20 & $(\vect b+\vect d)$ & $(\vect b+\vect d)/p$\\ 94 & 10 & & delete this line\\ 96 & -8 & $\ell^{p,k}$ & $\ell^{p,q}$\\ 100 & 10 & $\vect s'\in \vect b+\vect d-\frac 1 2 \vect m-(\mathds{R}_+^*)^r$ & $\vect s'\in \frac{1}{p'} (\vect b+\vect d)-\frac{1}{2 p'}\vect m'-(\mathds{R}_+^*)^r$ \\ 100 & 11 & $\vect s+\vect s'\in \frac{1}{\min(1,p)}(\vect b+\vect d)-\frac{1}{2 q'} \vect m-(\mathds{R}_+^*)^r$ & $\vect s+\vect s'\in \frac{1}{\min(1,p)}(\vect b+\vect d)-\frac{1}{2 q'} \vect m-(\mathds{R}_+^*)^r$ or $\vect s+\vect s'\in \frac{1}{\min(1,p)}(\vect b+\vect d)-\mathds{R}_+^r$ if $q'=\infty$ \\ 100 & -11 & for every $R_1>1$ & if $p,q\geqslant 1$, then for every $R_1>1$ \\ 112 & -11 & $\mathrm{d} \pi_\lambda$ & $\pi_\lambda$\\ 140 & -4 & $\vect s\in \sup \Big(\frac{1}{2 q}\vect m+\Big(\frac{1}{2\min(p,p')}-\frac{1}{2 q}\Big)_+\vect m',\frac 1 p (\vect b+\vect d)+\frac{1}{2 q'}\vect m' \Big)+(\mathds{R}_+^*)^r$ & $\vect s\in \frac{1}{2 q}\vect m+\left(\frac{1}{2\min(p,p')}-\frac{1}{2 q}\right)_+\vect m'+(\mathds{R}_+^*)^r$ \\ 141 & 7, 15 & $\Delta^{\vect s''}_\Omega$ & $\Delta^{\vect s''-(\vect b+\vect d)}_\Omega$ \\ 147 & -9 & $\vect s\in \big( \frac 1 p(\vect b+\vect d)+\frac 1 2 \vect m'+(\mathds{R}_+^*)^r\big)$ & $\vect s\in \frac 1 p(\vect b+\vect d)+\frac 1 2 \vect m'+(\mathds{R}_+^*)^r$\\ 151 & -2, -1 & $\vect b+\vect d-\vect s-\vect s'$ & $\vect b+\vect d-\vect s-\vect s',\vect s'$ \\ \hline \end{tabular} We observe 
explicitly that, even though (3) of Proposition 3.39 has been weakened, Corollary 3.40 holds in full generality, as a consequence of~\cite[Corollaries 4.7 and 4.14]{Paralipomena}. In addition, the formulae for $t\cdot x$ and $x\cdot t$ may be found in~\cite[(1.2)]{Geatti}. Finally, the reference cited on page 101, line -3, does not seem sufficient to prove the stated assertion. Cf.~\cite[Proposition 4.6 and Theorem 4.8]{Ishi} for the proof of a more general fact. \end{document}
arXiv
Experiments in Fluids, December 2018, 59:183
Simultaneous micro-PIV measurements and real-time control trapping in a cross-slot channel
Farzan Akbaridoust, Jimmy Philip, David R. A. Hill, Ivan Marusic
Here we report novel micro-PIV measurements around micron-sized objects that are trapped at the centre of a stagnation point flow generated in a cross-slot microchannel using real-time control. The method enables one to obtain accurate velocity and strain rate fields around the trapped objects under straining flows. In previous works, it has been assumed that the flow field measured in the absence of the object is the one experienced by the object in the stagnation point flow. However, the results reveal that this need not be the case, and typically the strain rates experienced by the objects are higher. Therefore, simultaneously measuring the flow field around a trapped object is needed to accurately estimate the undisturbed strain rate (away from the trapped object). By combining the micro-PIV measurements with an analytical solution by Jeffery (Proc R Soc Lond A 102(715):161–179, 1922), we are able to estimate the velocity and strain rate around the trapped object, thus providing a potential fluidic method for characterising mechanical properties of micron-sized materials, which are important in biological and other applications.
A novel combination of classical micro-PIV and real-time flow control setups enabled us to measure the velocity field around a target trapped in the extensional flow, which opens up new vistas of characterisation of the mechanical properties of micron-sized objects.
The online version of this article (https://doi.org/10.1007/s00348-018-2637-6) contains supplementary material, which is available to authorized users.
The authors gratefully acknowledge the Australian Research Council for the financial support of this work. 
This work was performed in part at the Melbourne Centre for Nanofabrication (MCN) in the Victorian Node of the Australian National Fabrication Facility (ANFF).
Supplementary material 1 (MP4 51 KB)
348_2018_2637_MOESM3_ESM.pdf (1.1 MB)
Supplementary material 3 (PDF 1165 KB)
Appendix A: Mechanism of the microfluidic hydrodynamic trap
A.1: Continuous stagnation point repositioning
Figure 13 caption: Different views of the cross-slot junction with the two fixed and variable constraints on the outlets and the effect of the constraint on the stagnation point position and streamlines
Figure 14 caption: The effect of displacing the stagnation point on the flow streamlines and the trajectory of an object in the cross-junction. The red ellipse, solid black circle, hollow circle and hollow cross represent the target object, its centroid, the stagnation point and the centre of the channel, respectively.
If the two outlets of the cross channel have the same flow resistance, the inlet streams are equally bifurcated to the two outlets. If there is a constraint on one of the outlets (e.g. the constraint shown at the bottom outlet in Fig. 13a), the outlet flow rates will no longer be the same because of the inequality in the flow resistance of the outlets. Moreover, the stagnation point is displaced towards the outlet with higher flow resistance. Therefore, if the cross-sectional area of the outlets constantly varies, the stagnation point will be continuously repositioned along the extensional axis. In this work, repositioning the stagnation point (shown in Fig. 13b) is conducted by implementing a fixed-width constriction in one of the outlets and a variable-height constriction on the other (Tanyeri et al. 2010). The former is shown in the bottom outlet and the latter is shown in red on the top outlet in Fig. 13a. Figure 13c shows the section A–A of the variable-height constriction, known as an on-chip membrane valve (in Fig. 13a), and how its deformation constricts the height of the fluidic channel. 
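The outlet bifurcation described above behaves like current splitting between parallel resistors: for a common pressure drop across the junction, each outlet carries a flow rate inversely proportional to its hydraulic resistance. A minimal sketch, with made-up resistance and flow values that are not taken from the paper:

```python
def outlet_split(total_flow, r_top, r_bottom):
    """Split an incoming flow between two parallel outlets with hydraulic
    resistances r_top and r_bottom, using Q_i proportional to 1/R_i."""
    g_top, g_bottom = 1.0 / r_top, 1.0 / r_bottom  # hydraulic conductances
    q_top = total_flow * g_top / (g_top + g_bottom)
    return q_top, total_flow - q_top

# Equal resistances -> equal bifurcation of the inlet streams
assert outlet_split(100.0, 1.0, 1.0) == (50.0, 50.0)

# Constricting the top outlet (raising its resistance) diverts flow to the
# bottom outlet, which is what displaces the stagnation point.
q_top, q_bottom = outlet_split(100.0, 3.0, 1.0)
print(q_top, q_bottom)  # approximately 25.0 and 75.0
```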
The control channel is filled with pressurised gas, which results in the deflection of the thin membrane, thereby constricting the height of the fluidic channel. It is called the variable constriction because changes in the gas pressure lead to changes in the deflection of the membrane, and consequently constrict the channel at different heights.
A.2: Automation of the stagnation point repositioning to confine a target object at the channel centre
In the microfluidic hydrodynamic trap, when a target object enters the cross-slot region (shown in Fig. 14a), a camera captures and streams the image of the cross-junction to a computer. Using image-processing methods, the shape of the object is determined and the position of the object centroid is computed. Based on the location of the object in the extensional direction, the ratio of the outlet flow rates is changed and consequently the stagnation point is repositioned. The repositioning is carried out using the on-chip membrane valve, which allows the flow streamlines to be manipulated. This manipulation places the target object between the stagnation point and the centre of the channel (shown in Fig. 14b), thereby exerting a hydrodynamic force on the target object towards the trap centre. The manipulation results in placing the object on a new streamline and moving the object towards the channel centre. All of these steps are repeated until both the target and the stagnation point converge at the centre of the trap/channel (shown in Fig. 14c), and at this stage the pressure stays unchanged. However, if due to an external or internal disturbance the target object is displaced from the centre, the same procedure is re-applied, forcing the object to return to the centre.
A.3: Feedback control algorithm
A linear feedback control algorithm was implemented to update the pressure in the control layer and displace the stagnation point using (Tanyeri et al. 
2010):
$$\begin{aligned} P_{\text {val}} = P_{\text {tc}} + K_{\text {P}} ~ K_{\text {C}} ~ e_{\text {tt}}, \end{aligned}$$ (1)
where \(P_{\text {val}}\) is the pressure in the control channel (on-chip valve) and \(P_{\text {tc}}\) is the pressure required to keep the stagnation point at the trap centre. The proportional gain is denoted by \(K_{\text {P}}\); it can be considered a constant (\(K_{\text {P}} = -\ 1.5\)), and \(K_{\text {C}}\) is a factor that converts pressure to distance. If there is a linear relation between the pressure and the stagnation point position, \(K_{\text {C}}\) will be a constant. The offset error (\(e_{\text {tt}}\)) in Eq. 1 is defined as
$$\begin{aligned} e_{\text {tt}} = Y_{\text {tp}}-Y_{\text {tc}}, \end{aligned}$$ (2)
which is the distance between the position of the target (\(Y_{\text {tp}}\)) and the trap centre (\(Y_{\text {tc}}\)) along the extensional axis (shown in Fig. 14a). Figure 15 depicts the flowchart of the steps employed for confining a target object in the centre of the cross-slot junction.
Figure 15 caption: Algorithm for confining a target object in the cross-slot region of the microchannel, which is schematically shown in Fig. 14
Figure 16 caption: Implementation of the image-processing steps that were used for foreground detection (detecting the A. circinalis filament in the microchannel)
Figure 17 caption: An example image of 150 overlaid recordings (at the flow rate of \(50\, \upmu {\text {L/h}}\) and control channel pressure of about 8 psi) that were used to determine the calibration curve of the microfluidic trap
Figure 18 caption: Experimentally determined calibration curve of the microfluidic trap. The green line is the trend line fitted to the linear region, and the slope of this line represents the conversion factor (\(K_{\text {C}}\))
A.4: Inline image processing
In this work, the foreground detection was carried out using the "Computer Vision System" toolbox in MATLAB. 
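The proportional control law above, \(P_{\text {val}} = P_{\text {tc}} + K_{\text {P}} K_{\text {C}} e_{\text {tt}}\), can be sketched in a few lines. This is an illustration, not the authors' implementation; only the gain \(K_{\text {P}} = -1.5\) is taken from the text, while the conversion factor and pressures below are made-up values:

```python
def valve_pressure(y_target, y_centre, p_centre, k_p=-1.5, k_c=1.0):
    """Proportional feedback law: P_val = P_tc + K_P * K_C * e_tt,
    with offset error e_tt = Y_tp - Y_tc along the extensional axis."""
    e_tt = y_target - y_centre
    return p_centre + k_p * k_c * e_tt

# Target sitting at the trap centre: the holding pressure is unchanged
assert valve_pressure(0.0, 0.0, 8.0) == 8.0

# Target displaced by +2 (arbitrary units): the negative gain lowers the
# valve pressure, shifting the stagnation point to pull the target back.
print(valve_pressure(2.0, 0.0, 8.0))  # 5.0
```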
This includes acquiring the cross-region as the background and comparing it with the image of the target in the cross-region. Image comparison was followed by determining whether each pixel belongs to the background (channel) or the foreground (target). Before and after the foreground detection, image contrast enhancement and some morphological operations were carried out to improve the accuracy of locating the target's centroid. Figure 16 depicts the implementation of the image-processing steps used in the present work to locate the centroid of the A. circinalis filament. First, the background image of the region of interest (ROI) was acquired. This was conducted prior to the filament entering the ROI (partially shown in Fig. 16a). Figure 16b shows a filament of the cyanobacteria in the ROI. Upon acquiring the image of the target (filament), the image contrast was enhanced to increase the pixel intensity of the filament (shown in Fig. 16c). Afterwards, the foreground was detected. Foreground detection requires defining a threshold parameter, which depends on the light condition, to determine whether each pixel belongs to the channel or the target. The background was then subtracted from the foreground, thereby obtaining the target object mask (the binary image shown in Fig. 16d). This step was followed by removing small objects and noise, by defining a certain threshold and removing the blobs (connected white regions) smaller than the threshold (shown in Fig. 16e). Due to the low resolution of the camera and the shape of the objects, detecting the connections of the filaments is challenging. As can be seen, the connections are neither observable in Fig. 16b nor detectable in Fig. 16e. To alleviate this issue, we used a morphological close operation with a \(7\times 7\)-pixel square-shaped structuring element to increase the connectivity of the blobs without over-fattening them. As stated in Smith et al. (1997), a morphological close operation on a binary image consists of a morphological erosion (making the blob smaller) followed by a morphological dilation (making the blob larger). Figure 16f shows the post-processed mask of the filament after applying the morphological close operation. Eventually, the centroid of the filament was determined using the blob analysis of the MATLAB Computer Vision System toolbox.
A.5: Calibrating the microfluidic trap
The calibration curve of the microfluidic trap is the variation of the stagnation point position at different control channel pressures. The linear region of this curve is used to determine the conversion factor (\(K_{\text {C}}\) in Eq. 1). To experimentally determine this curve, a flow visualisation experiment was conducted, which delivered the fluid seeded with tracer particles to the cross-junction. By capturing images of the cross-region, the flow is visualised. In this experiment, the flow was illuminated using the Nikon microscope Epi-fl illuminator (Mercury lamp). The epifluorescence imaging configuration used in the micro-PIV experiments of Akbaridoust et al. (2018) was employed to acquire the images, except that the PIV camera was replaced by the control camera. The backgrounds (out-of-focus particles) of the acquired images were then removed (Akbaridoust et al. 2018) and 150 images were overlaid. This procedure was repeated at different pressures in the control channel, and for each set of images the stagnation point was determined manually/visually with an accuracy of one pixel. One pixel corresponds to one micron for the control camera used in this experiment. Figure 17 shows an example of 150 overlaid images at the flow rate of \(50~\upmu {\text {L/h}}\) and control channel pressure of about 8 psi. 
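The blob-analysis step described in Sect. A.4 (discard connected components smaller than a threshold, then take the centroid of the surviving blob) can be sketched without any toolbox. This pure-Python version is an illustrative re-implementation, not the MATLAB code used in the work:

```python
def find_blobs(mask):
    """4-connected components of a binary mask given as a list of 0/1 rows."""
    rows, cols = len(mask), len(mask[0])
    seen, blobs = set(), []
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0][c0] and (r0, c0) not in seen:
                stack, blob = [(r0, c0)], []
                seen.add((r0, c0))
                while stack:  # depth-first flood fill of one component
                    r, c = stack.pop()
                    blob.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols \
                                and mask[nr][nc] and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            stack.append((nr, nc))
                blobs.append(blob)
    return blobs

def largest_blob_centroid(mask, min_size=2):
    """Drop blobs smaller than min_size pixels (noise removal), then return
    the (row, col) centroid of the largest remaining blob. Assumes at least
    one blob survives the size filter."""
    blobs = [b for b in find_blobs(mask) if len(b) >= min_size]
    blob = max(blobs, key=len)
    n = len(blob)
    return (sum(r for r, _ in blob) / n, sum(c for _, c in blob) / n)

mask = [[0, 0, 0, 0, 1],   # lone pixel in the corner: noise to be discarded
        [0, 1, 1, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 0, 0, 0, 0]]
print(largest_blob_centroid(mask))  # (1.5, 1.5)
```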
The distances of the stagnation point position from the centre lines of the channel along the extensional and compressional axes are represented by \(Y_o\) and \(X_o\), respectively. As can be seen, the stagnation point is not located on the vertical centre line, due to the flow added to one of the inlets where the sample injection port is located. However, \(X_o\ne 0\) only occurs when the injection port is open. In trapping experiments, once a target is trapped, the injection port is manually closed and the stagnation point gradually moves onto the centre line along the compressional axis. Figure 18 shows the calibration curve that was experimentally determined (at \(50~\upmu {\text {L/h}}\)) at different pressures in the control channel. The slope of the fitted trend line represents the conversion factor (\(K_{\text {C}}\)).
Appendix B: Methodology of simultaneous micro-PIV measurements and trapping
In principle, the use of volume illumination in PIV measurements results in a bright background in the images that drastically reduces the signal-to-noise ratio and the correlation peak detectability. Therefore, in micro-PIV, the epifluorescence imaging technique (i.e. illuminating the fluorescent particles by green pulsed light and capturing the red light from them) is implemented to alleviate this issue (Santiago et al. 1998). In hydrodynamic microfluidic trap experiments (in this work and also those of Schroeder and co-workers: Johnson-Chavarria et al. 2011, 2014; Latinwo et al. 2014; Li et al. 2015; Shenoy et al. 2015; Tanyeri and Schroeder; Tanyeri et al. 2010, 2011), a constant white light source (here ambient room lights) was used for imaging. Hence, simultaneous trapping of objects and micro-PIV measurements is challenging because both require their own illumination, optics and camera focused on the micron-sized objects. 
Simply combining the two experimental setups (the micro-PIV and microfluidic trap setups) to conduct simultaneous micro-PIV measurements and trapping in the microfluidic device leads to two major problems. The combination of the two setups is shown in Fig. 19. The first issue is the disruption of the control camera by the laser pulses. The second issue is the production of an extremely bright background in the images acquired by the PIV camera sensor due to the constant white light source. The former prevents the detection of a target object in the cross-slot channel. The top inset image in Fig. 19a shows an example of an image captured by the control camera when the laser was pulsing. As can be seen, the laser pulse interferes with the image-processing techniques for object detection. The latter produces an extremely bright background that precludes correlating the recorded images. The bottom inset image in Fig. 19b shows an image of the particles captured by the PIV camera while the constant white light was illuminating the flow.
Figure 19 caption: Schematic of the combination of micro-PIV and active control setups and the resulting imaging problems in the images acquired by the PIV and the control camera
To tackle the first issue, we replaced the continuous white light with a continuous blue light source (i.e. filtering all the wavelengths except blue), while to address the second issue, we filtered all the wavelengths (except blue) incident on the control camera sensor. Using this arrangement, the control camera sensor does not capture the laser pulses, and the PIV camera does not acquire the bright background caused by the constant light source. Figure 4 shows the schematic of the combined micro-PIV and active control system setups, where the two problems that were pointed out are solved using two single-band (blue) bandpass filters. The bandpass filters block all wavelengths outside the range 420–480 nm. 
© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Author affiliations: Department of Mechanical Engineering, University of Melbourne, Parkville, Australia; Department of Chemical and Biomolecular Engineering, University of Melbourne, Parkville, Australia

Akbaridoust, F., Philip, J., Hill, D.R.A. et al. Exp Fluids (2018) 59: 183. https://doi.org/10.1007/s00348-018-2637-6

Received 01 August 2018 · Revised 21 September 2018 · Accepted 23 October 2018 · First Online 09 November 2018
\begin{definition}[Definition:Sterling/Florin] The '''florin''' is an obsolete unit of the Sterling monetary system. {{begin-eqn}} {{eqn | o = | r = 1 | c = '''florin''' }} {{eqn | r = 2 | c = shillings }} {{eqn | r = \frac 1 {10} | c = pound sterling }} {{eqn | r = 24 | c = (old) pence }} {{eqn | r = 10 | c = (new) pence }} {{end-eqn}} Category:Definitions/Sterling \end{definition}
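As a quick arithmetic check of the conversions above, a sketch using the standard Sterling relations (12 old pence to the shilling, 20 shillings to the pound, 100 new pence to the pound; variable names are illustrative):

```python
SHILLINGS_PER_FLORIN = 2
OLD_PENCE_PER_SHILLING = 12
SHILLINGS_PER_POUND = 20
NEW_PENCE_PER_POUND = 100

florin_in_pounds = SHILLINGS_PER_FLORIN / SHILLINGS_PER_POUND        # 1/10 pound
florin_in_old_pence = SHILLINGS_PER_FLORIN * OLD_PENCE_PER_SHILLING  # 24 old pence
florin_in_new_pence = florin_in_pounds * NEW_PENCE_PER_POUND         # 10 new pence
print(florin_in_pounds, florin_in_old_pence, florin_in_new_pence)
```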
Maximum a posteriori estimation

In Bayesian statistics, a maximum a posteriori probability (MAP) estimate is an estimate of an unknown quantity that equals the mode of the posterior distribution. The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data. It is closely related to the method of maximum likelihood (ML) estimation, but employs an augmented optimization objective which incorporates a prior distribution (quantifying the additional information available through prior knowledge of a related event) over the quantity one wants to estimate. MAP estimation can therefore be seen as a regularization of maximum likelihood estimation.

Description

Assume that we want to estimate an unobserved population parameter $\theta $ on the basis of observations $x$. Let $f$ be the sampling distribution of $x$, so that $f(x\mid \theta )$ is the probability of $x$ when the underlying population parameter is $\theta $.
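A minimal numerical sketch of MAP estimation using these definitions (the estimator itself is defined just below), assuming a binomial sampling model for $x$ so the posterior mode can be located by a brute-force grid search; all names and values here are illustrative, not from the article:

```python
import math

def binom_pmf(k, n, p):
    """Binomial likelihood f(x | theta) for k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def map_estimate(k, n, prior, grid_size=10001):
    """Grid-search the mode of the unnormalised posterior f(x | theta) * g(theta)."""
    best_p, best_val = 0.0, -1.0
    for i in range(grid_size):
        p = i / (grid_size - 1)
        val = binom_pmf(k, n, p) * prior(p)
        if val > best_val:
            best_p, best_val = p, val
    return best_p

# With a uniform prior the MAP estimate coincides with the MLE k/n = 0.7;
# an informative Beta(2, 2)-shaped prior pulls the estimate towards 1/2 (here to ~2/3).
print(map_estimate(7, 10, lambda p: 1.0))          # 0.7
print(map_estimate(7, 10, lambda p: p * (1 - p)))  # ~0.6667
```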
Then the function:

$\theta \mapsto f(x\mid \theta )\!$

is known as the likelihood function and the estimate:

${\hat {\theta }}_{\mathrm {MLE} }(x)={\underset {\theta }{\operatorname {arg\,max} }}\ f(x\mid \theta )\!$

is the maximum likelihood estimate of $\theta $.

Now assume that a prior distribution $g$ over $\theta $ exists. This allows us to treat $\theta $ as a random variable, as in Bayesian statistics. We can calculate the posterior distribution of $\theta $ using Bayes' theorem:

$\theta \mapsto f(\theta \mid x)={\frac {f(x\mid \theta )\,g(\theta )}{\int _{\Theta }f(x\mid \vartheta )\,g(\vartheta )\,d\vartheta }}\!$

where $g$ is the density function of $\theta $ and $\Theta $ is the domain of $g$.

The method of maximum a posteriori estimation then estimates $\theta $ as the mode of the posterior distribution of this random variable:

${\begin{aligned}{\hat {\theta }}_{\mathrm {MAP} }(x)&={\underset {\theta }{\operatorname {arg\,max} }}\ f(\theta \mid x)\\&={\underset {\theta }{\operatorname {arg\,max} }}\ {\frac {f(x\mid \theta )\,g(\theta )}{\int _{\Theta }f(x\mid \vartheta )\,g(\vartheta )\,d\vartheta }}\\&={\underset {\theta }{\operatorname {arg\,max} }}\ f(x\mid \theta )\,g(\theta ).\end{aligned}}\!$

The denominator of the posterior distribution (the so-called marginal likelihood) is always positive and does not depend on $\theta $, and therefore plays no role in the optimization. Observe that the MAP estimate of $\theta $ coincides with the ML estimate when the prior $g$ is uniform (i.e., $g$ is a constant function).

When the loss function is of the form

$L(\theta ,a)={\begin{cases}0,&{\text{if }}|a-\theta |<c,\\1,&{\text{otherwise}},\\\end{cases}}$

as $c$ goes to 0, the Bayes estimator approaches the MAP estimator, provided that the distribution of $\theta $ is quasi-concave.[1] But generally a MAP estimator is not a Bayes estimator unless $\theta $ is discrete.

Computation

MAP estimates can be computed in several ways: 1.
Analytically, when the mode(s) of the posterior distribution can be given in closed form. This is the case when conjugate priors are used. 2. Via numerical optimization such as the conjugate gradient method or Newton's method. This usually requires first or second derivatives, which have to be evaluated analytically or numerically. 3. Via a modification of an expectation-maximization algorithm. This does not require derivatives of the posterior density. 4. Via a Monte Carlo method using simulated annealing.

Limitations

While only mild conditions are required for MAP estimation to be a limiting case of Bayes estimation (under the 0–1 loss function),[1] it is not very representative of Bayesian methods in general. This is because MAP estimates are point estimates, whereas Bayesian methods are characterized by the use of distributions to summarize data and draw inferences: thus, Bayesian methods tend to report the posterior mean or median instead, together with credible intervals. This is both because these estimators are optimal under squared-error and linear-error loss respectively—which are more representative of typical loss functions—and for a continuous posterior distribution there is no loss function which suggests the MAP is the optimal point estimator. In addition, the posterior distribution may often not have a simple analytic form: in this case, the distribution can be simulated using Markov chain Monte Carlo techniques, while optimization to find its mode(s) may be difficult or impossible. In many types of models, such as mixture models, the posterior may be multi-modal. In such a case, the usual recommendation is that one should choose the highest mode: this is not always feasible (global optimization is a difficult problem), nor in some cases even possible (such as when identifiability issues arise). Furthermore, the highest mode may be uncharacteristic of the majority of the posterior.
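As an illustration of the numerical-optimization route listed under Computation, a sketch of Newton's method on the negative log posterior for a Bernoulli likelihood with a Beta(a, b) prior, whose posterior mode (k + a - 1)/(n + a + b - 2) is available in closed form for comparison; all names and values are illustrative, not from the article:

```python
def newton_map(k, n, a, b, p0=0.5, iters=50):
    """Newton's method on the negative log posterior of a Beta-Bernoulli model."""
    p = p0
    for _ in range(iters):
        grad = -(k + a - 1) / p + (n - k + b - 1) / (1 - p)       # first derivative
        hess = (k + a - 1) / p**2 + (n - k + b - 1) / (1 - p)**2  # second derivative
        p -= grad / hess
        p = min(max(p, 1e-9), 1 - 1e-9)  # keep iterates inside (0, 1)
    return p

# Closed-form check: the mode of Beta(k + a, n - k + b) is (k + a - 1) / (n + a + b - 2).
print(newton_map(k=7, n=10, a=2, b=2))  # ~0.6667, i.e. 8/12
```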
Finally, unlike ML estimators, the MAP estimate is not invariant under reparameterization. Switching from one parameterization to another involves introducing a Jacobian that impacts on the location of the maximum.[2]

As an example of the difference between the Bayes estimators mentioned above (mean and median estimators) and using a MAP estimate, consider the case where there is a need to classify inputs $x$ as either positive or negative (for example, loans as risky or safe). Suppose there are just three possible hypotheses about the correct method of classification, $h_{1}$, $h_{2}$ and $h_{3}$, with posteriors 0.4, 0.3 and 0.3 respectively. Suppose that, given a new instance $x$, $h_{1}$ classifies it as positive, whereas the other two classify it as negative. Using the MAP estimate for the correct classifier $h_{1}$, $x$ is classified as positive, whereas the Bayes estimators would average over all hypotheses and classify $x$ as negative.

Example

Suppose that we are given a sequence $(x_{1},\dots ,x_{n})$ of IID $N(\mu ,\sigma _{v}^{2})$ random variables and a prior distribution of $\mu $ is given by $N(\mu _{0},\sigma _{m}^{2})$. We wish to find the MAP estimate of $\mu $. Note that the normal distribution is its own conjugate prior, so we will be able to find a closed-form solution analytically.
The function to be maximized is then given by

$f(\mu )f(x\mid \mu )=\pi (\mu )L(\mu )={\frac {1}{{\sqrt {2\pi }}\sigma _{m}}}\exp \left(-{\frac {1}{2}}\left({\frac {\mu -\mu _{0}}{\sigma _{m}}}\right)^{2}\right)\prod _{j=1}^{n}{\frac {1}{{\sqrt {2\pi }}\sigma _{v}}}\exp \left(-{\frac {1}{2}}\left({\frac {x_{j}-\mu }{\sigma _{v}}}\right)^{2}\right),$

which is equivalent to minimizing the following function of $\mu $:

$\sum _{j=1}^{n}\left({\frac {x_{j}-\mu }{\sigma _{v}}}\right)^{2}+\left({\frac {\mu -\mu _{0}}{\sigma _{m}}}\right)^{2}.$

Thus, we see that the MAP estimator for $\mu $ is given by

${\hat {\mu }}_{\mathrm {MAP} }={\frac {\sigma _{m}^{2}\,n}{\sigma _{m}^{2}\,n+\sigma _{v}^{2}}}\left({\frac {1}{n}}\sum _{j=1}^{n}x_{j}\right)+{\frac {\sigma _{v}^{2}}{\sigma _{m}^{2}\,n+\sigma _{v}^{2}}}\,\mu _{0}={\frac {\sigma _{m}^{2}\left(\sum _{j=1}^{n}x_{j}\right)+\sigma _{v}^{2}\,\mu _{0}}{\sigma _{m}^{2}\,n+\sigma _{v}^{2}}},$

which turns out to be a linear interpolation between the prior mean and the sample mean weighted by their respective covariances.

The case of $\sigma _{m}\to \infty $ is called a non-informative prior and leads to an ill-defined a priori probability distribution; in this case ${\hat {\mu }}_{\mathrm {MAP} }\to {\hat {\mu }}_{\mathrm {MLE} }.$

References

1. Bassett, Robert; Deride, Julio (2018-01-30). "Maximum a posteriori estimators as a limit of Bayes estimators". Mathematical Programming: 1–16. arXiv:1611.05917. doi:10.1007/s10107-018-1241-0. ISSN 0025-5610.
2. Murphy, Kevin P. (2012). Machine Learning: A Probabilistic Perspective. Cambridge, Massachusetts: MIT Press. pp. 151–152. ISBN 978-0-262-01802-9.

• DeGroot, M. (1970). Optimal Statistical Decisions. McGraw-Hill. ISBN 0-07-016242-5.
• Sorenson, Harold W. (1980). Parameter Estimation: Principles and Problems. Marcel Dekker. ISBN 0-8247-6987-2.
• Hald, Anders (2007). "Gauss's Derivation of the Normal Distribution and the Method of Least Squares, 1809".
A History of Parametric Statistical Inference from Bernoulli to Fisher, 1713–1935. New York: Springer. pp. 55–61. ISBN 978-0-387-46409-1.
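The closed-form estimator in the worked example above can be checked against a direct numerical minimization of the quadratic objective; a sketch in which mu0, sigma_m, sigma_v, and the data x are invented for illustration:

```python
mu0, sigma_m, sigma_v = 0.0, 2.0, 1.0  # prior mean and the two standard deviations
x = [1.2, 0.8, 1.5, 0.9, 1.1]          # hypothetical IID observations
n = len(x)

# Closed-form MAP from the final formula of the example.
mu_map = (sigma_m**2 * sum(x) + sigma_v**2 * mu0) / (sigma_m**2 * n + sigma_v**2)

# Direct check: minimise sum((x_j - mu)/sigma_v)^2 + ((mu - mu0)/sigma_m)^2 on a grid.
def objective(mu):
    return sum(((xj - mu) / sigma_v) ** 2 for xj in x) + ((mu - mu0) / sigma_m) ** 2

mu_grid = min((i / 10000 for i in range(-20000, 20001)), key=objective)
print(mu_map)   # 22/21 ~ 1.0476, between the prior mean 0 and the sample mean 1.1
print(mu_grid)  # agrees with the closed form to grid resolution
```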
The cost of severe haemophilia in Europe: the CHESS study Jamie O'Hara1, David Hughes1, Charlotte Camp ORCID: orcid.org/0000-0003-4250-42181, Tom Burke2, Liz Carroll3 & Daniel-Anibal Garcia Diego4 Severe haemophilia is associated with major psychological and economic burden for patients, caregivers, and the wider health care system. This burden has been quantified and documented for a number of European countries in recent years. However, few studies have taken a standardised methodology across multiple countries simultaneously, and sought to amalgamate all three levels of burden for severe disease. The overall aim of the 'Cost of Haemophilia in Europe: a Socioeconomic Survey' (CHESS) study was to capture the annualised economic and psychosocial burden of severe haemophilia in five European countries. A cross-section of haemophilia specialists (surveyed between January and April 2015) provided demographic and clinical information and 12-month ambulatory and secondary care activity for patients via an online survey. In turn, patients provided corresponding direct and indirect non-medical cost information, including work loss and out-of-pocket expenses, as well as information on quality of life and adherence. The direct and indirect costs for the patient sample were calculated and extrapolated to population level. Clinical reports for a total of 1,285 patients were received. Five hundred and fifty-two patients (43% of the sample) provided information on indirect costs and health-related quality of life via the PSC. The total annual cost of severe haemophilia across the five countries for 2014 was estimated at EUR 1.4 billion, or just under EUR 200,000 per patient. The highest per-patient costs were in Germany (mean EUR 319,024) and the lowest were in the United Kingdom (mean EUR 129,365), with a study average of EUR 199,541. As expected, consumption of clotting factor replacement therapy represented the vast majority of costs (up to 99%). 
Indirect costs are driven by patient and caregiver work loss. The results of the CHESS study reflect previous research findings suggesting that costs of factor replacement therapy account for the vast majority of the cost burden in severe haemophilia. However, the importance of the indirect impact of haemophilia on the patient and family should not be overlooked. The CHESS study highlights the benefits of observational study methodologies in capturing a 'snapshot' of information for patients with rare diseases. Haemophilia is a genetic disorder characterised by a deficiency of a clotting factor in the blood, leading to prolonged bleed events. The disease is carried on the X chromosome and primarily affects males, though female carriers of the gene may exhibit symptoms of mild haemophilia. The two forms of the condition are Haemophilia A (Factor VIII (FVIII) deficiency) and Haemophilia B (Factor IX (FIX) deficiency); Haemophilia A is approximately four times more common than Haemophilia B [1]. The global incidence of haemophilia is approximately 1 per 4,000–5,000 male births; in a given year, approximately 400 boys are born with haemophilia in Europe [1]. Individuals with severe haemophilia – representing approximately one-third of the haemophilia population in Europe [2, 3] – have factor levels less than 1% of that expected in a healthy person. Such individuals will experience recurrent, spontaneous bleeds, often in the absence of any trauma event [4]; four-fifths of bleed events occur within the musculoskeletal system [5]. Approximately 90% of people with severe haemophilia experience chronic haemophilic joint disease, characterised by chronic inflammation and progressive joint deformity, in one or more major joints by the age of 30 [5]. 
As well as joint stiffness and diminished range of motion, individuals with haemophilia experience significant acute pain during bleed events and chronic pain due to arthropathy, leading to disability and impaired quality of life in more than half of cases [6]. Prior to the 1960s, when clotting factor replacement therapy (CFRT) became widely available, life expectancy of persons with haemophilia (PWH) was very low (<30 years) in comparison to that of the general male population, with haemorrhage as the most prevalent cause of death [2]. Innovations in factor production mean that PWH in nations with developed healthcare systems can now expect a near-normal quality of life and an average lifespan no more than 10 years shorter than that of an unaffected individual [3, 4]. In this current "golden era" of haemophilia care [5], management of age-related conditions has become a novel necessity for clinicians and PWH [6, 7]. Nevertheless, a continuing major risk associated with the use of CFRT is the development of inhibitors, so-called due to the development of antibodies that inhibit factor uptake. Affected individuals experience poor bleed control and subsequently higher levels of morbidity and mortality [8] and reduced HRQoL [9, 10]. The cost of treating an individual with an inhibitor can exceed EUR 1 million per year, with regimens of immune tolerance induction (ITI) therapy lasting more than 3 years in rare cases [11, 12]. The objective of the CHESS study was to estimate and extrapolate resource use, costs, and health-related quality of life (HRQOL) for adults with severe haemophilia in the five largest European economies (EU5: France, Germany, Italy, Spain and the United Kingdom (UK)). 
The psychological, societal, and economic burden of haemophilia has been documented for a number of European countries [13,14,15,16,17]; few studies have taken a standardised methodology across multiple countries simultaneously [18], and sought to amalgamate all three levels of burden for severe disease. The overall aim of the CHESS study was to capture the annualised economic and psychosocial burden of severe haemophilia in the EU5.

Study cohort

The study population is a sample of adult patients (18 years and over) with severe inherited haemophilia A or B (FVIII/FIX level <1 IU dL−1), drawn approximately in proportion to the population of individuals with haemophilia in each of the five participating countries. Individuals with acquired haemophilia or other clotting factor deficiencies (e.g. von Willebrand disease (VWD), haemophilia C) were excluded, as symptoms of VWD are generally comparable to those of mild or moderate haemophilia [19], and our sample size would preclude meaningful conclusions for other rare coagulation disorders. A retrospective, cross-sectional methodology was employed. Data were collected by means of two questionnaires, designed specifically for specialists and patients: the web-based case record forms (CRFs) were used to collect information on direct medical resource utilisation and clinical data based on recorded notes, and the paper patient self-completion form (PSC), given to patients following the consultation, provided complementary socio-economic information. One hundred and thirty-nine haematologists (and haemophilia care providers (HCPs) in France) based in hospitals and clinics completed up to eight patient CRFs, with a small number providing information for up to 16 patients. In order to minimise the risk of selection bias, physicians were encouraged to recruit the next eligible patients with whom they consulted, irrespective of their reason for consultation. All patient participants provided informed consent.
The study protocol was approved by the Research Ethics Sub-Committee of the Faculty of Health and Social Care within the University of Chester. The approval stipulated that the study was to be carried out in correspondence with regional and relevant guidelines. Data collected from clinicians included clinical, economic, and demographic information, while data collected from patients included indirect and direct non-medical resource use, HRQOL (via the EuroQol EQ-5D-3L), work productivity impact (via the Work Productivity and Activity Impairment (WPAI) Questionnaire), and therapy adherence (via the 8-item Morisky Medication Adherence Scale (MMAS-8)). Data were collected between December 2014 and April 2015 and captured a period of 12 months retrospectively. Estimates of healthcare utilisation and costs were then calculated for the 12-month period. The online format of the CRFs ensured that numeric responses were kept within reasonable boundaries, and that infeasible responses (such as two mutually exclusive answers to the same question) were kept to a minimum. Nevertheless, a small amount of post-hoc data processing was conducted based on expert clinical guidance. All direct medical and non-medical costs were sourced from publicly available data (Table 1). The choice of resources to be included in the CHESS study was defined by the societal and participant/family perspectives [20]. To investigate the distribution of direct costs, resource use was separated into four categories:

Table 1 National costs for CHESS resource units

Ambulatory activity, including: haemophilia-related visits to haematologists and other specialists, paramedical practitioners (nurse specialists, physiotherapists, diet and nutritional support, etc.), and all tests and procedures (e.g.
blood tests, diagnostic imaging);

Haemophilia-related admissions to hospital, based on length of stay, including admission to ICU, and surgical procedure/diagnosis (where applicable): bleed-related hospital admissions, and procedures on 'target' joints – defined here as areas of chronic synovitis [21] – including arthroscopy, arthrodesis, arthroplasty, arthrocentesis, and synovectomy;

CFRT; and

Use of a professional (paid) care provider.

All factor consumption was reported by the physician. For on-demand regimens, factor consumption for the most recent 3-month period was annualised. For prophylaxis regimens, mean IU per infusion was multiplied by the weekly infusion rate and annualised.

Direct non-medical and indirect costs were defined based on seven categories of resource use:

Loss of earnings by the patient (due to absenteeism and/or early retirement);

Loss of earnings by an informal caregiver (family or friend) (as above);

Transfer payments (e.g. financial aid, state allowances);

Over-the-counter (OTC) medications;

Other medical device, personal aid, or home alteration costs (e.g. walking aids, orthotics);

Alternative therapies (e.g. yoga/Pilates, massage, nutritionist); and

Transport to/from hospital attendances.

Costs associated with temporary and long-term work absence (including early retirement) were valued using the traditional human capital approach (HCA), due to its benefits in terms of transparency and comparability versus those of the newer friction cost approach (FCA) [22]. Patients who reported early retirement due to haemophilia, or who reported that they were unable to work due to haemophilia in the 3 months prior to responding, were costed at the average annual salary (stratified by country). For patients in employment, days of work lost due to bleeds across the preceding 3 months were reported and extrapolated.
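The annualization described above reduces to simple arithmetic; a sketch in which the factors of 4 and 52 (for 3-month and weekly quantities) are my reading of the text, and all figures are hypothetical rather than CHESS data:

```python
def annual_factor_on_demand(iu_last_3_months):
    """On-demand regimen: annualise the most recent 3-month factor consumption."""
    return iu_last_3_months * 4

def annual_factor_prophylaxis(mean_iu_per_infusion, infusions_per_week):
    """Prophylaxis regimen: mean IU per infusion x weekly infusion rate, annualised."""
    return mean_iu_per_infusion * infusions_per_week * 52

def annual_work_loss_cost(days_lost_last_3_months, daily_wage):
    """Human capital approach: extrapolate reported 3-month absenteeism to a year."""
    return days_lost_last_3_months * 4 * daily_wage

print(annual_factor_prophylaxis(2000, 3))  # 312000 IU per year
print(annual_factor_on_demand(45000))      # 180000 IU per year
print(annual_work_loss_cost(2, 150.0))     # 1200.0 EUR per year
```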
Whilst information regarding comorbidities was captured in the questionnaire, all direct and indirect costs were limited to those resulting directly from haemophilia-related activity. All local currency total costs were converted to Euros using the official conversion rate as of 30th May, 2015. Per-patient costs were calculated by multiplying the quantities of the resource used with the national unit price of each resource. To extrapolate the sample costs to country population level, the per-patient costs were multiplied by national prevalence weights [23]: $$ \mathrm{P}_i \times \mathrm{Q}_i = \mathrm{Cost}_i $$ $$ \mathrm{Cost}_i \times \text{prevalence weight} = \text{population cost} $$ where P = price, Q = resource use, and i = 1–n (n = number of cost items). Patient outcomes were obtained by means of the self-administered EQ-5D-3L, a non-disease-specific instrument covering the areas of mobility, self-care, everyday activities, pain/discomfort and anxiety/depression [24]. A three-level rating scale is applied to each dimension, with a total of 243 possible health states defined. The questionnaire has been validated in many countries in Europe and is commonly used in economic evaluation and health technology assessment [25]. The values or utilities are indicated on a scale on which death has a value of 0 and perfect health a value of 1, with negative values being possible. Validated country-specific adult value sets obtained via the EuroQol website were used, with the exception of Italy, for which values were taken from the literature [26]. The demographic data from CHESS are detailed in Table 2 and the severe haemophilia prevalence data, which were used for extrapolation of sample costs, are presented in Table 3. 
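The extrapolation formula above is likewise straightforward arithmetic; a minimal sketch follows. The unit prices, quantities, and prevalence weight are invented placeholders, not CHESS values.

```python
# Minimal sketch of the cost extrapolation: Cost_i = P_i * Q_i, and
# population cost = per-patient cost * prevalence weight.
# All figures are invented placeholders, not CHESS values.
unit_price = {"clotting_factor_iu": 0.75, "haematologist_visit": 150.0}
annual_quantity = {"clotting_factor_iu": 250_000, "haematologist_visit": 6}

# Per-patient annual cost: sum of P_i * Q_i over all cost items i.
per_patient_cost = sum(unit_price[i] * annual_quantity[i] for i in unit_price)

# Scale up to the national population of severe patients.
prevalence_weight = 1_500  # assumed national patient count
population_cost = per_patient_cost * prevalence_weight

print(per_patient_cost, population_cost)
```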
Reports for a total of 1,285 patients were received – 996 with severe haemophilia A and 289 with severe haemophilia B, representing approximately 12% and 21% of the respective populations. 552 patients (43% of the sample) provided information on indirect costs and HRQOL via the PSC. Patients were an average of 35.9 years of age (standard deviation (SD) 14.9), with the lowest average age in Germany (31.1 years), and the highest in Italy (38.9 years). The majority of patients recruited in the study (57%) were receiving CFRT via a prophylaxis regimen; 5% of patients were diagnosed with an inhibitor at the time of the study. Coinfection with HIV and/or HCV was 3% and 5%, respectively. Mean EQ-5D-3L index score ranged from 0.59 in the UK to 0.90 in Germany (pooled average 0.76). Table 2 CHESS demographic data Table 3 Prevalence of severe haemophilia A and B in CHESS countries (all ages) Total economic burden of illness Total 1-year costs of severe haemophilia for the five countries were estimated to be EUR 1.55 billion (2014 values) (Table 4), with per-country costs ranging from EUR 93.8 million in Spain to EUR 700.6 million in Germany. Annual per-patient costs across the pooled sample were estimated to be EUR 199,541, ranging from EUR 129,365 in the UK to EUR 319,024 in Germany. Direct costs were estimated to be EUR 1.38 billion, representing approximately 97% of total costs and an average of EUR 193,363 per annum. Drug expenditure constitutes the vast majority (97.9%) of direct costs, a pattern that is found across all five countries (range 95.3–99.4%) (Fig. 1). Total indirect costs for the five countries were estimated to be EUR 43.3 million, averaging EUR 6,075 per patient across the pooled sample. Table 4 Total economic burden of severe haemophilia in the EU5 (EUR) Distribution of per-patient costs (EUR). 
Total per-patient costs are shown for each of the five countries as well as a study average split by CFRT, other direct medical costs, and indirect costs. With an overall population of more than 300 million, the EU5 is home to approximately 8,123 people with severe haemophilia A and 1,370 people with severe haemophilia B. The CHESS study aimed to provide a snapshot view of the economic, societal, and psychological burden of severe haemophilia in adults within the EU5. We estimated the mean per-patient annual direct cost of severe haemophilia at EUR 173,102, EUR 313,068, EUR 210,025, EUR 132,329, and EUR 116,963 for France, Germany, Italy, Spain, and the UK, approximately 48, 79, 87, 65, and 34 times higher than the mean per-capita health expenditure in these countries, respectively [27]. Direct medical resource use was the predominant cost driver, with more than 97% of direct costs accounted for by CFRT, reflecting previous research suggesting that costs of CFRT account for the vast majority of the cost burden [15, 16, 28]. Annual indirect costs averaged just over EUR 6,000 per patient, with marginal variation between countries (range EUR 4,850–8,651). The total economic and societal cost of severe haemophilia in the EU5 in 2014 was estimated at EUR 1.4 billion. Patients were recruited to the CHESS study via their haematologist or HCP, with a response rate of 43% across the pooled sample. Participation in the study by clinicians and patients was voluntary, and we therefore cannot rule out a degree of selection bias. Because the cost of illness for non-responding patients remains unknown, it is also not possible to assess the extent of the potential bias. The ratio of haemophilia A patients relative to that of haemophilia B in the CHESS cohort (approximately 1:3.5) is narrower than in current epidemiological studies, which place the ratio closer to 1:5 in severe disease [29]. 
The differential burden between haemophilia A and B is a topic of ongoing research; current evidence suggests that any overrepresentation by haemophilia B may generate more conservative estimates of disease burden, due to milder disease symptoms [30]. The proportion of patients receiving prophylaxis in the study cohort (57%) broadly matches that of a recent retrospective audit of haemophilia care in the EU5 [31]; prevalence of diagnosed inhibitors (4.5%) is almost identical to that of a recent study in Portugal (4.7%) [16], though that study included children (more than half of inhibitors present before the age of 9 years [32]), as well as mild and moderate haemophilia patients, who are at lower risk of inhibitor development [33]. Five percent of the CHESS cohort have recorded HCV comorbidity, with the highest rate in Italy (8%); results from the World Federation of Hemophilia Annual Global Survey [34] suggest that prevalence of HCV among haemophilia patients is upwards of one in ten, with almost half of patients in Germany co-diagnosed. Furthermore, approximately 7% of PWH in the EU5 are HIV-positive [34,35,36]; we find prevalence in CHESS to be less than half this amount. We suggest that a sampling bias towards non-infected patients may be present to some extent in this study. The implications of any underrepresentation for our burden estimates are uncertain: the extent to which HIV/HCV coinfection impacts upon costs of haemophilia is of some dispute in the current literature [37, 38], but costs may be more than 50% higher compared to those with haemophilia alone. By profiling consulting patients, we draw from a cohort of patients who consult more frequently – perhaps because they have difficulty with bleed control due to joint degeneration, suboptimal therapy, or poor adherence. It is possible, therefore, that our sample is biased towards older, more costly, and less adherent patients, and that our extrapolated costs may be an overestimate of the actual disease burden. 
However, the age distribution of the CHESS cohort is similar to that of another recent study that sourced data directly from haemophilia treatment centres and data registers across the EU5, Belgium and Sweden [31]. Our study is limited in its cost estimates by the need to use publicly available reimbursement data, particularly for hospital admissions, rather than actual costs to hospital providers and patients. Any divergence between these costs may lead to an under- or over-estimate of the actual realised costs. Recall bias regarding consultations and outpatient visits may be present for clinicians, particularly for consultations with other specialists. Smaller payments by patients may also be overlooked when reporting use of devices, aids, and OTC medications. Costs of informal care provision are limited to that of the primary caregiver; informal care and household burden cost estimates were consequently underestimated for households in which additional individuals contributed to informal care. Randomized controlled trials (RCTs) are the gold standard of evidence generation for health interventions. However, design of RCTs in rare diseases such as haemophilia is difficult due to the limited size of the patient population [39]. Further, there are often disparities between the highly structured environment of an RCT and 'real world' practice, particularly in the absence of formal clinical guidelines. Instead, much of the available evidence for rare diseases is based on larger-scale registries, post-authorisation surveillance studies, and observational studies such as CHESS, which have in recent years produced major insights into the treatment and management of individuals with haemophilia [40]. PWH in the five CHESS countries benefit from major investment of resources for their care and treatment. 
These patients are likely to obtain first access to novel long-acting and gene-based therapies in haemophilia, and it is therefore important to understand the continuing unmet need within these populations and where these new therapies will fit with current treatment approaches. The estimates of resource use and cost burden within this study are less translatable outside of Western Europe, where haemophilia often remains underdiagnosed and undertreated [41]. In countries where access to CFRT is restricted and many PWH experience major joint damage and deformity by adolescence, the burden of disease will be weighted more towards the psychosocial and employment impact resulting from severe disability and premature mortality [42]. The results of the CHESS study underscore the wide variety of costs that accompany a rare disease such as haemophilia and the substantial economic burden carried by patients, caregivers, and the health care systems in these countries. Based on other existing studies of this patient group, we believe that the cost estimates from the CHESS cohort extrapolate well to the population of severe haemophilia patients in the EU5. Though the costs of CFRT account for the vast majority of the burden in this patient group, the importance of the indirect impact of haemophilia on the patient and family should not be overlooked. The CHESS study highlights the benefits of observational study methodologies in capturing a 'snapshot' of information for patients with rare diseases. Berntorp E. Future of haemophilia outcome assessment: registries are key to optimized treatment. J Intern Med. 2016;279(6):498–501. Mejia-Carvajal C, Czapek EE, Valentino LA. Life expectancy in hemophilia outcome. J Thromb Haemost. 2006;4:507–9. Darby SC, Kan SW, Spooner RJ, Giangrande PLF, Hill FGH, Hay CRM, et al. Mortality rates, life expectancy, and causes of death in people with hemophilia A or B in the United Kingdom who were not infected with HIV. Blood. 2007;110:815–25. 
World Federation of Hemophilia. Frequently asked questions about hemophilia [Internet]. 2012 [cited 2015 Jul 31]. Available from: http://www.wfh.org/en/page.aspx?pid=637#Life_expectancy Franchini M, Mannucci PM. Past, present and future of hemophilia: a narrative review. Orphanet J Rare Dis. 2012;7:24. Franchini M, Tagliaferri A, Mannucci PM. The management of hemophilia in elderly patients. Clin Interv Aging. 2007;2:361–8. Mauser-Bunschoten EP, van de Putte DE F, Schutgens REG. Co-morbidity in the ageing haemophilia patient: The down side of increased life expectancy. Haemophilia. 2009;15:853–63. Darby SC, Keeling DM, Spooner RJD, Wan Kan S, Giangrande PLF, Collins PW, et al. The incidence of factor VIII and factor IX inhibitors in the hemophilia population of the UK and their effect on subsequent mortality, 1977–99. J Thromb Haemost. 2004;2:1047–54. Gringeri A, Mantovani LG, Scalone L, Mannucci PM. Cost of care and quality of life for patients with hemophilia complicated by inhibitors: the COCIS Study Group. Blood. 2003;102:2358–63. Scalone L, Mantovani LG, Mannucci PM, Gringeri A. Quality of life is associated to the orthopaedic status in haemophilic patients with inhibitors. Haemophilia. 2006;12:154–62. Rocino A, Cortesi P, Scalone L, Mantovani L, Crea R, Gringeri A. Immune tolerance induction in patients with haemophilia A and inhibitors: effectiveness and cost analysis in an European Cohort (The ITER Study). Haemophilia. 2016;22:96–102. Knight C. Health economics of treating haemophilia A with inhibitors. Haemophilia. 2005;11 Suppl 1:11–7. Coppola A, Cerbone AM, Mancuso G, Mansueto MF, Mazzini C, Zanon E. Confronting the psychological burden of haemophilia. Haemophilia. 2011;17:21–7. Iannone M, Pennick L, Tom A, Cui H, Gilbert M, Weihs K, et al. Prevalence of depression in adults with haemophilia. Haemophilia. 2012;18:868–74. Henrard S, Devleesschauwer B, Beutels P, Callens M, De Smet F, Hermans C, et al. 
The health and economic burden of haemophilia in Belgium: a rare, expensive and challenging disease. Orphanet J Rare Dis. 2014;9:39. Rocha P, Carvalho M, Lopes M, Araujo F. Costs and utilization of treatment in patients with hemophilia. BMC Health Serv Res. 2015;15:484. Kodra Y, Cavazza M, Schieppati A, De Santis M, Armeni P, Arcieri R, et al. The social burden and quality of life of patients with haemophilia in Italy. Blood Transfus (Trasfusione del sangue). 2014;12 Suppl 3:s567–75. Cavazza M, Kodra Y, Armeni P, De Santis M, López-Bastida J, Linertová R, et al. Social/economic costs and quality of life in patients with haemophilia in Europe. Eur J Health Econ. 2016;17:53–65. Laffan M, Brown SA, Collins PW, Cumming AM, Hill FGH, Keeling D, et al. The diagnosis of von Willebrand disease: a guideline from the UK Haemophilia Centre Doctors' Organization. Haemophilia. 2004;10:199–217. Segel J. Cost of illness studies - a primer. North Carolina: RTI International; 2006. Mulder K, Llinás A. The target joint. Haemophilia. 2004;10 Suppl 4:152–6. Hanly P, Timmons A, Walsh PM, Sharp L. Breast and prostate cancer productivity costs: a comparison of the human capital approach and the friction cost approach. Value Health. 2012;15:429–36. Jönsson B. Revealing the cost of Type II diabetes in Europe. Diabetologia. 2002;45:S5–S12. Dolan P. Modeling valuations for EuroQol health states. Med Care. 1997;35:1095–108. Drummond MF, Sculpher MJ, Torrance GW, O'Brien BJ, Stoddart GL. Methods for the economic evaluation of health care programmes. Oxford: Oxford University Press; 2005. Scalone L, Cortesi PA, Ciampichini R, Belisari A, D'Angiolella LS, Cesana G, et al. Italian Population-Based Values of EQ-5D Health States. Value Health. 2013;16:814–22. Organization for Economic Co-operation and Development. OECD.stat (database). Health expenditure and financing. [Internet]. [cited 2016 Mar 12]. 
Available from: http://stats.oecd.org/index.aspx?DataSetCode=SHA Zhou Z-Y, Koerper MA, Johnson KA, Riske B, Baker JR, Ullman M, et al. Burden of illness: direct and indirect costs among persons with hemophilia A in the United States. J Med Econ. 2015;18(6):1–9. World Federation of Hemophilia. Report on the Annual Global Survey 2015. 2015. Mannucci PM, Franchini M. Is haemophilia B less severe than haemophilia A? Haemophilia. 2013;19:499–502. Berntorp E, Dolan G, Hay C, Linari S, Santagostino E, Tosetto A, et al. European retrospective study of real-life haemophilia treatment. Haemophilia. 2016;23(1):105–14. Hay CRM, Palmer B, Chalmers E, Liesner R, Maclean R, Rangarajan S, et al. Incidence of factor VIII inhibitors throughout life in severe hemophilia A in the United Kingdom. Blood. 2011;117:6367–70. Lee CA, Kessler CM, Varon D, Martinowitz U, Heim M, Hay CRM. Factor VIII inhibitors in mild and moderate-severity haemophilia A. Haemophilia. 1998;4:558–63. World Federation of Hemophilia. World Federation of Hemophilia Report on the Annual Global Survey 2014. 2014. Guh S, Grosse SD, McAlister S, Kessler CM, Soucie JM. Health care expenditures for Medicaid-covered males with haemophilia in the United States, 2008. Haemophilia. 2012;18:276–83. Tencer T, Friedman HS, Li-McLeod J, Johnson K. Medical Costs and Resource Utilization for Hemophilia Patients With and Without HIV or HCV Infection. J Manag Care Pharm. 2007;13:790–8. Halpern SD, Karlawish JHT, Berlin JA. The Continuing Unethical Conduct of Underpowered Clinical Trials. JAMA. 2002;288:358. Johnson KA, Zhou Z-Y. Costs of care in hemophilia and possible implications of health care reform. Hematology Am Soc Hematol Educ Program. 2011;2011:413–8. O'Mahony B, Black C. Expanding hemophilia care in developing countries. Semin Thromb Hemost. 2005;31:561–8. Kar A, Phadnis S, Dharmarajan S, Nakade J. Epidemiology & social costs of haemophilia in India. Indian J Med Res. 
2014;140:19–31. Causemann C. Querschnittsuntersuchung zum altersabhängigen Gelenkstatus und Konzentratverbrauch von Patienten mit einer schweren Hämophilie A oder B im Jahr 2010. Bonn: Rheinische Friedrich-Wilhelms-Universität; 2014. Aznar JA, Lucía F, Abad-Franch L, Jiménez-Yuste V, Pérez R, Batlle J, et al. Haemophilia in Spain. Haemophilia. 2009;15:665–75. United Kingdom Haemophilia Centres Doctors' Organisation. Bleeding disorder statistics for April 2013 to March 2014. 2014. Institut de Veille Sanitaire. Réseau Francecoag - Statistiques nationales: Hémophilie A [Internet]. 2015 [cited 2015 Jul 31]. Available from: http://www.francecoag.org/SiteWebPublic/public/stats/stats_page.jsp?stat2=on Abbonizio F, Giampaolo A, Arcieri R, Hassan HJ, AICE (AICE). Registro Nazionale delle Coagulopatie Congenite. Rapporto 2013. Rapp. ISTISAN 15/14. 2015. Institut de Veille Sanitaire. Réseau Francecoag - Statistiques nationales: Hémophilie B [Internet]. 2015 [cited 2015 Jul 31]. Available from: http://www.francecoag.org/SiteWebPublic/public/stats/stats_page.jsp?stat3=on Kind thanks to Ian Jacob (HCD Economics), Andi Mabhala (University of Chester), Declan Noone (European Haemophilia Consortium), and Janice Gidman (University of Chester) for reviewing the manuscript drafts. This study was supported by unrestricted research grants from Swedish Orphan Biovitrum AB (Sobi) and Novo Nordisk. The datasets generated and/or analysed during the current study are held under license by the University of Chester and are not publicly available, but are available from the corresponding author on reasonable request. JOH and DH designed the study protocol. JOH, DH, and CC designed the CRF and PSC. TB and CC analysed the data. JOH and CC wrote the manuscript. LC and DAG provided non-clinical perspective for the study materials and manuscript. All authors read and approved the final manuscript. 
The study was reviewed by the Research Ethics Sub Committee of the Faculty of Health and Social Care within the University of Chester. Patient consent for use of clinical data was not required (as per European Pharmaceutical Market Research Association (EPhMRA) guidelines). Patient consent was obtained via tick box selection for the PSC element of the study. Faculty of Health and Social Care, University of Chester, Chester, UK Jamie O'Hara, David Hughes & Charlotte Camp HCD Economics, Daresbury, UK The Haemophilia Society, London, UK Liz Carroll FedHemo, Madrid, Spain Daniel-Anibal Garcia Diego Correspondence to Charlotte Camp. O'Hara, J., Hughes, D., Camp, C. et al. The cost of severe haemophilia in Europe: the CHESS study. Orphanet J Rare Dis 12, 106 (2017). https://doi.org/10.1186/s13023-017-0660-y
# Basic Linux commands for working with files Before diving into regular expressions, it's important to have a solid understanding of basic Linux commands for working with files. These commands will serve as the foundation for utilizing regular expressions effectively. Some essential commands include: - `ls`: Lists the contents of a directory. - `cd`: Changes the current directory. - `cp`: Copies a file or directory. - `mv`: Moves a file or directory. - `rm`: Deletes a file or directory. - `cat`: Displays the contents of a file. - `grep`: Searches for a pattern in a file. Here's an example of using the `ls` command to list the contents of a directory: ```bash ls ``` This command will display the files and directories in the current directory. ## Exercise List the contents of a specific directory using the `ls` command. ### Solution ```bash ls /path/to/directory ``` # Shell scripting and automating tasks with regular expressions Shell scripting is a powerful tool for automating tasks in Linux. Regular expressions can be used in shell scripts to manipulate files and perform complex text processing tasks. To use regular expressions in a shell script, you'll need to use the `grep`, `sed`, and `awk` text processing tools. These tools can be combined with shell scripting to create powerful automation scripts. Here's an example of using regular expressions in a shell script to search for a pattern in a file: ```bash #!/bin/bash file="example.txt" pattern="search_pattern" if grep -q "$pattern" "$file"; then echo "Pattern found." else echo "Pattern not found." fi ``` This script searches for the specified pattern in the `example.txt` file and prints a message indicating whether the pattern was found or not. ## Exercise Create a shell script that uses regular expressions to search for a pattern in a file and prints the matching lines. 
### Solution ```bash #!/bin/bash file="example.txt" pattern="search_pattern" grep "$pattern" "$file" ``` # Utilizing regular expressions for file manipulation Regular expressions can be used to manipulate files in various ways. This section covers some common applications of regular expressions for file manipulation. - Searching for specific patterns in files. - Replacing text in files. - Extracting specific information from files. Here's an example of using `grep` to search for a pattern in a file: ```bash grep "search_pattern" "example.txt" ``` This command will display all lines in `example.txt` that contain the specified pattern. ## Exercise Create a shell script that searches for a pattern in a file and prints the matching lines. ### Solution ```bash #!/bin/bash file="example.txt" pattern="search_pattern" grep "$pattern" "$file" ``` # Specific examples of using regular expressions in Linux - Searching for all lines containing a specific email address. - Replacing all occurrences of a word with another word. - Extracting all IP addresses from a log file. Here's an example of using `grep` to search for all lines containing a specific email address: ```bash grep "[email protected]" "example.txt" ``` This command will display all lines in `example.txt` that contain the specified email address. ## Exercise Create a shell script that searches for all lines containing a specific email address in a file and prints the matching lines. ### Solution ```bash #!/bin/bash file="example.txt" email="[email protected]" grep "$email" "$file" ``` # Introduction to text processing tools like sed and awk `sed` and `awk` are powerful text processing tools that can be used with regular expressions in Linux. These tools offer a wide range of functionality for manipulating and processing text files. - `sed`: Stream editor, used for editing text files. - `awk`: Text-processing tool, used for field-based text processing. 
Here's an example of using `sed` to replace all occurrences of a word with another word: ```bash sed 's/old_word/new_word/g' "example.txt" ``` This command replaces all occurrences of `old_word` with `new_word` in `example.txt`. ## Exercise Create a shell script that replaces all occurrences of a word with another word in a file. ### Solution ```bash #!/bin/bash file="example.txt" old_word="old_word" new_word="new_word" sed "s/$old_word/$new_word/g" "$file" ``` # Using sed for stream editing As a stream editor, `sed` applies editing commands to text as it flows through a pipeline. Common operations include: - Deleting lines that match a pattern. - Inserting text before or after a matching line. - Changing the case of text (uppercase, lowercase). Here's an example of using `sed` to delete all lines that match a pattern: ```bash sed '/pattern/d' "example.txt" ``` This command deletes all lines in `example.txt` that match the specified pattern. ## Exercise Create a shell script that deletes all lines that match a pattern in a file. ### Solution ```bash #!/bin/bash file="example.txt" pattern="pattern" sed "/$pattern/d" "$file" ``` # Using awk for field-based text processing `awk` splits each line of input into fields, making it well suited to column-oriented processing. Common tasks include: - Summing the values of a specific column. - Calculating the average of a specific column. - Sorting records based on a specific column. Here's an example of using `awk` to sum the values of a specific column: ```bash awk '{sum += $2} END {print sum}' "example.txt" ``` This command sums the values in the second column of `example.txt`. ## Exercise Create a shell script that sums the values of a specific column in a file. ### Solution ```bash #!/bin/bash file="example.txt" column=2 awk "{sum += \$$column} END {print sum}" "$file" ``` # Creating and running shell scripts with regular expressions Shell scripts can be created to automate tasks using regular expressions. This section covers the process of creating and running shell scripts that utilize regular expressions. - Writing a shell script that uses regular expressions for file manipulation. - Running the shell script from the command line. 
- Testing the shell script for correctness. Here's an example of a shell script that uses regular expressions to search for a pattern in a file and prints the matching lines: ```bash #!/bin/bash file="example.txt" pattern="search_pattern" grep "$pattern" "$file" ``` To run this script, save it as a file (e.g., `search_script.sh`) and execute it with the following command: ```bash bash search_script.sh ``` ## Exercise Create a shell script that searches for a pattern in a file and prints the matching lines. ### Solution ```bash #!/bin/bash file="example.txt" pattern="search_pattern" grep "$pattern" "$file" ``` # Advanced regular expression concepts and techniques This section introduces some more advanced regular expression concepts and techniques: - Positive and negative lookaheads. - Non-greedy matching. - Using regular expressions in more complex text processing tasks. Here's an example of using a positive lookahead to search for a pattern that is followed by another pattern: ```bash grep -P 'search_pattern(?=following_pattern)' "example.txt" ``` This command searches for lines in `example.txt` that contain `search_pattern` followed by `following_pattern`. ## Exercise Create a shell script that uses a positive lookahead to search for a pattern that is followed by another pattern in a file and prints the matching lines. ### Solution ```bash #!/bin/bash file="example.txt" pattern="search_pattern" following_pattern="following_pattern" grep -P "$pattern(?=$following_pattern)" "$file" ``` # Applications of regular expressions in system administration and data analysis Regular expressions can be applied to a wide range of tasks in system administration and data analysis. This section covers some common applications of regular expressions in these areas. - Parsing log files for specific error messages. - Extracting specific information from configuration files. - Analyzing data in CSV or JSON format. 
Here's an example of using `grep` to search for specific error messages in a log file: ```bash grep "Error: " "log_file.log" ``` This command displays all lines in `log_file.log` that contain the specified error message. ## Exercise Create a shell script that searches for specific error messages in a log file and prints the matching lines. ### Solution ```bash #!/bin/bash file="log_file.log" error_message="Error: " grep "$error_message" "$file" ``` # Conclusion and further resources for learning In this textbook, we've covered the fundamentals of using regular expressions for manipulating files in Linux. There's still much more to learn and explore, but we hope this textbook has provided a solid foundation for further study. Some further resources for learning include: - Online tutorials and courses on regular expressions. - Books on Linux command-line tools and text processing. - Community forums and discussion groups for regular expressions and Linux. By exploring these resources and practicing your skills, you'll continue to expand your knowledge and become a more proficient Linux user.
\begin{document} \makeatletter \renewcommand{\thesection.\arabic{equation}}{\thesection.\arabic{equation}} \@addtoreset{equation}{section} \makeatother \baselineskip 18pt \begin{titlepage} \begin{center} \baselineskip=16pt {\Large\bf {Dynamical Wave Function Collapse Models in Quantum Measure Theory}} \vskip 10.mm Fay Dowker and Yousef Ghazi-Tabatabai\ \end{center} \par \begin{center} {\bf Abstract} \end{center} \begin{quote} The structure of Collapse Models is investigated in the framework of Quantum Measure Theory, a histories-based approach to quantum mechanics. The underlying structure of coupled classical and quantum systems is elucidated in this approach which puts both systems on a spacetime footing. The nature of the coupling is exposed: the classical histories have no dynamics of their own but are simply tied, more or less closely, to the quantum histories. \end{quote} \hrule width 5.cm \vskip 5mm {\small \noindent Blackett Laboratory, Imperial College, London SW7 2AZ, UK. \\ } \end{titlepage} \setcounter{equation}{0} \section{Introduction} Models of ``spontaneous localisation'' or ``dynamical wavefunction collapse'' are observer independent alternatives to standard Copenhagen quantum theory (see \cite{Bassi:2003gd} for a review). These models have a generic structure: there is a quantum state $\Psi$ which undergoes a stochastic evolution in Hilbert space and there is a ``classical'' (c-number) entity -- call it $\alpha$ -- with a stochastic evolution in spacetime. The stochastic dynamics for the two entities -- $\Psi$ and $\alpha$ -- are coupled together. The stochastic dynamics in Hilbert space tends to drive $\Psi$ into an eigenstate of an operator $\hat{\alpha}$ that corresponds to $\alpha$. And the probability distribution for the realised values of $\alpha$ depends on $\Psi$ so that the history of $\alpha$ follows, noisily, the expectation value of $\hat\alpha$ in $\Psi$. 
That collapse models have both quantum and classical aspects has been pointed out before, notably by Di\'osi. The nature of this interaction between the classical and quantum parts of these models is, however, somewhat obscured by the profound difference in the nature of their descriptions: the classical variable traces out a history in spacetime and the quantum state traces out its evolution in Hilbert space. In order to illuminate the nature of the quantum-classical coupling within collapse models we will, in the case of a concrete and specific example, recast the formalism into the framework of {\it {generalised measure theory}} \cite{Sorkin:1994dt} in which both classical and quantum systems are treated on as equal a footing as possible. The classical variables will continue to have a spacetime description but the quantum system will now also be described in terms of its spacetime histories and not fundamentally in terms of any state in Hilbert space. The model we will focus on is a discrete, finite, 1+1 dimensional lattice field theory. This is a useful model because it is completely finite (so long as we restrict ourselves to questions involving finite times) and expressions can be written down exactly and also because there is a well-defined background with non-trivial causal structure, so that questions of causality can be explored. We will show that the model contains both ``classical'' and ``quantum'' histories, and demonstrate the nature of their interaction. We will show that one choice of ontology for collapse models, the Bell ontology \cite{Bell:1987i}, corresponds to coarse graining over the quantum histories. We will also show how the well-known relationship between collapse models and open quantum systems coupled to an environment reveals itself in this histories framework. 
\section{Quantum measure theory}\label{sec:QMT} We start with a brief review of generalised measure theory and quantum measure theory and refer to \cite{Sorkin:1994dt,Sorkin:1995nj,Salgado:1999pu, Sorkin:2006wq,Sorkin:2007} for more details. A generalised measure theory consists of a triple, $(\Omega, \Buchstabe{A}, \mu)$, of a space of histories, an event algebra and a measure. The space of histories, $\Omega$, contains all the ``fine grained histories'' or ``formal trajectories'' for the system, {\it e.g.} for $n$-particle mechanics -- classical or quantum -- a history would be a set of $n$ trajectories in spacetime, and for a scalar field theory, a history would be a field configuration on spacetime. The event algebra, $\Buchstabe{A}$, contains all the (unasserted) propositions that can be made about the system. We will call elements of $\Buchstabe{A}$ {\textit{events}}, following standard terminology in the theory of stochastic processes. In cases where $\Omega$ is finite, $\Buchstabe{A}$ can be identified with the power set, $2^\Omega$. When $\Omega$ is infinite, $\Buchstabe{A}$ can be identified with an appropriate ring of sets contained in the power set: $\Buchstabe{A} \subset 2^\Omega$.\footnote {$\Buchstabe{A}$ is a Boolean algebra with addition in the algebra corresponding to symmetric difference and multiplication in the algebra corresponding to intersection. We will not employ this algebraic notation in this paper.} Predictions about the system --- the dynamical content of the theory --- are to be gleaned, in some way or another, from a generalised measure $\mu$, a non-negative real function on $\Buchstabe{A}$. $\mu$ is the dynamical law and initial condition rolled into one.
Given the measure, we can construct the following series of symmetric set functions, which are sometimes referred to as the Sorkin hierarchy\footnote{These are the generalised interference terms introduced in \cite{Sorkin:1994dt}}: \begin{align*} I_1(X) & \equiv \mu(X) \\ I_2(X,Y) & \equiv \mu(X\sqcup Y) - \mu(X) - \mu(Y) \\ I_3(X, Y, Z) & \equiv \mu(X\sqcup Y \sqcup Z) - \mu(X\sqcup Y) - \mu(Y\sqcup Z) - \mu(Z\sqcup X) \\ {}&\ + \mu(X) + \mu(Y) + \mu(Z) \end{align*} and so on, where $X$, $Y$, $Z$, {\it etc.\ }are disjoint elements of $\Buchstabe{A}$, as indicated by the symbol `$\sqcup$' for disjoint union. A {\it measure theory of level $k$} is defined as one which satisfies the sum rule $I_{k+1}=0$. It is known that this condition implies that all higher sum rules are automatically satisfied, {\it viz.} $I_{k+n}=0$ for all $n\geq 1$ \cite{Sorkin:1994dt}. A level 1 theory is thus one in which the measure satisfies the usual Kolmogorov sum rules of classical probability theory, classical Brownian motion being a good example. A level 2 theory is one in which the Kolmogorov sum rules may be violated but $I_3$ is nevertheless zero. Any unitary quantum theory can be cast into the form of a generalised measure theory and its measure satisfies the condition $I_3 = 0$. We refer to level 2 theories, therefore, as {\it quantum} measure theories.
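As an illustrative aside (not part of the formalism above), the level structure is easy to verify numerically for a toy quantum measure built from complex amplitudes, $\mu(A) = |\sum_{h\in A}\psi(h)|^2$: the Kolmogorov rule $I_2 = 0$ generically fails, while $I_3$ vanishes identically. The three amplitudes below are arbitrary sample values.

```python
# Toy quantum (level 2) measure on three histories a, b, c: the measure of an
# event (a set of histories) is the squared modulus of its summed amplitude.
amp = {'a': 1.0 + 0.5j, 'b': -0.3 + 0.8j, 'c': 0.6 - 0.2j}  # arbitrary choices

def mu(event):
    return abs(sum(amp[h] for h in event)) ** 2

def I2(X, Y):
    # second Sorkin interference term (X, Y disjoint)
    return mu(X | Y) - mu(X) - mu(Y)

def I3(X, Y, Z):
    # third Sorkin interference term (X, Y, Z pairwise disjoint)
    return (mu(X | Y | Z) - mu(X | Y) - mu(Y | Z) - mu(Z | X)
            + mu(X) + mu(Y) + mu(Z))

X, Y, Z = {'a'}, {'b'}, {'c'}
print(I2(X, Y))     # generically nonzero: quantum interference
print(I3(X, Y, Z))  # zero for any measure quadratic in amplitudes
```

Because $\mu$ is a quadratic form in the amplitudes, the pairwise cross terms in $I_3$ cancel exactly, which is the content of the $I_3 = 0$ sum rule.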
The existence of a quantum measure, $\mu$, is more or less equivalent \cite{Sorkin:1994dt} to the existence of a {\it decoherence functional}, $D(\,\cdot\,\,;\,\cdot\,)$, a complex function on $\Buchstabe{A}\times \Buchstabe{A}$ satisfying \cite{Hartle:1989,Hartle:1992as}: \noindent (i) Hermiticity: $D(X\,;Y) = D(Y\,;X)^*$ ,\ $\forall X, Y \in \Buchstabe{A}$; \noindent (ii) Additivity: $D(X\sqcup Y\,; Z) = D(X\,;Z) + D(Y\,;Z)$ ,\ $\forall X, Y, Z \in \Buchstabe{A}$ with $X$ and $Y$ disjoint; \noindent (iii) Positivity: $D(X\,;X)\ge0$ ,\ $\forall X\in \Buchstabe{A}$; \noindent (iv) Normalization: $D(\Omega \,;\Omega)=1$.\footnote{The normalisation condition may turn out not to be necessary, but we include it because all the quantum measures we consider in this paper will satisfy it.} \noindent The quantal measure is related to the decoherence functional by \begin{equation} \mu(X) = D(X \,; X)\quad \forall X \in \Buchstabe{A} \,. \end{equation} The quantity $D(X\,;Y)$ is interpretable as the quantum interference between two sets of histories in the case when $X$ and $Y$ are disjoint. \section{The lattice field model} We review the lattice field model \cite{Dowker:2002wm, Dowker:2004zn} whose structure we will investigate. The model is based on a unitary QFT on a 1+1 null lattice \cite{Destri:1987ze}, which becomes a collapse model on the introduction of local ``hits'' driving the state into field eigenstates. The spacetime lattice is a lightcone discretisation of a cylinder, $N$ vertices wide and periodic in space. It extends to the infinite future, and the links between the lattice vertices are left or right going null rays. Figure \ref{fig:lattice} shows a part of such a spacetime lattice; identifying the leftmost vertices with the rightmost, we see that $N=6$.
A spacelike surface $\sigma$ is a maximal set of mutually spacelike links, and consists of $N$ leftgoing links and $N$ rightgoing links cut by the surface; an example of a spatial surface is shown in figure \ref{fig:lattice}. We assume an initial spacelike surface $\sigma_0$. \begin{figure} \caption{The light cone lattice. $\sigma_0$ is the initial surface and $\sigma_n$ is a generic spacelike surface. The surface $\sigma_{n+1}$ is shown after the vertex $v_{n+1}$ is evolved over. A vertex $v_i$ is shown with its two outgoing links: $l_{2i-1}$ to the left and $l_{2i}$ to the right.} \label{fig:lattice} \end{figure} An assignment of labels, $v_1, v_2, v_3, \dots$, to the vertices to the future of $\sigma_0$ is called ``natural'' if $i< j$ whenever the vertex labelled $v_i$ is to the causal past of the vertex labelled $v_j$. A natural labelling is equivalent to a linear extension of the (partial) causal order of the vertices. A natural labelling, $v_1, v_2, \dots$ is also equivalent to a sequence of spatial surfaces, $\sigma_1, \sigma_2, \dots$ where the surface $\sigma_n$ is defined such that between it and $\sigma_0$, lie exactly the vertices $v_1, \dots v_n$. One can think of the natural labelling as giving an ``evolution'' rule for the spacelike surfaces: at time step $n$ the surface creeps forward by one ``elementary motion'' across vertex $v_n$. For the purpose of this paper, it is convenient to consider a fixed natural labelling. Nothing will depend on the natural labelling chosen; all mathematical quantities will be independent of the choice. The local field variables $\Phi$ live on the links. These field variables take only two values $\{0,1\}$, so that on each link there is a qbit Hilbert space spanned by the two field eigenstates $\{\ket{0}, \ket{1}\}$. As the field variables live on the links, it is convenient to have a labelling of the links.
We choose a labelling $l_a$, $a = 1,2,\dots$, such that $l_{2i-1}$ and $l_{2i}$ are the left-going and right-going outgoing links, respectively, from vertex $v_i$ (see figure \ref{fig:lattice}). So, as vertex label $i$ increases from 1 to $n$, the link label $a$ runs from $1$ to $2n$. We denote the qbit Hilbert space related to link $l_a$ by $H_{l_a}$. The initial state $|\psi_0\rangle$ on surface $\sigma_0$ is an element of the $2^{2N}$ dimensional Hilbert space $H_{\sigma_0}$ which is a tensor product of the $2N$ 2-dimensional Hilbert spaces on each link cut by $\sigma_0$, $H_{\sigma_0}=\displaystyle{\bigotimes_{l_a\in\sigma_0}}H_{l_a}$. Similarly there is a $2^{2N}$ dimensional Hilbert space for each spacelike surface $\sigma_i$ and they are isomorphic via the isomorphisms, tied to the lattice, which map each link's qbit Hilbert space onto the Hilbert spaces for the links vertically above it on the lattice. In this way we can identify the Hilbert spaces $H_{\sigma_i}$ ($=\displaystyle{\bigotimes_{l_a\in\sigma_i}}H_{l_a}$) on each surface and describe the time evolution with a state evolving in a single Hilbert space $H_q$ ($\simeq H_{\sigma_i}$) of the system. \subsection{The unitary theory} In the standard unitary version of this local field theory, there is a local unitary evolution operator, $R_i$, for each $v_i$, which acts unitarily on the 4-dimensional factor of the Hilbert space associated to the two ingoing and two outgoing links for $v_i$, and acts as the identity operator on all other factors. The state vector is evolved from $\sigma_{i-1}$ to $\sigma_i$ by applying $R_i$ \cite{Destri:1987ze}. So in figure \ref{fig:lattice} we see that the surface $\sigma_n$ evolves `over' vertex $v_{n+1}$ to give us surface $\sigma_{n+1}$. Now if $l_j,l_k$ are the two links going `into' vertex $v_{n+1}$, and $l_{2(n+1)-1},l_{2(n+1)}$ the two outgoing links, the operator $R_{n+1}$ maps $H_{l_j}\otimes H_{l_k}$ to $H_{l_{2(n+1)-1}}\otimes H_{l_{2(n+1)}}$. 
Further, for the links in the intersection of $\sigma_n$ and $\sigma_{n+1}$, $R_{n+1}$ acts as the identity. Since the surfaces $\sigma_n, \sigma_{n+1}$ only differ on the links $l_j,l_k,l_{2(n+1)-1},l_{2(n+1)}$, we can put this together to get $R_{n+1}:H_{\sigma_n}\rightarrow H_{\sigma_{n+1}}$. Since we have identified the Hilbert spaces $H_{\sigma_i}$, we regard $R_{n+1}$ as evolving a state in the `system Hilbert space' $H_q$, so we write: \begin{eqnarray} \ket{\psi_{n+1}}&=&R_{n+1}\ket{\psi_n}\nonumber \\ &=& R_{n+1} R_n \ldots R_1 \ket{\psi_0}\,. \label{eqn:quantum evolution} \end{eqnarray} We define the unitary time evolution operator, $U(n)$, by \begin{equation}\label{eq:Uoperator} U(n) \equiv R_n\; R_{n-1} \ldots R_1\;. \end{equation} To cast the theory into a quantum measure theory framework, we need to identify the space, $\Omega_q$ of histories, an event algebra, $\Buchstabe{A}_q$, of suitable subsets of $\Omega_q$ and the decoherence functional, $D_q(\,\cdot\,\,;\,\cdot\,)$. In the lattice field theory the set of histories, $\Omega_q$, is the set of all field configurations on the lattice to the future of $\sigma_0$. A field configuration, $\Phi$, is an assignment of $0$ or $1$ to every link, in other words $\Phi$ is a function from the infinite set of links, $\{l_a: a = 1,2,\dots\}$, to $\mathbb{Z}_2$. The events that we want to consider are those which refer to properties of the histories which are bounded in time. In other words for $A \subset \Omega_q$ to be an event there must exist an integer $m$ such that to determine whether or not a field configuration, $\Phi$ is in $A$ it is only necessary to know the values of $\Phi$ between $\sigma_0$ and $\sigma_m$. For example, the subset \begin{equation*} E_k =\{\Phi \in \Omega_q: \Phi(l_{2k}) = 1\} \end{equation*} is an event for any fixed $k$.
But the subset \begin{equation*} E = \{\Phi \in \Omega_q: \exists k \ \text{s.t.} \ \Phi(l_{2k}) = 1 \} \end{equation*} is {\it not} an event (at least not for the purposes of the current paper). We want to consider all events that are bounded in time. To this end, for each positive integer $n$ we define $\Omega_q^n$ to be the set of field configurations, $\Phi^n$, on the first $2n$ links, $l_1,\dots l_{2n}$, outgoing from the first $n$ vertices $v_1,\dots v_n$. (Recall that we have chosen an arbitrary, but fixed, natural labelling of the vertices which gives unambiguous meaning to ``the first $2n$ links''.) We define the cylinder set $Cyl(\Phi^n)$ to be the set of all elements of $\Omega_q$ which coincide with $\Phi^n$ on $l_1,\dots l_{2n}$: \begin{equation*} Cyl(\Phi^n) \equiv \{\Phi \in \Omega_q | \Phi = \Phi^n\ {\text{when restricted to the first}} \ 2n \ {\text{links}} \}\;. \end{equation*} Each cylinder set, $Cyl(\Phi^n)$ is an event that is bounded in time: it is the event ``the first $2n$ values of the field agree with $\Phi^n$.'' The event algebra, $\Buchstabe{A}_q$, then, is the (unital) ring of sets generated, under finite union and intersection, by all the cylinder sets, $Cyl(\Phi^n)$, for all $n$ and all $\Phi^n \in \Omega^n_q$. Two cylinder sets have nonempty intersection if and only if one contains the other and the complement of a cylinder set (that for $\Phi^n$, say) is a disjoint union of finitely many cylinder sets (those for all the configurations on $l_1 \dots l_{2n}$ that are {\textit{not}} $\Phi^n$). Thus, all elements of $\Buchstabe{A}_q$ are finite, disjoint unions of cylinder sets. Given an event, $A \in \Buchstabe{A}_q$, there is indeed an integer, $m$, such that to determine whether or not a field configuration, $\Phi$ is in $A$ it is only necessary to know the values of $\Phi$ between $\sigma_0$ and $\sigma_m$. We will refer to the minimum such $m$ as the {\it {time extent}} of $A$. 
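For concreteness, here is a small finite-truncation sketch of how cylinder sets behave as events (histories cut off after the first four links; the representation of a cylinder set as a set of prefix-extensions is an illustrative device, not the paper's notation):

```python
from itertools import product

LINKS = 4  # stand-in for the first 2n links of the lattice
histories = [h for h in product((0, 1), repeat=LINKS)]  # truncated Omega_q

def cyl(prefix):
    """Cylinder set: all (truncated) field configurations extending prefix."""
    return {h for h in histories if h[:len(prefix)] == prefix}

A = cyl((1,))      # the event "Phi(l_1) = 1"
B = cyl((1, 0))    # refines A: "Phi(l_1) = 1 and Phi(l_2) = 0"
C = cyl((0,))

# Two cylinder sets have nonempty intersection iff one contains the other.
assert B <= A and not (A & C)

# The complement of a cylinder set is a disjoint union of cylinder sets.
assert set(histories) - A == C

# A cylinder set is the disjoint union of its one-link refinements.
assert A == cyl((1, 0)) | cyl((1, 1))
```

The three assertions mirror the closure properties that make the cylinder sets generate a ring of events under finite union, intersection and complement.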
The time extent of the cylinder set $Cyl(\Phi^n)$ is clearly $n$ and the time extent of an event $A$ is no greater than the maximum of the time extents of the cylinder sets whose union $A$ is. Consider the example given previously, $E_k$. We can see that this is the union of all the cylinder sets for the $\Phi^k$ such that $\Phi^k(l_{2k}) = 1$: \begin{equation} E_k = \mathop{\bigcup_{\Phi^k\ \text{s.t.}} }_{\Phi^k(l_{2k}) =1} Cyl(\Phi^k)\;. \end{equation} The time extent of event $E_k$ is $k$. A cylinder set is an event which corresponds to the history of the field up to a finite time. For each cylinder set, $Cyl(\Phi^n)$, the {\textit {class}} operator, $C(\Phi^n)$ \cite{Hartle:1992as}, for that finite history is given by \begin{equation}\label{eq:classoperator} C(\Phi^n) \equiv P^H_{2n}({\Phi^n_{2n}})\; P^H_{2n-1}({\Phi^n_{2n-1}}) \dots P^H_2({\Phi^n_{2}})\; P^H_1({\Phi^n_{1}}) \; . \end{equation} $P^H_a({\Phi^n_a})$ is the projection operator onto the eigenspace corresponding to the value, $\Phi_a^n = 0$ or $1$, of $\Phi^n$ at link $l_a$, in the Heisenberg Picture: \begin{equation} P^H_a(\Phi^n_a) = U([(a+1)/2])^\dagger\; P_a(\Phi^n_a)\; U([(a+1)/2]) \end{equation} where $P_a(\Phi^n_a)$ is the Schr\"odinger Picture projector, $U(k)$ is the unitary time evolution operator (\ref{eq:Uoperator}) and $[\cdot]$ denotes integer part. The Schr\"odinger picture projector is \begin{equation} P_a(\Phi^n_a) = \ket{\Phi^n_a} \bra{\Phi^n_a} \; , \end{equation} acting on the factor of $H_q$ associated with $l_a$ (tensored with the identity operator on the other factors).
Expressed in the Schr\"odinger Picture the class operator is \begin{align}\label{eq:classoperatoragain} C(\Phi^n) &= U(n) P_{2n}({\Phi^n_{2n}})\; P_{2n-1}({\Phi^n_{2n-1}})\; R_n \dots\nonumber\\ {}& \quad \quad \dots P_4({\Phi^n_{4}})\; P_3({\Phi^n_{3}}) \; R_2 \;P_2({\Phi^n_{2}})\; P_1({\Phi^n_{1}}) \;R_1\;, \end{align} which might be summarised by the slogan ``evolve, project, evolve, project...'' We define a useful vector valued amplitude for the finite history $\Phi^n$ by applying its class operator to the initial state, \begin{equation}\label{eq:unitaryamp} \ket{\Phi^n} \equiv C(\Phi^n) \ket{\psi_0}\; . \end{equation} This vector is sometimes referred to in the literature as a ``branch'' \cite{Hartle:1992as}. The decoherence functional, $D_q$, is defined on cylinder sets by the standard expression \cite{Hartle:1992as} \begin{equation}\label{eq:Dquantum} D_q(Cyl(\Phi^n)\,; Cyl(\m{\Phi}{}^m)) \equiv \braket{\Phi^n}{{}\,\m{\Phi}{}^m}\,. \end{equation} The decoherence functional is defined on the whole event algebra, $\Buchstabe{A}_q$, by additivity since all events are finite disjoint unions of cylinder sets. Although we have used the natural labelling that we chose for the vertices at the beginning, the decoherence functional thus constructed is independent of the chosen order and depends only on the vertices' causal order because the projectors and unitary evolution operators for spacelike separated vertices and links commute \cite{Dowker:2002wm}. Note that the properties of the projectors ensure that the formula (\ref{eq:Dquantum}) for the decoherence functional is consistent with the condition of additivity when one cylinder set is a disjoint union of other cylinder sets. 
For example, $Cyl(\Phi^n)$ is a disjoint union of all events $Cyl(\Phi^{n+1})$ such that $\Phi^{n+1}$ agrees with $\Phi^n$ on the first $2n$ links and the decoherence functional of $Cyl(\Phi^n)$ (with any other event $B$) is indeed given as a sum: \begin{equation} D_q(Cyl(\Phi^n)\,;B) = \mathop{\sum_{\Phi^{n+1}\ \text{s.t.}} }_{\Phi^{n+1}|_n = \Phi^n} D_q(Cyl(\Phi^{n+1})\,; B)\, , \end{equation} where the sum is over all four field configurations on the first $2(n+1)$ links which agree with $\Phi^n$ on the first $2n$ links. If the initial state is a mixed state then the decoherence functional is a convex combination of pure state decoherence functionals. This decoherence functional gives a level $2$\ measure, $\mu_q$, on $\Buchstabe{A}_q$ (see section \ref{sec:QMT}). \subsection{The collapse model with the Bell ontology} The above unitary quantum field theory inspired a collapse model field theory \cite{Dowker:2002wm} which, with the Bell ontology, can be understood as a level 1 (classical) measure theory in the Sorkin hierarchy (see section \ref{sec:QMT}) as follows. The space, $\Omega_c$ of all possible histories/formal trajectories is an identical copy of that for the quantum field theory, namely the set of all field configurations on the semi-infinite lattice to the future of $\sigma_0$. We will refer to field configurations in $\Omega_c$ as $\alpha$ in order to distinguish them from the elements of $\Omega_q$ which we refer to (as above) as $\Phi$. The event algebra $\Buchstabe{A}_c$ consists of finite unions of cylinder sets of elements of $\Omega_c$ and so is isomorphic to $\Buchstabe{A}_q$. The dynamics of the collapse model is given by a classical (level 1) measure. Since a level 1 measure is also level 2 -- each level of the hierarchy includes the levels below it -- a classical measure can also be given in terms of a decoherence functional and in this case the decoherence functional, $D_c$ is given as follows.
Let $\alpha^n$ be a field configuration on the first $2n$ links. Define a vector valued amplitude $\ket{\alpha^n}\in H_q$ for each cylinder set $Cyl(\alpha^n)$: \begin{equation} \ket{\alpha^n} \equiv J_{2n}({\alpha^n_{2n}})\; J_{2n-1}({\alpha^n_{2n-1}})\; R_{n} \dots R_2\; J_2({\alpha^n_{2}})\; J_1({\alpha^n_{1}})\; R_1 \ket{\psi_0}\, , \end{equation} where $\ket{\psi_0}$ is the initial state on $\sigma_0$ and $J_a({\alpha^n_a})$ is the Kraus operator implementing a ``partial collapse'' onto the eigenspace corresponding to the value of $\alpha^n$ at link $l_a$. More precisely, \begin{align} J_a(0) &=\frac{1}{\sqrt{1+X^2}}(\ket{0}\bra{0} + X \ket{1}\bra{1}) \label{eqn:J(0)} \\J_a(1) &=\frac{1}{\sqrt{1+X^2}}\left(X \ket{0} \bra{0} + \ket{1}\bra{1}\right)\, \label{eqn:J(1)} \end{align} (where $0\le X\le 1$) acting on the factor of $H_q$ associated with link $l_a$ (tensored with the identity operator for the other factors). Then the decoherence functional, $D_c$ is defined on cylinder sets by \begin{equation}\label{eq:Dclassical} D_c(Cyl(\alpha^n)\,; Cyl(\m{\alpha}{}^n)) \equiv \braket{\alpha^n}{{}\,\m{\alpha}{}^n}\delta_{\alpha^n\, \m{\alpha}{}^n}\, , \end{equation} where $\delta_{\alpha^n\, \m{\alpha}{}^n}$ is a Kronecker delta which is 1 if the two field configurations are identical on all $2n$ links and zero otherwise. The decoherence functional is then extended to the whole event algebra, $\Buchstabe{A}_c$ by additivity since all events are finite disjoint unions of cylinder sets. In particular, if $m>n$, the cylinder set $Cyl(\alpha^n)$ with time extent $n$ is a disjoint union of cylinder sets with time extent $m$, and so it suffices to define $D_c$ as above for cylinder sets of the same time extent: $D_c(Cyl(\alpha^n)\,; Cyl(\m{\alpha}{}^m))$ is given by additivity.
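As a quick numerical aside (with an arbitrary sample value of $X$), the partial-collapse operators of equations (\ref{eqn:J(0)}) and (\ref{eqn:J(1)}) can be written as $2\times 2$ matrices in the field basis on a single link and shown to satisfy the completeness relation $J_a(0)^2 + J_a(1)^2 = 1$:

```python
# J(0), J(1) in the basis {|0>, |1>} on one link, for a sample collapse strength.
X = 0.4  # collapse parameter, 0 <= X <= 1 (arbitrary sample value)
N = (1 + X**2) ** -0.5
J0 = [[N, 0], [0, N * X]]   # (|0><0| + X|1><1|) / sqrt(1+X^2)
J1 = [[N * X, 0], [0, N]]   # (X|0><0| + |1><1|) / sqrt(1+X^2)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

J0sq, J1sq = matmul(J0, J0), matmul(J1, J1)
S = [[J0sq[i][j] + J1sq[i][j] for j in range(2)] for i in range(2)]
print(S)  # the 2x2 identity: J(0)^2 + J(1)^2 = 1
```

This completeness relation is what makes $\mu_c$ additive under refinement of cylinder sets, as used below.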
Again, the decoherence functional thus constructed is independent of the chosen natural labelling and depends only on the vertices' causal order because of spacelike commutativity of the evolution operators and Kraus operators. $D_c$ is well-defined; in particular, the additivity condition is consistent with the definition (\ref{eq:Dclassical}). For example, consider \begin{equation*} D_c(Cyl(\alpha^n)\,;Cyl(\alpha^n))\,. \end{equation*} The event $Cyl(\alpha^n)$ is a disjoint union of all events $Cyl(\alpha^{n+1})$ for which $\alpha^{n+1}$ agrees with $\alpha^n$ on the first $2n$ links and indeed we have: \begin{equation} D_c(Cyl(\alpha^n)\,; Cyl(\alpha^n)) = \mathop{\sum_{\alpha^{n+1}\ \text{s.t.}}}_{\alpha^{n+1}|_n = \alpha^n}\; \mathop{\sum_{\m{\alpha}{}^{n+1}\ \text{s.t.}}}_{\m{\alpha}{}^{n+1}|_n = \alpha^n} D_c(Cyl(\alpha^{n+1})\,; Cyl(\m{\alpha}{}^{n+1}))\, . \end{equation} In verifying this, the crucial property is that of the Kraus operators: $J_0^2 + J_1^2 = 1$ and the fact that distinct histories have no interference, as expressed by the Kronecker delta. Note that without the Kronecker delta, equation (\ref{eq:Dclassical}) would not be a consistent definition of a decoherence functional satisfying additivity. This decoherence functional is level 1 (classical): it satisfies \begin{equation} D_c(Y\,; Z) = D_c(Y\cap Z\,; Y\cap Z) \end{equation} and this implies the Kolmogorov sum rule is satisfied by the measure $\mu_c$ defined by $\mu_c(Y) \equiv D_c(Y\,;Y)$. Being a level 1 measure, $\mu_c$ has a familiar interpretation as a probability measure. Indeed the measure $\mu_c$ defined on the cylinder sets is enough, via the standard methods of measure theory, to define a unique probability measure on the whole sigma algebra generated by the cylinder sets. There is, as yet, no analogous result for a quantal measure such as $\mu_q$. Moreover, there is, as yet, no consensus on how to {\textit {interpret}} a quantum measure theory.
We will not address this important question here but refer to \cite{Sorkin:2006wq,Sorkin:2007,Dowker:2007zz} for a new proposal for an interpretation of quantum mechanics within the framework of quantum measure theory. \subsection{Quantum and Classical} In every collapse model there is a coupling between classical stochastic variables and a quantum state. How is this classical-quantum coupling manifested in the generalised measure theory form of the lattice collapse model just given? We now show that there is indeed a quantum measure lurking within and we will expose the nature of the interaction of the quantal variables with the classical variables. Consider a space of histories $\Omega_{qc}$ which is a direct product of the two spaces introduced above, $\Omega_{qc}= \Omega_q \times \Omega_c$, so that elements of $\Omega_{qc}$ are pairs of lattice field configurations, $(\Phi, \alpha)$. We will refer to the elements of $\Omega_q$ as {\textit{quantum histories/fields}} and those of $\Omega_c$ as {\textit {classical histories/fields}}. The event algebra $\Buchstabe{A}_{qc}$ is the ring of sets generated by the cylinder sets, $Cyl(\Phi^n, \alpha^n)$, where the cylinder set contains all pairs $(\Phi, \alpha)$ such that $\Phi$ coincides with $\Phi^n$ and $\alpha$ coincides with $\alpha^n$ on the first $2n$ links. We now construct a decoherence functional on $\Buchstabe{A}_{qc}$ by taking the unitary decoherence functional, $D_q$ on $\Buchstabe{A}_q$, defined above and ``tying'' the classical histories to the quantum histories by suppressing the decoherence functional by an amount that depends on how much the classical and quantum field configurations differ. The more they differ, the greater the suppression. 
In detail, define $D_{qc}$ on $\Buchstabe{A}_{qc}$ by first defining it on the cylinder sets: \begin{align}\label{eq:Dcoupled} D_{qc}(Cyl(\Phi^n,\alpha^n)\,;\, &Cyl(\m{\Phi}{}^n,\m{\alpha}{}^n)) \equiv \nonumber \\ {}&D_q(Cyl(\Phi^{n})\,; Cyl( \m{\Phi}{}^{n}))\; \frac{ X^{d(\Phi^n,\alpha^n)+ d(\m{\Phi}{}^n,\m{\alpha}{}^n)} } { (1+X^2)^{2n}} \; \delta_{\alpha^n\,\m{\alpha}{}^n} \end{align} where $0\le X \le 1$ and $d(\Phi^n,\alpha^n)$ is equal to the number of links on which $\Phi^n$ and $\alpha^n$ differ. As usual it suffices to define $D_{qc}$ for arguments which have the same time extent, $n$, because a cylinder set with time extent $m < n$ is a finite disjoint union of cylinder sets with time extent $n$. $D_{qc}$ is extended to the full event algebra by additivity. Checking that the definition (\ref{eq:Dcoupled}) of $D_{qc}$ on the cylinder sets is consistent with the property of additivity follows the same steps as for $D_c$ and $D_q$. $D_{qc}$ is level 2 in the Sorkin hierarchy, although it is clearly classical on $\Omega_c$. We now prove some lemmas regarding $D_{qc}$ which lay bare the structure of our collapse model of a lattice field in histories form. \begin{lemma} Let $(\Omega_q, \Buchstabe{A}_q, D_q)$, $(\Omega_c, \Buchstabe{A}_c, D_c)$ and $(\Omega_{qc}, \Buchstabe{A}_{qc}, D_{qc})$ be defined as above for the lattice field theory. Then the decoherence functional for the collapse model, $D_c$ is equal to $D_{qc}$ coarse grained over $\Omega_q$: \begin{equation} D_c(A\,;\m{A}) = D_{qc}(\Omega_q\times A\,; \Omega_q\times \m{A}) \ \ \forall A, \m{A} \in \Buchstabe{A}_c\;. \end{equation} \end{lemma} \begin{proof} It suffices to prove that \begin{equation} D_c(Cyl(\alpha^n)\,;Cyl(\m{\alpha}{}^n)) = \sum_{\Phi^n, \m{\Phi}{}^n} D_{qc}(Cyl(\Phi^n,\alpha^n)\,;Cyl(\m{\Phi}{}^n,\m{\alpha}{}^n))\;, \end{equation} where the double sum is over all field configurations, $\Phi^n$ and $\m{\Phi}{}^n$, on the first $2n$ links. 
The result follows by additivity because \begin{equation} \bigcup_{\Phi^n}Cyl(\Phi^n,\alpha^n) = \Omega_q \times Cyl(\alpha^n)\;. \end{equation} Recall the definition of $D_c$, \begin{equation*} D_c(Cyl(\alpha^n)\,; Cyl(\m{\alpha}{}^n)) = \braket{\alpha^n}{{}\,\m{\alpha}{}^n}\delta_{\alpha^n\, \m{\alpha}{}^n}\, , \end{equation*} where \begin{equation*} \ket{\alpha^n} = J_{2n}({\alpha^n_{2n}})\, J_{2n-1}({\alpha^n_{2n-1}}) \, R_n \dots R_2 \,J_2({\alpha^n_{2}})\, J_1({\alpha^n_{1}}) \,R_1 \ket{\psi_0}\, . \end{equation*} Each jump operator $J_a(\alpha^n_a)$ is a linear combination of the two projection operators $P_a(1)= \ket{1}\bra{1}$ and $P_a(0)= \ket{0}\bra{0}$ on link $l_a$ (see equations \ref{eqn:J(0)} and \ref{eqn:J(1)}). Substituting in this linear combination of projectors for each $J_a(\alpha^n_a)$ and expanding out, we see that the ket becomes a sum of $2^{2n}$ terms, one for each possible field configuration -- call it $\Phi^n$ -- on the $2n$ links. Each such term is precisely the vector valued amplitude $\ket{\Phi^n}$ (\ref{eq:unitaryamp}) and each term is weighted by a factor \begin{equation*} \frac{X^{d(\alpha^n, \Phi^n)}}{(1+X^2)^n} \end{equation*} from which the result follows. \end{proof} The next lemma shows that if we coarse grain $D_{qc}$ over the classical histories instead, we find a quantum theory exhibiting the symptoms of environmental decoherence. \begin{lemma}\label{lemma:decoh} Define a decoherence functional $\widetilde{D}_q$ on $\Omega_q$ by \begin{equation} \widetilde{D}_q(F\,; \m{F}) \equiv D_{qc}(F\times \Omega_c\,; \m{F}\times \Omega_c)\ \ \forall F, \m{F} \in \Buchstabe{A}_q \;. \end{equation} Then \begin{equation} \widetilde{D}_q(Cyl(\Phi^n)\,; Cyl(\m{\Phi}{}^n))= \left(\frac{2X}{1+X^2}\right)^{d(\Phi^n, \m{\Phi}{}^n)} D_q(Cyl(\Phi^n)\,; Cyl(\m{\Phi}{}^n))\;. \end{equation} \end{lemma} We leave the proof to the appendix. 
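In lieu of the appendix proof, the suppression factor of Lemma \ref{lemma:decoh} can be checked numerically on a handful of links: summing the coupling weight of (\ref{eq:Dcoupled}) over all classical configurations reproduces $(2X/(1+X^2))^{d(\Phi^n, \m{\Phi}{}^n)}$ exactly. The sketch below, with arbitrary toy values, is illustrative only:

```python
from itertools import product

X, L = 0.3, 4  # collapse parameter and number of links (arbitrary toy values)

def d(u, v):
    """Number of links on which two field configurations differ."""
    return sum(a != b for a, b in zip(u, v))

def coarse_grained_weight(Phi, Phibar):
    """Sum over classical histories alpha of X^(d(Phi,a)+d(Phibar,a))/(1+X^2)^L."""
    return sum(X ** (d(Phi, alpha) + d(Phibar, alpha))
               for alpha in product((0, 1), repeat=L)) / (1 + X**2) ** L

Phi, Phibar = (0, 1, 1, 0), (1, 1, 0, 0)  # differ on two links
expected = (2 * X / (1 + X**2)) ** d(Phi, Phibar)
print(abs(coarse_grained_weight(Phi, Phibar) - expected) < 1e-12)  # True
```

The sum factorises over links: a link on which the two quantum configurations agree contributes $(1+X^2)$, a link on which they disagree contributes $2X$, and the normalisation $(1+X^2)^{-L}$ leaves exactly the suppression factor of the lemma.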
Note that the factor suppresses off-diagonal terms in the decoherence functional and so looks as if it is the result of environmental decoherence. \subsection{Equivalence to a model with environment} The system described by decoherence functional $D_{qc}$ on the joint space $\Omega_{qc}$ was not derived from any physical consideration but simply invented as a way to unravel the decoherence functional of the collapse model. However, once obtained, the urge to coarse grain $D_{qc}$ over the classical histories is irresistible and the ``approximately diagonal'' form of the resulting decoherence functional, $\widetilde{D}_q$ on $\Omega_q$ suggests it can be interpreted as having arisen from coupling to an environment. Indeed, the mathematics of collapse models and of open quantum systems that result from coarse graining over an ignored environment is known to be closely related, so it is no surprise to discover that our current model can be understood in this way. In fact, the classical histories in the collapse model can simply be reinterpreted as histories of an environment consisting of variables, one per link, that interact impulsively with the field there, and then have no further dynamics. Let the quantum lattice field, $\Phi$, interact with a collection of environment variables, one for each link, taking values $0$ or $1$. The space of histories for the whole system is $\Omega_{qe} \equiv \Omega_q \times \Omega_e$, where the space of environment histories, $\Omega_e$, is yet another copy of the same space of $\{0,1\}$-field configurations on the semi-infinite lattice. We denote an element of $\Omega_e$ by $E$, an environment configuration on the first $2n$ links by $E^n$, the corresponding cylinder set by $Cyl(E^n)$, and the value of the environment variable on link $a$ by $E^n_a$.
In the standard state vector language, the Hilbert space of the whole system of field, $\Phi$, and the environment is $H_{qe} \equiv H_q \otimes H_e$ where the environment Hilbert space, $H_e$, is an infinite tensor product of qubit Hilbert spaces, $H_{e_a}$, $a = 1,2,\dots$, one for each link $l_a$ on the lattice to the future of $\sigma_0$. \begin{lemma} There is a unitary dynamics of this system such that the unitary decoherence functional which encodes it, $D_{qe}$, is equal to $D_{qc}$ if the environment histories are identified with the classical histories. \end{lemma} \begin{proof} The proof is by construction of such a dynamics. We add, to the unitary dynamics of the field $\Phi$, a one-time interaction between $\Phi$ and the environment variable on each link which establishes a partial correlation between them. Since each environment state lives on exactly one link, it interacts only once and is then fixed, which means that the decoherence functional is diagonal on the environment histories. We begin with the space of histories $\Omega_{qe} = \Omega_q \times \Omega_e$ and the Hilbert space $H_{qe} = H_q \otimes H_e$ where $H_e = \otimes_{a=1}^\infty H_{e_a}$ and each $H_{e_a}$ is a qubit space. The initial state is a tensor product: \begin{equation}\label{eq:Psi0} \ket{\Psi_0} = \ket{\psi_0}_q \otimes_{a =1}^\infty \ket{0}_{e_a} \end{equation} where $ \ket{\psi_0}_q \in H_q$ is the same initial state for the field $\Phi$ as we had before. After each elementary unitary evolution $R_i$ is applied over vertex $i$, two unitary ``partial measurement'' operators $U_{2i-1}$ and $U_{2i}$ -- to be defined -- are applied to the Hilbert spaces associated with the outgoing links $l_{2i-1}$ and $l_{2i}$, respectively. Consider a single link, $l_a$. The factor of the total Hilbert space associated with $l_a$ is the four-dimensional tensor product of the qubit space, $H_{q_a}$, of the $\Phi$ states on $l_a$ and the qubit space $H_{e_a}$. 
In the field representation, the basis of this link Hilbert space is $\{\ket{0}_{q_a}\ket{0}_{e_a}, \ket{1}_{q_a}\ket{0}_{e_a},\ket{0}_{q_a}\ket{1}_{e_a},\ket{1}_{q_a}\ket{1}_{e_a}\}$. The unitary partial measurement operator $U_a$ is defined by its action on this basis: \begin{align} U_a\ket{0}_q\ket{0}_e &= \frac{1}{\sqrt{1+X^2}}\; \ket{0}_q \big(\ket{0}_e + X \ket{1}_e\big)\nonumber\\ U_a\ket{1}_q\ket{0}_e &= \frac{1}{\sqrt{1+X^2}}\; \ket{1}_q \big(X\ket{0}_e + \ket{1}_e\big)\nonumber\\ U_a\ket{0}_q\ket{1}_e &= \frac{1}{\sqrt{1+X^2}}\; \ket{0}_q \big(X\ket{0}_e - \ket{1}_e\big)\nonumber\\ U_a\ket{1}_q\ket{1}_e &= \frac{1}{\sqrt{1+X^2}}\; \ket{1}_q \big(\ket{0}_e - X \ket{1}_e\big)\label{eq:defU}\;, \end{align} where $0\le X\le 1$ and we have suppressed the $a$ label on all the kets. $U_a$ acts as the identity on all other factors in the tensor product Hilbert space for the system. The action of $U_a$ is to leave $\Phi$ eigenstates alone and put the initial $\ket{0}_e$ environment state into a superposition of $\ket{0}_e$ and $\ket{1}_e$, so that the environment eigenstate that is correlated with the $\Phi$ eigenstate is relatively enhanced by a factor $X^{-1}$. 
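Writing (\ref{eq:defU}) as a $4\times4$ matrix in the basis $\{\ket{0}_q\ket{0}_e, \ket{1}_q\ket{0}_e, \ket{0}_q\ket{1}_e, \ket{1}_q\ket{1}_e\}$ makes it straightforward to confirm unitarity directly. The following sketch, for an arbitrary sample value of $X$, does so:

```python
# U_a of eq. (defU): columns are the images of the four basis vectors, in the
# ordering |0>q|0>e, |1>q|0>e, |0>q|1>e, |1>q|1>e.  X = 0.5 is a sample value.
X = 0.5
N = (1 + X**2) ** -0.5
U = [[N,     0,     N * X,  0    ],
     [0,     N * X, 0,      N    ],
     [N * X, 0,    -N,      0    ],
     [0,     N,     0,     -N * X]]

# U is real, so unitarity is U^T U = identity.
UTU = [[sum(U[k][i] * U[k][j] for k in range(4)) for j in range(4)]
       for i in range(4)]
assert all(abs(UTU[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(4) for j in range(4))
```

Each column has unit norm, and the two columns sharing a $\Phi$ eigenvalue are orthogonal because the cross terms $X - X$ cancel; this is why the minus signs in the last two lines of (\ref{eq:defU}) are needed.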
For each cylinder set $Cyl(\Phi^n, E^n)$ we define a vector-valued amplitude, $\ket{\Phi^n, E^n}_{qe} \in H_{qe}$ by evolving the state over each vertex, applying the unitary partial measurements on the outgoing links and projecting onto the values of $\Phi^n$ and $E^n$ on the links: \begin{align} \ket{\,\Phi^n, E^n}_{qe} \equiv \,& Q_{2n}(E^n_{2n})\, P_{2n}(\Phi^n_{2n}) \, Q_{2n-1}(E^n_{2n-1})\, P_{2n-1}(\Phi^n_{2n-1})\nonumber\\ {}&\ \ \ \ U_{2n}\, U_{2n-1}\, R_n \dots \nonumber\\ {}&\ \ \ \ \ \ \ \ \dots Q_2(E^n_2)\, P_2(\Phi^n_{2})\, Q_1(E^n_1) \,P_1(\Phi^n_{1})\nonumber\\ {}&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ U_2\, U_1 \,R_1 \ket{\Psi_0}\, , \end{align} where $\ket{\Psi_0}$ is defined in (\ref{eq:Psi0}), $P_a({\Phi^n_a})$ is the projection operator onto the eigenspace corresponding to the value of $\Phi^n$ at link $l_a$ and $Q_a(E^n_a)$ is the projection operator onto the eigenspace corresponding to the value of $E^n$ at link $l_a$. $P_a({\Phi^n_a})$ is only non-trivial on the factor in $H_q$ associated with link $l_a$ and $Q_a(E^n_a)$ is only non-trivial on the factor in $H_e$ associated with link $l_a$. As a consequence, the $P$ projectors and $Q$ projectors commute. The initial state is a product, each $U_a$ leaves $\Phi$-eigenstates alone and the $Q$ projectors act only on the environment states. We claim that therefore $\ket{\Phi^n, E^n}_{qe}$ is a product, \begin{equation} \ket{\Phi^n, E^n}_{qe} = \ket{\Phi^n}_q \ket{E^n}_e\,, \end{equation} where $\ket{\Phi^n}_q \in H_q$ is the vector-valued amplitude (\ref{eq:unitaryamp}) for the plain vanilla unitary field theory and \begin{equation}\label{eq:ketE} \ket{E^n}_{e} = \frac{X^{d(\Phi^n, E^n)}}{(1+X^2)^n}\; \ket{E^n_1}_{e_1} \ket{E^n_2}_{e_2}\dots \ket{E^n_{2n}}_{e_{2n}} \end{equation} where we have left off the factors of $\ket{0}$ for all the infinitely many links to the future of $\sigma_n$, which play no role. The proof of this claim is given in the appendix.
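The prefactor in (\ref{eq:ketE}) can be read off link by link. From (\ref{eq:defU}), for a single link $l_a$ carrying the field eigenvalue $\Phi^n_a$ and an environment qubit in its initial state,
\begin{equation}
Q_a(E^n_a)\, U_a \ket{\Phi^n_a}_{q_a}\ket{0}_{e_a}
= \frac{X^{\,1-\delta(\Phi^n_a, E^n_a)}}{\sqrt{1+X^2}}\; \ket{\Phi^n_a}_{q_a}\ket{E^n_a}_{e_a}\,,
\end{equation}
where $\delta$ is the Kronecker delta: a factor $1/\sqrt{1+X^2}$ per link, together with one factor of $X$ for every link on which $E^n$ disagrees with $\Phi^n$. Multiplying over the $2n$ links to the past of $\sigma_n$ gives $X^{d(\Phi^n,E^n)}(1+X^2)^{-n}$, as in (\ref{eq:ketE}).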
The decoherence functional, $D_{qe}$, for the total system is given by \begin{align} D_{qe}(Cyl(\Phi^n,E^n)\,;Cyl(\m{\Phi}{}^n,\m{E}{}^n)) &\equiv \braket{\Phi^n,E^n}{\m{\Phi}{}^n,\m{E}{}^n} \\ {}& = \braket{\Phi^n}{\m{\Phi}{}^n}_q \braket{E^n}{\m{E}{}^n}_e\,. \end{align} Using (\ref{eq:ketE}), we see that the decoherence functional is zero unless $E^n = \m{E}{}^n$ and we have \begin{align} D_{qe}(Cyl(\Phi^n,E^n)\,;\,&Cyl(\m{\Phi}{}^n,\m{E}{}^n)) = \nonumber\\ D_q(Cyl(\Phi^{n})\,; Cyl(\m{\Phi}{}^{n}))\; \frac{X^{d(\Phi^n,E^n)+ d(\m{\Phi}{}^n,\m{E}{}^n)}}{(1+X^2)^{2n}} \;\delta_{E^n\,\m{E}{}^n}\;. \end{align} As usual, we only need to define the decoherence functional for cylinder sets of equal time extent. We see that this is equal to $D_{qc}$, the decoherence functional of the collapse model (\ref{eq:Dcoupled}). \end{proof} The model is technically unitary and so falls into the category of ordinary quantum theory, but the classicality of the environment variables is achieved by the device of postulating an infinite environment and one-time interactions. \section{Discussion} None of the physics we have presented is new. We have merely provided a novel perspective on a known model that arises when spacetime and histories are given a central role. Di\'osi stressed that both classical variables and quantum state are present in a collapse model and advocated ascribing reality to them both \cite{Diosi:2004}. We have replaced the formalism of quantum state with quantum histories, and by placing quantum and classical variables on the same footing in spacetime we can see more clearly the character of the interaction between them. We claim that the structure outlined above for the collapse model for a lattice field theory is generic to collapse models. There is always, more or less hidden in the model, a space of histories which is a product of a space of quantum histories and a space of classical histories, with a decoherence functional on it.
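In the lattice model this tight coupling of classical to quantum histories can be exhibited quantitatively. Summing $D_{qe}$ over the environment histories, and using the sum over $\alpha^n$ evaluated in the appendix, gives
\begin{equation}
\sum_{E^n} D_{qe}(Cyl(\Phi^n,E^n)\,;Cyl(\m{\Phi}{}^n,E^n))
= \left(\frac{2X}{1+X^2}\right)^{m} D_q(Cyl(\Phi^{n})\,; Cyl(\m{\Phi}{}^{n}))\,,
\end{equation}
where $m = d(\Phi^n,\m{\Phi}{}^n)$: interference between field histories is damped by a factor $2X/(1+X^2) \le 1$ for each link on which they differ, with complete decoherence at $X=0$ and none at $X=1$.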
For example, in the case of the GRW model \cite{Ghirardi:1986mt} the classical histories are countable subsets of Galilean spacetime, to the future of some initial surface, $t=0$. The elements of such a countable subset are the ``collapse centres'' $(x_i, t_i), i = 1,2, \dots$. The probability distribution on these classical histories is given by a classical decoherence functional $D_c$, which is, essentially, set out in \cite{Kent:1998bc}. To follow the steps taken in this paper and unravel $D_c$ into $D_{qc}$, the positive operators (Gaussians) that correspond to the classical events are expressed as integrals of projection operators, and the evolution between collapses is expressed using the Dirac-Feynman propagator as a sum over histories. The quantum histories, then, are precisely the histories summed over in the Dirac-Feynman path integral: all continuous real functions $\gamma: [0,\infty) \rightarrow \RR$. The continuum limit of the GRW model is the continuous spontaneous localisation model for a single particle \cite{Diosi:1988a,Diosi:1988b}, and this too can be cast into the generic form, as can be seen from the formulation of the model in terms of a ``restricted propagator'' as described in references \cite{Mensky:1979, Mensky:1994, Diosi:1995}. Although the analysis in these references uses phase space path integrals, if it is the position operator whose eigenstates are collapsed onto, as is the case for the continuum limit of GRW, the path integrals can be transformed into configuration space path integrals. In this case, the quantum histories are again the continuous paths that contribute to the Dirac-Feynman sum-over-histories, but the classical histories are very noisy, and not continuous paths at all. Note that in the lattice field theory the spaces of classical and quantum histories are isomorphic, whereas in the GRW model and its continuum limit the quantum and classical histories are very different.
In all cases, however, it is the quantum histories that bear the full consequence of the dynamical law encoded in a local spacetime action, whereas the classical histories are simply dragged along by being tied to the quantum histories. This state of affairs is illuminated further by considering coupling together two separate collapse models X and Y. Each model will contain both quantum and classical histories, and the coupling between X and Y will be achieved by an appropriate term in the action involving the quantum histories alone. It is the quantum histories of X which directly touch the quantum histories of Y. The classical variables of X react to the classical variables of Y only because they are restricted to be close to the quantum variables which interact with the quantum variables of Y, to which the classical variables of Y must, in their turn, be close. The present authors believe, with Hartle, Sorkin and others, that a spacetime approach to quantum mechanics will be essential to progress in quantum gravity, and for this reason spacetime approaches should be carefully studied. Two important reasons for pursuing collapse models with the Bell ontology are that the models are already in spacetime form and that the stochasticity involved is completely classical, so all the familiar machinery of stochastic processes can be brought to bear: the stochasticity of collapse models causes no more interpretational difficulty than does the randomness of Brownian motion. In this approach, the theory concerns the classical variables only, and the quantum histories are relegated to some sort of auxiliary, hidden status, despite the fact that the dynamics of the model is most easily described in terms of these quantum histories. In order to pursue this direction, therefore, one must pay the price of ignoring the quantum histories as far as the ontology is concerned: ``Pay no attention to that man behind the curtain'' \cite{Oz:1939}.
On the other hand, if the quantum histories are kept in the theory, to be treated on the same footing, a priori, as the classical histories, then the question of the physical meaning of the quantum measure on them has to be wrestled with: what {\it is} the ontology in a quantum measure theory? But if {\textit {this}} thorny problem is to be tackled, then one might start by trying to address it in the case of unitary quantum mechanics. It may be that an interpretation of the quantum measure can be discovered that, by itself, provides a solution to the interpretational problems of quantum mechanics, while yet maintaining unitary dynamics and without need of new quantum-classical couplings. \section{Acknowledgments} We thank Rafael Sorkin for invaluable discussions throughout this work. We thank Isabelle Herbauts for useful discussions. YG is supported by a PPARC studentship. FD is supported in part by the EC Marie Curie Research and Training Network, Random Geometry and Random Matrices MRTN-CT-2004-005616 (ENRAGE) and by Royal Society International Joint Project 2006/R2. \appendix \section{Appendix} \begin{proof}[Proof of Lemma \ref{lemma:decoh}] Recall the definition of $D_{qc}$: \begin{align*} D_{qc}(Cyl(\Phi^n,\alpha^n)\,;\,& Cyl(\m{\Phi}{}^n,\m{\alpha}{}^n)) =\\ {}&D_q(Cyl(\Phi^{n})\,; Cyl(\m{\Phi}{}^{n}))\; \frac{X^{d(\Phi^n,\alpha^n)+ d(\m{\Phi}{}^n,\m{\alpha}{}^n)}}{(1+X^2)^{2n}} \;\delta(\alpha^n,\m{\alpha}{}^n)\;. \end{align*} When the sum is taken over all $\alpha^n$ and $\m{\alpha}{}^n$, field configurations on the first $2n$ links, it results in \begin{align} D_{qc}(Cyl(\Phi^n)\times \Omega_c\,;\,& Cyl(\m{\Phi}{}^n)\times \Omega_c) = \nonumber \\ {}& \frac{1}{(1+X^2)^{2n}}\; D_q(Cyl(\Phi^{n})\,; Cyl(\m{\Phi}{}^{n})) \sum_{\alpha^n} X^{d(\Phi^n,\alpha^n)+ d(\m{\Phi}{}^n,\alpha^n)}\;. \end{align} Let $d(\Phi^n, \m{\Phi}{}^n) = m$, which is the number of links on which the values of the two fields differ.
For the duration of this proof only, we relabel the links on which the two fields differ $l_1,l_2,\dots l_m$ and the rest, on which the fields agree, are labelled $l_{m+1},\dots l_{2n}$. Consider the exponent ${d(\Phi^n,\alpha^n)+ d(\m{\Phi}{}^n,\alpha^n)}$. The first $m$ links contribute $m$ to the exponent whatever $\alpha^n$ is, because for each link, $\alpha^n$ will agree with exactly one of $\Phi^n$ and $\m{\Phi}{}^n$. Therefore \begin{equation} {d(\Phi^n,\alpha^n)+ d(\m{\Phi}{}^n,\alpha^n)} = m + 2\tilde{d}(\alpha^n, \Phi^n)\,, \end{equation} where $\tilde{d}$ is the number of the last $2n -m$ links on which $\alpha^n$ and $\Phi^n$ differ. The sum over $\alpha^n$ can be expressed as a multiple sum over the values of the $\alpha$ variable on each link in turn. We first do the sum over the values on the $m$ links on which $\Phi^n$ and $\m{\Phi}{}^n$ differ. The summand does not depend on the values on those links and so that gives a factor of $2^m$ \begin{equation} \sum_{\alpha^n} X^{d(\Phi^n,\alpha^n)+ d(\m{\Phi}{}^n,\alpha^n)} = 2^m X^m \sum_{\alpha^n_{m+1}} \dots \sum_{\alpha^n_{2n}} X^{2\tilde{d}(\alpha^n, \Phi^n)}\;. \end{equation} The remaining sum is over all $\alpha$ configurations on the last $2n-m$ links. There is one such configuration that agrees with $\Phi^n$ on all $2n-m$ links, $\binom{2n-m}{1}$ configurations that differ from $\Phi^n$ on one link, $\binom{2n-m}{2}$ that differ from $\Phi^n$ on two links, {\textit {etc}}. The remaining sum therefore gives $(1+X^2)^{2n-m}$ and we have \begin{equation} \sum_{\alpha^n} X^{d(\Phi^n,\alpha^n)+ d(\m{\Phi}{}^n,\alpha^n)} = 2^m X^m (1+X^2)^{2n-m}\,, \end{equation} and hence the result. \end{proof} \begin{claim} \begin{equation} \ket{\,\Phi^n, E^n}_{qe} = \frac{X^{d(\Phi^n, E^n)}}{(1+X^2)^n}\; \ket{\Phi^n}_q \ket{E^n_1}_{e_1} \ket{E^n_2}_{e_2}\dots \ket{E^n_{2n}}_{e_{2n}}\otimes_{a=2n+1}^\infty \ket{0}_{e_a} \,, \end{equation} where $\ket{\Phi^n}_q$ is given by (3.1). This is the claim in lemma 3. 
\end{claim} \begin{proof} We use induction. It is trivially true for $n=0$. We assume it is true for $n$. Let $\Phi^{n+1}|_n = \Phi^n$ and $E^{n+1}|_n = E^n$. Then \begin{align} \ket{\,\Phi^{n+1}, E^{n+1}}_{qe} ={}&\; Q_{2n+2}(E^{n+1}_{2n+2})\, P_{2n+2}(\Phi^{n+1}_{2n+2}) \, Q_{2n+1}(E^{n+1}_{2n+1})\, P_{2n+1}(\Phi^{n+1}_{2n+1})\nonumber\\ {}&\ \ \ \ U_{2n+2}\, U_{2n+1}\, R_{n+1}\ket{\Phi^n,E^n}_{qe}\,. \end{align} The $P$ projectors commute with the $Q$ projectors. The $P_{a}$ projectors also commute with the partial measurement operators $U_{a}$ as can be seen from the definition of $U$ (\ref{eq:defU}). So we have \begin{align} \ket{\,\Phi^{n+1}, E^{n+1}}_{qe} = \,&\,\frac{X^{d(\Phi^n, E^n)}}{(1+X^2)^n}\; Q_{2n+2}(E^{n+1}_{2n+2})\, Q_{2n+1}(E^{n+1}_{2n+1})\, U_{2n+2}\, U_{2n+1} \nonumber\\ {}&\ \ \ \ \ \ \ \left[\,P_{2n+2}(\Phi^{n+1}_{2n+2})\,P_{2n+1}(\Phi^{n+1}_{2n+1})\,R_{n+1} \ket{\Phi^n}_q \,\right]\nonumber \\ {}&\ \ \ \ \ \ \ \ \ \ \ \ \ \ket{E^n_1}_{e_1} \dots \ket{E^n_{2n}}_{e_{2n}} \ket{0}_{e_{2n+1}}\ket{0}_{e_{2n+2}} \otimes_{a=2n+3}^\infty \ket{0}_{e_a} \,. \end{align} The factor in square brackets is $\ket{\Phi^{n+1}}_q \in H_q$ and is unchanged by the $U$'s because it is an eigenstate of the field $\Phi$ on the links $l_{2n+1}$ and $l_{2n+2}$. The same factor is also unchanged by the $Q$'s which only act on the environment states. $U_{2n+1}$ turns $\ket{0}_{e_{2n+1}}$ into a linear combination of $\ket{0}_{e_{2n+1}}$ and $\ket{1}_{e_{2n+1}}$, enhancing the term which is correlated to the value $\Phi^{n+1}_{2n+1}$. Similarly for $U_{2n+2}$. Finally $Q_{2n+1}(E^{n+1}_{2n+1})$ projects onto the state $\ket{E^{n+1}_{2n+1}}_{e_{2n+1}}$ and similarly for $Q_{2n+2}(E^{n+1}_{2n+2})$ with the result \begin{equation} \ket{\,\Phi^{n+1}, E^{n+1}}_{qe} = \frac{X^{d(\Phi^n, E^n)}}{(1+X^2)^n}\; \frac{ X^{ 2 - \delta(\Phi^{n+1}_{2n+2}, E^{n+1}_{2n+2})- \delta(\Phi^{n+1}_{2n+1}, E^{n+1}_{2n+1}) }}{(1+X^2)} \, \ket{\Phi^{n+1}}_q \ket{E^{n+1}}_e \,.
\end{equation} The $\delta$'s in the exponent of $X$ are Kronecker deltas and combining the factors of $X$ gives the result. \end{proof} \end{document}
Pullback attractors for stochastic recurrent neural networks with discrete and distributed delays

Meiyu Sui (School of Mathematics and Statistics, Gansu Key Laboratory of Applied Mathematics and Complex Systems, Lanzhou University, Lanzhou 730000, China), Yejuan Wang (corresponding author; same affiliation) and Peter E. Kloeden (Mathematisches Institut, Universität Tübingen, Tübingen 72076, Germany)

Electronic Research Archive, June 2021, 29(2): 2187-2221. doi: 10.3934/era.2020112. Received April 2020; revised August 2020; early access October 2020; published June 2021. This work was supported by NSF of China (Grant Nos. 41875084, 11571153) and the Fundamental Research Funds for the Central Universities (Grant Nos. lzujbky-2018-it58 and lzujbky-2018-ot03).

In this paper, we investigate a class of stochastic recurrent neural networks with discrete and distributed delays that is of both biological and mathematical interest. We do not assume any Lipschitz condition on the nonlinear term, only a continuity assumption together with growth conditions, so that uniqueness for the Cauchy problem fails to hold. Moreover, the existence of pullback attractors, with or without periodicity, is presented for the multi-valued noncompact random dynamical system. In particular, a new method for checking the asymptotic compactness of solutions to this class of nonautonomous stochastic lattice systems with infinite delay is used.

Keywords: pullback attractor, multi-valued dynamical system, stochastic recurrent neural networks, variable delays, unbounded delays. Mathematics Subject Classification: Primary: 34A33; Secondary: 34D45, 34F05.
A handheld platform for target protein detection and quantification using disposable nanopore strips

Trevor J. Morin1, William L. McKenna1, Tyler D. Shropshire1, Dustin A. Wride1, Joshua D. Deschamps1, Xu Liu1, Reto Stamm1, Hongyun Wang (ORCID: 0000-0001-8492-4662)1,2 & William B.
Dunbar1

Scientific Reports, volume 8, Article number: 14834 (2018)

Accessible point-of-care technologies that can provide immunoassay and molecular modalities could dramatically enhance diagnostics, particularly for infectious disease control in low-resource settings. Solid-state nanopores are simple and durable sensors with low-energy instrumentation requirements. While nanopore sensors have demonstrated efficacy for nucleic acid targets, selective detection and quantification of target proteins from sample background has not been demonstrated. We present a simple approach for electronic detection and quantification of target proteins that combines novel biomolecular engineering methods, a portable reader device and disposable nanopore test strips. The target of interest can be varied by swapping the binding domain on our engineered detection reagent, which efficiently binds to the target in the bulk phase and subsequently generates a unique signature when passing through the pore. We show modularity of the detection reagent for two HIV antibodies, TNFα and tetanus toxin as targets. A saliva swab-to-result is demonstrated for clinically relevant HIV antibody levels (0.4–20 mg/liter) in under 60 seconds. While other strip-like assays are qualitative, the presented method is quantitative and sets the stage for simultaneous immunoassay and molecular diagnostic functionality within a single portable platform.

To address disease control needs in resource-limited settings, the WHO Sexually Transmitted Diseases Diagnostics Initiative created the ASSURED criteria1: Affordable, Sensitive, Specific, User-friendly, Rapid and robust, Equipment-free or "untethered"2, and Deliverable to end-users. Although the study focused on molecular diagnosis of curable STDs, the same criteria are relevant for any prospective point-of-care (PoC) technology.
To maximize impact, a technology positioned to achieve the ASSURED criteria should also permit multiplexing and support both molecular assays and immunoassays. We developed a nanopore-based technology that could be a sensing platform upon which the ASSURED criteria can be realized, while supporting a multiplexed and multi-modal menu. Here we demonstrate a novel method for selective nanopore detection of protein targets from sample background, including saliva, serum and plasma3. The target binding domains are attached to DNA "scaffolds" using bioengineering methods that can accommodate a large menu of domain types, including peptides, aptamers, affibodies and antibodies. Thus, binding domains optimized for sensitivity and specificity within any other assay format can be incorporated into our methodology. Prior nanopore studies on protein detection were performed in ideal buffer conditions using unmodified solid-state nanopores4,5,6 or surface-modified pores to enable target specificity7,8. These approaches, however, suffered from poor yield due to a high nanopore failure rate (90% loss in8). Methods similar to ours have been employed for selective target detection9,10,11,12, with only10 addressing quantification, and then through the use of fluorescence polarization data rather than the nanopore data. In traditional diagnostics, the gold-standard laboratory technology is the enzyme-linked immunosorbent assay (ELISA). However, resource-limited settings lack the infrastructure to support standard ELISA protocols, which take hours. Several studies have sought to compress and integrate the ELISA protocol to achieve a PoC technology. By exchanging quantification for qualitative results, minimalist protocols can substantially reduce cost and permit multiplexing with low sample input in 15–20 min13,14, or produce a visual yes/no test with high sensitivity15.
Recapitulating quantification, however, requires more instrumentation and/or human involvement due to the requirement for multiple wash steps16. Single molecule array technologies such as Simoa by Quanterix and other digital ELISA formats may outperform standard ELISA, but remain expensive, lab-confined and time intensive17,18,19. When qualitative results are acceptable and targets are in sufficient supply (>50 μg/liter), lateral flow immunoassays (LFAs) are inexpensive sample-to-answer tests that require no additional equipment and produce results in 10–15 min20,21. While LFAs are not amenable to multiplexing beyond a few test lines, inexpensive paper-based immunoassays are capable of multiplexing22. Since LFAs are often not sufficiently sensitive or specific, they do not "… meet the ideal ASSURED product profile"1. Instrument-fortified LFAs have demonstrated quantification and enhanced sensitivity and specificity. Commercial examples (Lepu Leccurate) and comparable technologies (Alere Triage system, Abbott i-STAT, Abaxis and Gyrolab centrifugal CD systems) were recently reviewed in the context of immunoassays for cardiovascular diseases (CVDs), the leading causes of death worldwide23, but these technologies are lab-confined. Next generation PoC technologies will likely be enabled by novel microfluidics and nanotechnologies. Published results have demonstrated unprecedented multiplexing from reduced sample volumes; however, the instrumentation required is often bulky, complex and expensive. A barcoding chip achieved a multiplexed panel (8–12) of plasma proteins from a finger prick of whole blood in 10 min, but an expensive microarray scanner was needed for quantification24. A plasmon resonance device achieved 8-log range and 0.4 ng/liter limit of detection, but required a multi-light spectral tool or cartridge-mounted movable stage25.
A comprehensive review on microfluidic immunoassay chips that spanned optical, electrochemical and mechanical methods found common challenges26. Maintaining target specificity in the presence of sample matrices was cited as a key challenge for functionalized surface-based sensors. The need for channel flow for mass transport to promote surface binding, common to many protocols, often translates to long (~1 hour) incubation periods25,27, though shorter periods have been achieved (15 min28). Another challenge was a common need for wash steps that, like ELISA, translates to more human interventions or higher instrumentation costs to automate the microfluidics. Closest to nanopores are electrochemical readouts from functionalized nanowires29 and nanoribbons30. While these sensors have demonstrated high sensitivity (low μg/liter), it is unclear if they can be fabricated at sufficiently high yield to keep device costs acceptably low. We achieved target specificity by engineering a molecule that has three functions: (1) to capture the target in bulk phase via a binding domain, (2) to shuttle the target through the pore via a charged polymeric scaffold attached to the binding domain, and (3) to generate a unique electrical signature when it passes through the pore with target bound. Specificity and sensitivity are governed by the affinity of the target binding domain, and the detectability and uniqueness of the electrical signature. Since binding between the target and engineered molecules happens by diffusion in the bulk phase chamber, incubation periods sufficient to achieve equilibrium are shorter than for surface-binding assays that also rely on diffusion31. Including only 30 seconds of incubation, our saliva swab-to-result is demonstrated for 0.4–20 mg/liter HIV antibody in under 60 seconds total. We demonstrate selective detection of two different HIV antibodies (150 kDa), TNFα protein (52 kDa), and tetanus toxin protein (TT, 150 kDa).
Using these data, we present a novel mathematical framework for target quantification.
Test strips for single molecule sensing
The reader device and test strips used in this study (Fig. 1a,b) were designed to support laboratory-based development protocols, while future iterations are being integrated to support sample-to-answer applications. Buffer is pipetted into one channel, reagents and buffer into the other, and the strip is inserted into the reader, which is connected to a laptop by a USB cable. Custom software then drives the reader to supply a voltage and record the ionic current through the pore. When a single charged molecule such as DNA diffuses sufficiently close to the pore, the voltage captures it and drives it through the pore into the opposing chamber (Fig. 1c). The passage "event" is detected by the reader circuitry as a temporary shift in the ionic current, which is quantified by the passage duration and maximum conductance depth, max δG (Fig. 1d). A population of single molecule events is recorded over time (Fig. 1e), earning our instrument the moniker "MOM" for Molecule Occlusion Meter (Supplementary Table S1). Our platform also performs comparably to the Molecular Devices 700B amplifier (Supplementary Table S2, Supplementary Figs S5 and S6), the most commonly used brand in nanopore research. We fabricated our own nanopore chips and verified that they perform comparably to state-of-the-art low-noise solid state nanopore devices32 (Supplementary Fig. S2, Supplementary Methods).
Figure 1. Single-molecule sensing by inserting a disposable strip into a USB-enabled mobile device. (a) The 3D printed strip houses the fluidics and nanopore chip (Supplementary Fig. S1). Replaceable Ag/AgCl electrodes connect the fluidic channels with the circuitry in the reader device. Reagents and buffer are loaded into one channel, with buffer only in the other channel.
(b) The strip is inserted into the handheld device that houses voltage-clamp circuitry, which supplies a voltage across the pore and measures ionic current through the pore. Data transfer is through a USB port for event detection and analysis on a PC or laptop (Supplementary Figs S3 and S4). (c) Schematic of diffusing DNA above the nanopore on the reagent-added side, during (i) an open nanopore period, and (ii) passage of a single DNA through the pore into the voltage-positive side. (d) A representative electrical current event caused by a single dsDNA passing through a nanopore, annotating (i) the open pore periods and (ii) the passage duration. Events are quantified by the passage duration and the maximum conductance depth (max δG = δI/V, V = 0.1 V). (e) An all-event scatter plot of max δG versus duration displays 1049 events of 5.6 kb dsDNA recorded in 5 minutes with a 22 nm diameter nanopore.
Target detection by single molecule event sorting
Selective detection of a target protein is achieved by sorting every recorded event into one of two categories: target positive or target negative. This is in contrast to the event-pattern mining used in9,10,11,12, where a non-trivial fraction of events (up to 85% in9) is trimmed, undermining quantification. These methods also require glass nanopipettes that are difficult to mass manufacture and ruggedize. By contrast, the structurally robust nanopores utilized in this study cover a wide range of diameters, spanning 18–46 nm for the 20 pores used for HIV Ab targets, and are amenable to manufacturing using high volume fabrication tools. Our method is achieved by first creating a detection reagent that generically combines a charged polymer scaffold molecule with a bi-functional fusion molecule. The fusion molecule has one domain for attachment to the scaffold, and a second domain for binding the target. The charged polymer, commonly single or double-stranded DNA (dsDNA), is chosen for facile nanopore capture and detection.
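The event signature described above (a passage duration and a maximum conductance depth, max δG = δI/V) can be extracted from a current trace with a simple threshold detector. The sketch below is our own illustration, not the reader firmware; the thresholds, sample interval and function names are assumptions.

```python
# Sketch of threshold-based event detection on a nanopore current trace
# (illustrative only; thresholds, sample interval and names are assumptions,
# not the authors' implementation).

def detect_events(current_nA, baseline_nA, voltage_V, thresh_frac=0.9, dt_us=4.0):
    """Return (duration_us, max_dG_nS) pairs for dips below thresh_frac*baseline."""
    events, in_event, start, deepest = [], False, 0, 0.0
    threshold = thresh_frac * baseline_nA
    for i, current in enumerate(current_nA):
        if current < threshold and not in_event:
            in_event, start, deepest = True, i, current
        elif in_event:
            if current < deepest:
                deepest = current
            if current >= threshold:  # event ends: current returns to baseline
                duration = (i - start) * dt_us
                max_dG = (baseline_nA - deepest) / voltage_V  # max δG = δI/V, in nS
                events.append((duration, max_dG))
                in_event = False
    return events

# Synthetic trace: 10 nA baseline with one dip (deepest point 7.5 nA) over 3 samples.
trace = [10.0] * 5 + [8.0, 7.5, 8.0] + [10.0] * 5
evts = detect_events(trace, baseline_nA=10.0, voltage_V=0.1)
```

On the synthetic trace, the single dip is reported as one event with a 12 µs duration and max δG = (10 − 7.5)/0.1 = 25 nS, matching the δI/V definition in the Fig. 1 caption.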
We combined a 1074 bp dsDNA scaffold with a bisPNA-peptide as the bi-functional fusion molecule (Methods). The bisPNA was designed to bind centrally to the dsDNA with high stability33. The peptide sequence comprises an epitope within the third variable loop (V3) of the HIV-1 viral envelope glycoprotein34. People infected with genotype clade E HIV are known to generate primarily non-neutralizing antibodies to the V3 loop sequence, making it a good epitope for diagnosis35. Single-molecule sorting is accomplished when the target-bound detection molecule (DNA/PNA-peptide/antibody) creates a unique signature compared to other molecules present in the sample passing through the nanopore. Other molecules include unbound detection reagent (DNA/PNA-peptide), any free sub-components of the detection reagent, free antibody target, molecules from diluted sample matrices when present, or any combinations thereof. Reagents comprising the detection molecule and known concentrations of target, including human monoclonal antibody 447-52D in a first set of experiments, and recombinant monoclonal antibody HGD65 in a second set of experiments, were tested as model targets. In each nanopore experiment, a set of reagents are sequentially measured on the same pore, with flushing of buffer between reagent pairs. Reagent sets here include positive controls, which contain detection reagent and the target at varying concentrations, and negative controls, which exclude the target or some necessary component of the detection reagent. Prior to nanopore experiments, electrophoretic mobility shift assays (EMSAs) verified that full DNA/PNA-peptide/447-52D complex formation proceeded under the incubation conditions employed (Fig. 2b, Methods, Supplementary Fig. S8). Nanopore experiments with positive control reagents revealed a unique population of events with larger max δG attributable to full DNA/PNA-peptide/447-52D complex, consistent with a unique EMSA band (Fig. 2a,b, reagent iv).
Negative controls run on the same nanopore, including detection reagent in the absence of 447-52D and DNA/PNA without peptide but with 447-52D, generate event populations similar to DNA alone (Fig. 2a,b, reagents i–iii). These negative controls do not recapitulate the pronounced fraction of larger max δG events that are observed with positive control reagent sets.
Figure 2. Selective detection of HIV antibody 447-52D using a single nanopore and an engineered detection reagent. (a) Contrasting molecular cartoons and recorded nanopore event signatures for (i) DNA alone, (ii) DNA/PNA-peptide detection reagent, (iii) DNA/PNA (sans peptide) with antibody and (iv) detection reagent bound by antibody. Each reagent (i–iv) was measured sequentially on the same 27 nm diameter pore. In between each reagent, the nanopore was flushed with clean buffer for 5 minutes. The number of events N and recording periods were: (i) N = 332 in 16 min, (ii) N = 434 in 13 min, (iii) N = 353 in 18 min, (iv) N = 441 in 19 min. (b) EMSA and nanopore event plots display consistent trends for (i–iv). Unlabelled lanes were not analyzed on the nanopore in this study. The gel upper band that is indicative of full complex (DNA/PNA-peptide/antibody) matches the pronounced increase in nanopore events with larger max δG values. In particular, while less than 2% of events for (i–iii) are above max δG = 3 nS, the fraction increases to 23% for full complex (iv), a trend used for positive detection with 99% confidence as described in the main text. The variable Q denotes the fraction of events tagged as target positive, with 99% confidence interval \(2.58\sqrt{Q(1-Q)/N}\).
We implemented our previously developed method to determine target positive vs. negative status36 (Supplementary Methods). The first step is to create an event-based criterion with which to sort events as target positive or target negative. Based on the data in Fig. 2, events were target positive if max δG > 3 nS.
Next, each event has random variable Z defined to be 1 when the event is tagged, and 0 if not tagged. While Z has a Bernoulli distribution, the fraction of N events meeting the criteria \(Q={\sum }_{j=1}^{N}\,{Z}_{j}/N\) has a Binomial distribution with 99% confidence interval \({Q}_{\ast }=2.58\sqrt{Q(1-Q)/N}\). We refer to Q as the tagged-fraction of events. Applying the criteria to reagents (i–iii) produces Q ± Q* values (after converting to percentage) of 0.6 ± 1.1%, 0.2 ± 0.6% and 1.1 ± 1.5%, respectively, while reagent (iv) measured 23.1 ± 5.2%. By assigning a false positive threshold on the fractional scale as Qf.p. = 0.02 (2%), the negative control reagents produce negative results while the positive control with both detection reagent and 447-52D present produced a positive result with 99% confidence, since 0.231−0.052 > 0.02. For the data in Fig. 2 and all subsequent nanopore data sets, the test for presence of a target protein is positive with 99% confidence if Q−Q* > Qf.p.; otherwise, the result is negative. The negative controls for each pore or across a set of pores are used to identify the max δG threshold such that Q ≤ Qf.p., keeping Qf.p. = 0.02. All reagents with detection reagent and target present are then tested using Q − Q* > Qf.p. and the same max δG threshold. For each 99% positive result, we report the sensing time-to-positive (TTP), defined as the first time at which Q − Q* > Qf.p. is true and remains true. The TTP for reagent (iv) in Fig. 2 is 1.35 min. Of the seven total reagent sets run on this pore, only the two with full complex present (i.e., the two positive controls) produced positive results (pore P1, Supplementary Table S3, Supplementary Fig. S7). Comparable reagent sets to those shown in Fig. 2 were tested on two other nanopores (P2-P3, Supplementary Table S3, Supplementary Fig. S7).
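The event-sorting statistics above can be sketched directly: compute the tagged fraction Q, its 99% binomial half-width Q*, and apply the positivity rule Q − Q* > Qf.p.. The code is our own transcription of the stated formulas; the example event list is constructed to reproduce the reagent (iv) numbers from Fig. 2.

```python
# Sketch of the event-sorting statistics: tagged fraction Q, 99% confidence
# half-width Q*, and the positivity rule Q - Q* > Q_f.p. (our transcription
# of the formulas in the main text; function names are ours).
import math

def tagged_fraction(max_dG_values, threshold_nS=3.0):
    """Sort events by max δG and return (Q, Q_star) at 99% confidence."""
    N = len(max_dG_values)
    Q = sum(1 for g in max_dG_values if g > threshold_nS) / N
    Q_star = 2.58 * math.sqrt(Q * (1 - Q) / N)
    return Q, Q_star

def is_positive(Q, Q_star, Q_fp=0.02):
    """Positive with 99% confidence if Q - Q* exceeds the false-positive level."""
    return (Q - Q_star) > Q_fp

# Reagent (iv) in Fig. 2: 441 events with ~23.1% tagged (102 above 3 nS).
events = [4.0] * 102 + [1.0] * 339
Q, Q_star = tagged_fraction(events)
```

With these inputs, Q ≈ 0.231 and Q − Q* clears the 2% false-positive level, i.e. a 99%-confidence positive, while a reagent measuring, say, Q = 1.1% with Q* = 1.5% would be called negative, matching the reagent (i–iii) outcomes.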
Nanopore and EMSA assays determined that the peptide located at the bisPNA terminus produced superior target binding performance compared to an interior location (Supplementary Methods). The reagents were also tested using traditional benchtop nanopore setups (P1-P3) and the MOM and test strip setup (P4-P6, Supplementary Table S4) to compare performance. In all cases, the presence of DNA/PNA-peptide/447-52D registered positive with 99% confidence as a distinct population above negative control reagents, even in the presence of diluted serum above the nanopore (1:1000, 1:333, 1:250; Supplementary Fig. S9). One reagent set with a longer scaffold (3250 bp) was also tested for target binding efficiency, which reduced the incubation period to less than 10 seconds and produced a 99% positive with TTP of only 7 seconds (P6, Supplementary Fig. S10). EMSA verified full complex formation of the second HIV Ab target HGD65, followed by seven separate nanopore experiments (pores P7-P13, Supplementary Tables S5 and S12, Supplementary Figs S11–14). Two shorter scaffolds (217 bp, 108 bp) were also tested in four nanopore experiments (pores P14-P17, Supplementary Table S19). All eleven nanopore experiments (P7-P17) resulted in 99% positive target detection for all HGD65 concentrations tested, spanning two logs (0.1–20 nM) above the pore with TTP values from seconds to minutes (Supplementary Tables). Positive detection and efficient TTP were also observed for three nanopore experiments with 1:250 dilution of serum (P18-P20, Supplementary Table S26) and six nanopore experiments with 1:100 dilution of saliva (P21-P26, Supplementary Tables S30 and S37). Two of the experiments (P25, P26) utilized 30 second incubations (10% saliva), achieving sample-to-positive in less than 60 seconds total for 0.266–13.3 nM HGD65 (Supplementary Fig. S26). Concentrations reported here are after a dilution step (pre-dilution concentrations are reported in the referenced Supplementary Tables).
Target quantification from sorted single molecules
We measured a known set of HGD65 concentrations while keeping scaffold/fusion concentration constant. A set of reagents were sequentially measured, with adequate flushing of buffer between each pair of reagents, on a single nanopore to generate data for modeling. A model was then fit to the data, and model-based predictions were tested against the known target concentrations. The variable X is defined as the target-to-scaffold/fusion concentration ratio, and the tagged fraction of events is modeled as Q vs. X for sub-saturating and saturating conditions. We derived a phenomenological model (Supplementary Methods) for the fraction of tagged events Ftag(X): $$\begin{array}{c}{F}_{{\rm{tag}}}(X)=\frac{\alpha {q}_{1}{F}_{{\rm{bound}}}(X)}{1+({q}_{1}-1){F}_{{\rm{bound}}}(X)},\\ \,\,\,\,\,\,{\rm{with}}\,{F}_{{\rm{bound}}}(X)=\frac{{q}_{2}+1+X-\sqrt{{({q}_{2}+1+X)}^{2}-4X}}{2}.\end{array}$$ There are three parameters: α is the probability of tagging an event caused by a scaffold/fusion/target molecule (i.e., the true positive probability); q1 is the ratio of capture rate constants between the scaffold/fusion/target and the unbound scaffold/fusion; and q2 is the ratio of Kd to known total scaffold/fusion concentration. The three parameters (α, q1, q2) are fit to the data using direct simplex search optimization. The Q vs. X data and model fit for pore P15 are shown in Fig. 3b as a representative example.
Figure 3. Measuring and modeling the fraction of tagged events as a function of HGD65 target-to-scaffold/fusion ratio. (a) EMSA assay showed increasing fraction of target-bound scaffold/fusion molecules as HGD65 antibody concentration increases, using a 217 bp/PNA-peptide detection reagent. (b) The data are the tagged fraction of recorded events with the 95% confidence interval \((Q\pm 1.96\sqrt{Q(1-Q)/N})\) at six different target-to-scaffold/fusion ratios X. The fitted three-parameter model Ftag(X) is defined in Eq. (1).
The reagent for each ratio X was measured sequentially on the same pore (P15, Supplementary Table S19), with 5 minutes of event-free buffer-only recording in between reagents. The number of events N and ordered recording periods for each X were: (i) X = 0, N = 475 in 7 min, (ii) X = 2.8, N = 819 in 17 min, (iii) X = 1.4, N = 685 in 14 min, (iv) X = 0.7, N = 615 in 13 min, (v) X = 0.35, N = 516 in 10 min, (vi) X = 3.7, N = 663 in 16 min, (vii) X = 7.4, N = 783 in 14 min. Model fitting error was 2.2% using a normalized root-mean-square (Supplementary Table S20). The Q data for X = (0.35, 0.7, 1.4) are sufficiently below saturation to permit concentration estimation, as described in the main text. The results are estimated (0.53 ± 0.07, 1.4 ± 0.05, 2.8 ± 0.3) nM compared to knowns (0.7, 1.4, 2.8) nM (Supplementary Table S21). Thus, a single pore nominally has a 2-log detection range and 1-log quantitative range. A simplified two-parameter model ignores the difference in capture rate constants between target-bound and unbound scaffold/fusion molecules by setting q1 = 1, resulting in Ftag(X) = αFbound(X). Using a normalized root-mean-square (NRMS) error to quantify model fitting performance, the three-parameter model (NRMS = 2.2%) outperformed the two-parameter model (NRMS = 14%) for P15 in Fig. 3. In fact, the three-parameter model outperformed the two-parameter model across all experiments with quantitative data, including ten buffer-only control experiments (Supplementary Tables S6, S13, S20) and six experiments with saliva background (Supplementary Tables S31, S38). The three- and two-parameter models were also adapted to fit to the capture rate of tagged events (Supplementary Methods), which generally did not perform as well as modeling the Q data. Of related published research9,10,11,12, only10 claimed quantitative results; however, only fluorescence polarization data was modeled by fitting Kd with the two-parameter model Fbound(X).
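The three-parameter model of Eq. (1), and the inversion of it used for concentration estimation, can be transcribed directly. This is our own sketch: the parameter values are illustrative (not fitted values from any pore), and a simple bisection stands in for whatever root-finder one prefers; in the paper the parameters themselves are fit by direct simplex search before inversion.

```python
# Transcription of the phenomenological model Ftag(X) from Eq. (1), plus a
# bisection-based inversion for concentration estimation. Our own sketch;
# the parameter values below are illustrative, not fitted values.
import math

def f_bound(X, q2):
    """Fraction of scaffold/fusion bound by target at target-to-scaffold ratio X."""
    s = q2 + 1 + X
    return (s - math.sqrt(s * s - 4 * X)) / 2

def f_tag(X, alpha, q1, q2):
    """Expected tagged fraction of events (three-parameter model, Eq. (1))."""
    fb = f_bound(X, q2)
    return alpha * q1 * fb / (1 + (q1 - 1) * fb)

def estimate_X(Q, alpha, q1, q2, X_hi=5.0, tol=1e-9):
    """Solve f_tag(X) = Q for X on the monotone sub-saturation branch 0 <= X <= X_hi."""
    lo, hi = 0.0, X_hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f_tag(mid, alpha, q1, q2) < Q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Limiting behaviour: no target -> nothing tagged; at large target excess with
# q2 -> 0, everything is bound, so f_tag approaches alpha. Then a round trip:
# generate Q at a known ratio X = 1.4 and recover X by inversion.
alpha, q1, q2 = 0.8, 2.0, 0.1
low = f_tag(0.0, alpha, q1, q2)
high = f_tag(1000.0, alpha, q1, 0.0)
X_est = estimate_X(f_tag(1.4, alpha, q1, q2), alpha, q1, q2)
```

The limiting checks match the model's interpretation (Ftag(0) = 0; saturation at the true-positive probability α), and the round trip recovers X = 1.4, mirroring the X < 5 sub-saturation requirement stated for estimation.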
For the purpose of estimating Kd between the target and binding domain, the two-parameter model was consistent with ELISA (Supplementary Methods). Specifically, Kd = 0.86 ± 0.58 nM from 5 experiments with 1074 bp scaffold and Kd = 0.24 ± 0.25 nM from 8 experiments with 217 bp scaffold, while duplicate ELISAs run on plates coated with PNA-peptide and peptide alone produced Kd = 0.4 ± 0.2 nM and Kd = 2.2 ± 1.1 nM, respectively (Methods). Concentration estimation is based on inversion of model Eq. (1) by replacing Ftag with Q and solving for X, given the measured data Q and model parameters values (α, q1, q2). Performance of this method and estimation using the two-parameter model are assessed and compared in detail in the Supplementary Methods. Interestingly, single-pore estimation performance is not uniformly distributed within the predictive range, but appears optimal for 0.5 < X < 5 (Supplementary Fig. S23). Model-based estimation using the tagged event rate was also examined (Supplementary Fig. S22).
Demonstrating reagent modularity with TNFα and tetanus toxin
The modularity of our method is showcased by applying it to two other targets, TNFα and tetanus toxin (TT). First, a 309 bp/bisPNA-affibody construct was used as a detection reagent for TNFα (Methods). A sandwich ELISA tested the binding affinity between the affibody and TNFα both without and with the 1 M LiCl nanopore recording buffer present, resulting in a conserved Kd ≈ 0.2 nM. EMSA verified full DNA/PNA-affibody/TNFα complex formation (Fig. 4a). We followed this with triplicate nanopore experiments (P29 in Fig. 4b,c; P30 and P31 in Supplementary Table S52, Supplementary Fig. S28).
Figure 4. Measuring and modeling the fraction of tagged events as a function of TNFα-to-scaffold/fusion ratio, X. (a) EMSA assay shows increasing fraction of target-bound scaffold/fusion molecules as TNFα protein concentration increases, using a 309 bp/PNA-affibody detection reagent (Methods, Supplementary Fig. S27).
(b) All-event scatter plot of max δG versus duration for reagents with varying X. Each reagent was measured sequentially on the same 25–30 nm diameter pore (P29, Supplementary Table S52), with buffer-only flushing and recording between reagent pairs. The number of events and recording periods for each reagent are: (X = 0) N = 518 in 10 min; (X = 12.5) N = 1222 in 10 min; (X = 6) N = 1336 in 10 min; (X = 3) N = 1565 in 10 min; (X = 1) N = 819 in 10 min; (X = 0.5) N = 350 in 4 min; and (X = 0.25) N = 690 in 10 min. Using the X = 0 reagent to establish the tagging criterion max δG > 3.56 nS (dashed line), all X > 0 values were positive with 99% confidence above a false positive of 2%, with the exception of X = 0.25 which was not positive. The time to positive (TTP) values were: X = 12.5 in 6 sec; X = 6 in 5 sec; X = 3 in 10 sec; X = 1 in 41 sec; and X = 0.5 in 2.3 min. (c) The positively detected data in (b) were subsequently modeled using the three-parameter model Ftag(X), resulting in parameter values (α, q1, q2) = (0.35, 0.17, 0.11) and NRMS fitting error of 3.8% (Supplementary Table S53).
Results with HGD65 antibody target show that saturation approaches 100% bound detection reagent (Fig. 3), but both EMSA and nanopore results with TNFα in Fig. 4 suggest that a conserved fraction of detection reagent is not target bound. Further optimization in detection reagent construction will address this. Across three nanopore experiments, five TNFα concentrations within the 1.5–37.5 nM range (78–1950 μg/liter) resulted in 99% positive detection with TTP values from 4 seconds to 5 minutes, while 0.75 nM was not positively detected. The model fit produced Kd ≈ 0.2 nM consistent with the ELISA results. Combined target estimates of (1.7 ± 0.02, 2.9 ± 0.1, 9.4 ± 0.9) nM for known concentrations (1.5, 3, 9) nM resulted in a mean coefficient of variation of 9% and a mean percent error of 7% (Supplementary Tables S53–S55, Supplementary Fig. S29).
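The two performance metrics quoted above, coefficient of variation (precision) and percent error (trueness), follow from the replicate estimates in a standard way. The helper below is our own code and the replicate values are illustrative, not the tabulated results.

```python
# Helper for the precision/trueness metrics quoted above: coefficient of
# variation (CV) and signed percent error (PE) of replicate concentration
# estimates against a known value (our own code; replicates illustrative).
import math

def combine(estimates_nM, known_nM):
    """Return (mean, CV%, signed PE%) for replicate estimates of one known concentration."""
    n = len(estimates_nM)
    mean = sum(estimates_nM) / n
    sd = math.sqrt(sum((e - mean) ** 2 for e in estimates_nM) / (n - 1))
    cv = 100.0 * sd / mean                      # precision, %
    pe = 100.0 * (mean - known_nM) / known_nM   # trueness (signed), %
    return mean, cv, pe

# Three hypothetical single-pore estimates of a known 1.4 nM target.
mean, cv, pe = combine([1.3, 1.5, 1.4], known_nM=1.4)
```

For these illustrative replicates the mean recovers the known value (PE ≈ 0%) with CV ≈ 7%; in the paper, the same two quantities are averaged over the known concentrations to give the reported mean CV and mean |PE|.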
TNFα, a cell signaling protein involved in systemic inflammation, is considered elevated above 2 ng/liter37, a level 4-log lower than demonstrated here. A post-incubation scaffold/fusion/target concentration step upstream of nanopore sensing is under development to reach the needed detection sensitivity. For the final protein target, the detection reagent was engineered to incorporate a full-length antibody as the binding domain for tetanus toxin (TT) (Methods). The scaffold-to-antibody coupling method can be applied to any available antibody or antibody fragment since it utilizes lysines endogenous to the binding domain. A biotin moiety was also added to the scaffold end opposite the binding domain, permitting coupling of the detection reagent to streptavidin-coated magnetic beads. We demonstrate: (1) spiking TT into plasma, (2) capturing TT with the detection reagent bound to streptavidin beads, (3) washing out plasma background, and (4) cleaving the complex off the bead with a simple restriction enzyme digest of the DNA scaffold. Post cleavage, the TT-bound complexes were verified by EMSA (Supplementary Fig. S30). In replicate experiments, TT was positively detected at 4.5 and 12 nM above the pore in buffer only (P32–P34, Supplementary Table S56, Supplementary Fig. S31) and at 360 nM TT following incubation in 10% and 20% plasma (P35 and P36, Supplementary Table S57, Supplementary Fig. S32). As an alternative to restriction enzyme digestion, we tested release of detection reagent from the bead using UV light (15 sec). The UV flash cleaved a photocleavable linker incorporated between the scaffold and biotin for this purpose (Supplementary Fig. S30). A key design element for future ASSURED technologies will be exploiting microfluidics for efficient integrated preparation of varying sample input types. For example, a microfluidic "purification" chip demonstrated capture-wash-release of cancer biomarkers from physiological solutions upstream of integrated sensors30.
The power consumption using the UV light at 15 sec was 21 mWh. Future test strip designs could integrate an inexpensive and fast UV-based capture-wash-release protocol upstream of the nanopore sensors.
Combining estimates to test quantification enhancement
Future test strips will incorporate multiple nanopores, and we anticipate that aggregate quantitative estimates from a multi-pore strip will have superior performance to that of single nanopore estimates. To test this, we averaged the HGD65 antibody target estimates from separate single nanopore experiments at common concentrations, and examined the precision by computing the coefficient of variation (CV) and trueness by computing the percent error (PE). Such combined estimates capture our experimental reproducibility due to the following conditions: (1) five different people performed experiments, each with a unique device; (2) each experiment used a new test strip and nanopore chip; (3) the nanopore size range varied across all 15 quantitative experiments; and (4) each experiment was run on a unique date and with a unique set of prepared reagents, with the exception of P14 and P15 (common date and reagents). Combined single-pore estimates were generated at eleven different known target concentrations, with the estimates vs. known values reported (Table 1, Fig. 5). At each known target concentration, a combined estimate was generated by taking the average and standard deviation of values from at least two separate single-nanopore experiments. Across all eleven target concentrations, the combined estimates produce a mean CV = 21% and mean |PE| = 11%.
Table 1 Combining HGD65 concentration estimates from separate experiments.
Figure 5. Combining single-nanopore estimates of HGD65 target concentration to gauge accuracy and precision. The combined estimate data plotted at each known target concentration is the average and standard deviation of at least two estimates generated from separate single-nanopore experiments (data in Table 1).
The concentrations are above the nanopore after dilution. Single-pore estimates were combined where a common scaffold/fusion reagent was used (7 pores P7-P13, and 2 pores P14-P15) or by the presence of 1% saliva background above the pore (6 pores P21-P26). The R2 value assesses how close the average estimates are to the zero-error line, with the three-parameter model (R2 = 0.985, Supplementary Table S45) outperforming the two-parameter model (R2 = 0.953, Supplementary Table S46 and Fig. S24).
We note that the methods and materials presented here support laboratory-based development, but not sample-to-answer quantification, since a set of controls are required prior to measuring unknowns (this is discussed further below). FDA guidance recommends 15% PE and CV calculated using a minimum of 5 replicate estimates per known concentration38. Our approach will likely meet these performance targets, since a multi-pore strip will eliminate conditions 1 and 4 and therefore should perform better than the combined estimates reported here. The one experiment pair that eliminated condition 4 (pores P14-P15) produced mean CV = 8% and mean |PE| = 4% (Supplementary Table S45), well below the 15% requirement. Both qualitative and quantitative ranges with HGD65 as a model antibody are reported as the target concentrations at incubation to reflect the ranges that would be possible for sample-to-answer protocols without an upstream concentration step during sample preparation (Table 2). The reported detection range is across all single-pore results, while the quantitative range only includes target concentrations for which there were replicate estimates for averaging.
Table 2 Summary of HGD65 detection and quantification performance.
The qualitative range overlaps with the Ab range found in HIV-positive specimens.
Specifically, three single-pore experiments (P23, P25, P26) spanned 2.66–133 nM (0.4–20 mg/liter) HGD65 at incubation in 10% saliva, while the equivalent 4–200 mg/liter for neat saliva substantially overlaps with the 0.5–100+ mg/liter IgG range present in HIV positive salivary specimens39,40. An ELISA-based approach achieved 10 mg/liter detection from whole blood, in contrast to the 4 mg/liter lower limit for saliva demonstrated here14. Additionally, the lowest concentration detected above the pore after dilution was 0.1 nM (15 μg/liter) HGD65 in buffer only (P7, P8) and 0.266 nM (40 μg/liter) in 1% saliva (P21-P23, P25, P26; Fig. 5, Supplementary Methods). Here we demonstrate a 2-log qualitative range for detection of target proteins on a single pore. A multi-pore strip using parallelized chambers would increase the range by using different scaffold/fusion molecules at different concentrations in each chamber. The scaffold lengths of 3250 bp (P6), 1074 bp (P13), 217 bp (P15) and 108 bp (P17) in our study produced capture rate constants of 390, 250, 10 and 0.8 (min·nM)−1, respectively (Supplementary Tables). Thus, a multi-chamber strip could span 3-log in detection reagent concentration, e.g., from 0.1 nM 3250 bp/fusion to 100 nM 108 bp/fusion, while generating approximately equivalent event rates across the pores. Combined with the 2-log range per pore, this would result in a qualitative range of at least 5-log. The qualitative range can also be expanded by varying the voltage and buffer composition. The predictive range per nanopore can be addressed in part by adding parallel chambers and nanopores and employing a larger detection reagent concentration range, as proposed above. The primary limitation is the need for at least three controls to fit the model parameters prior to measuring the unknown concentration. An automated approach would thus require multiple flushing and measurement stages, adding time and device complexity.
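The claim of approximately equivalent event rates across the proposed multi-chamber strip follows from the capture rate constants quoted above, since the expected event rate is the rate constant times the reagent concentration. The arithmetic check below is our own; the dictionary keys and variable names are illustrative.

```python
# Order-of-magnitude check (our own arithmetic) that the proposed chamber
# concentrations balance event rates across pores, using the capture rate
# constants reported above: rate [events/min] = k [(min*nM)^-1] * c [nM].
k_capture = {"3250bp": 390.0, "1074bp": 250.0, "217bp": 10.0, "108bp": 0.8}

rate_long = k_capture["3250bp"] * 0.1    # 0.1 nM of the long-scaffold reagent
rate_short = k_capture["108bp"] * 100.0  # 100 nM of the short-scaffold reagent
ratio = rate_short / rate_long           # same order of magnitude
```

The two proposed endpoint chambers yield roughly 39 and 80 events/min, i.e. the same order of magnitude, which is what makes the 3-log spread in detection reagent concentration workable on one strip.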
Potentially, if the target and detection reagent are sufficiently well characterized, a table-lookup of one or more of the model parameters would expedite this process. We also developed a novel two-stage method that streamlines the quantitative protocol (Supplementary Methods). Nanopore data is recorded in two sequential stages: (I) known detection reagent with the unknown target amount, followed by (II) the contents of (I) with a known target-to-scaffold/fusion ratio spiked into the chamber. The method is based on the two-parameter model but eliminates the need for model fitting and requires only the single control (II) run after the unknown. As before, the stage (I) data must be below saturation (X < 5). As representative examples, the known target concentrations (0.665, 1.33) nM from the saliva sample group (P21-P26) had combined two-stage estimates with mean CV = 27% and mean |PE| = 8% (Supplementary Table S49). This is comparable in performance to the three-parameter model combined estimates (spiked saliva in Table 2; red diamonds, Fig. 5), which is a promising result considering only a single control is required for quantification using the two-stage method. The more complicated a laboratory procedure and instrumentation, the more difficult and costly it is to develop an easy-to-use and inexpensive version for a resource-limited site2. Future development will focus on engineering a sample-to-answer nanopore-based solution that can be successfully deployed in low-resource settings13,41.
HIV Ab assays
The 1074 bp and other length dsDNA fragments were generated by PCR amplification from linearized plasmid template and purified using the Viogene gel/PCR Isolation System (Viogene cat. #GP1002). All scaffold lengths for the HIV antibody target experiments contain a centrally located sequence (AGGGGAGGAAAA) complementary to a base-pairing strand within bisPNA.
The bisPNA molecule consists of two halves separated by a flexible PEG linker: the first half binds to the cognate site using Watson-Crick base pairing, and the second half binds via Hoogsteen face pairing42. Two terminal lysines are included on each end to enhance complex stability with DNA33. A cysteine residue was incorporated into either the flexor region (PNA0) or at the Hoogsteen face pairing end (PNA1) to provide a chemical handle for covalent coupling of the maleimide-containing peptide to the PNA prior to hybridizing with dsDNA. The two bisPNA designs examined had the following sequence and structure (N-term left, to C-term right): $$\begin{array}{c}{{\rm{PNA}}}^{0}:\mathrm{KK} \mbox{-} O \mbox{-} \mathrm{TCCCCTCCTTTT} \mbox{-} O \mbox{-} \mathrm{Cys} \mbox{-} O \mbox{-} \mathrm{TTTTJJTJJJJT} \mbox{-} O \mbox{-} \mathrm{KK}\\ {{\rm{PNA}}}^{1}:\mathrm{KK} \mbox{-} O \mbox{-} \mathrm{TCCCCTCCTTTT} \mbox{-} \mathrm{OOO} \mbox{-} \mathrm{TTTTJJTJJJJT} \mbox{-} O \mbox{-} \mathrm{KK} \mbox{-} \mathrm{OO} \mbox{-} \mathrm{Cys}\end{array}$$ where K is the amino acid lysine, O represents a 2-aminoethoxy-2-ethoxy acetic acid linker, and Cys is a cysteine. To form the bisPNA-peptide, the PNA is first incubated with TCEP to reduce any intermolecular disulfide bonds, and then reacted with a 10- to 50-fold excess of peptide. The end-terminated peptide sequence KSIHIGPGRAFYTT (JPT, Germany) is the highly immunogenic V3 loop of the HIV envelope glycoprotein. The PNA0-peptide used a PEG-terminated peptide sequence, PEG5-peptide, creating the two PNA strands flanking the peptide strand. The PNA1-peptide used Ttds-peptide (PNABio), with the peptide at the C-terminal end of the PNA. The PNA-peptide conjugate was then purified by reverse phase HPLC and confirmed by electrospray ionization mass spectrometry (PNABio).
The purified bisPNA-peptide conjugate was allowed to incubate with dsDNA fragments of stated lengths at a 60-fold molar excess for 2 hours at 60 °C to form a dsDNA/PNA-peptide complex (10 mM sodium phosphate, pH 7.4). Excess bisPNA-peptide was cleaned up by centrifugation in a 100 kDa filter (EMD Millipore, UFC570024). Successful invasion of the bisPNA-peptide onto dsDNA of different lengths was verified via electrophoretic mobility shift assays (EMSA) on a 6% TBE gel (ThermoFisher, EC62652) at 100 V for 20 minutes, then 150 V for 80 minutes (Supplementary Figs S8, S12 and S13). The anti-V3 antibodies 447D (Los Alamos National Labs, USA) and HGD65 (cat. # Ab00270, Absolute Antibody Limited, UK) were used as model antibodies. ELISAs were performed to test affinity between the antibodies and the PNA-peptide and peptide alone (Supplementary Secs. 10.1.2 and 12.3). In nanopore experiments, antibody and detection reagents were mixed at molar ratios ranging from 0.1:1 to 20:1, and then diluted into recording buffer (1 M LiCl, 10 mM Tris-HCl, pH 8.8, 1 mM EDTA) to the reported concentrations. Tabulated reagents and detection results are reported as: 447D buffer only (Supplementary Table S3); 447D serum background (Supplementary Table S4); HGD65 buffer only (Supplementary Tables S5, S12, S19 and S50); HGD65 serum background (Supplementary Table S26); HGD65 saliva background (Supplementary Tables S30, S37). Incubation periods varied from 10 seconds to 15 minutes with buffer only. In experiments with serum, detection reagent was incubated in 20% sample without Ab for 5 min, and spiked with Ab for 15 min or 5 min (where stated), prior to dilution into nanopore buffer. Saliva samples were not tested with 447-52D. In experiments with saliva background, HGD65 and detection reagent (217 bp DNA/PNA-peptide, 13.3 nM) were mixed at the stated molar ratios in 100 μL of 1 M LiCl, 10 mM Tris-HCl (pH 8.8), 1 mM EDTA, and 10% donor saliva.
Samples were incubated on the bench top at room temperature for 5 minutes or 30 seconds, then 100 μL samples were diluted 10X to 1 mL with 1 M LiCl, 10 mM Tris-HCl, and 1 mM EDTA, maintaining the antibody:reagent molar ratios indicated above, with final concentrations of detection reagent and saliva in all samples being 1.33 nM and 1%, respectively. Saliva was collected from an HIV-negative donor using a 1 mL plastic transfer pipette (Kinglake #10751) and stored in a 1.5 mL Eppendorf tube on ice. Saliva was warmed to room temperature and vortexed to resuspend settled particles before use.

TNFα assay

The affibody against human TNFα (Affibody AB, Catalog No. 10.084101.0050) contains a C-terminal cysteine with a sulfhydryl group used to conjugate to the N-terminus of a bisPNA via an intermediate succinimidyl 4-(N-maleimidomethyl)cyclohexane-1-carboxylate (SMCC) linker. The bisPNA (PNABio) has the sequence detailed in Supplementary Information Sec. 13.1.1. To form a dsDNA-PNA-affibody complex, the purified affibody-bisPNA conjugate was allowed to incubate with a 309 bp dsDNA fragment from the aforementioned template DNA, containing the complementary sequence to the Watson-Crick strand of the bisPNA, at a 60-fold molar excess for 2 hours at 60 °C (10 mM sodium phosphate, pH 7.4). Excess bisPNA-affibody was cleaned up by centrifugation in a 100 kDa filter (EMD Millipore, UFC570024). Successful invasion of the bisPNA-affibody onto dsDNA was verified via EMSA on a 6% TBE gel (ThermoFisher, EC62652) at 200 V for 40 minutes (Supplementary Fig. S27). Following purification of the binding reagent, it was allowed to incubate with a titration series of human TNFα (ThermoFisher, PHC3016) for a period of 1 hour at room temperature (10 mM sodium phosphate, pH 7.4). Confirmation of TNFα binding was conducted on a similarly run TBE gel.
Following the incubation period between 50 nM scaffold/fusion and each TNFα concentration, the reagents were diluted 16.7-fold into nanopore buffer for experimentation. To compare Kd quantification values and salt tolerance between the affibody and TNFα, a sandwich ELISA was performed and compared to nanopore results (Supplementary Information Sec. 13.1.2). Tabulated reagents and nanopore detection results are reported in Supplementary Table S52.

Tetanus toxin assay

A 712 bp DNA (biotin-712-thiol) was made in-house using PCR from the aforementioned template DNA. The forward primer was modified to incorporate biotin on one end of the DNA and the reverse primer was modified to incorporate a thiol on the other. The thiol-modified end of the DNA was utilized to attach it to an anti-Tetanus toxin IgG1 antibody (α-TTAb) utilizing SMCC/NHS-ester chemistry. The biotin-modified end of the DNA was utilized to attach the DNA-antibody conjugate to Streptavidin-coated magnetic beads. A restriction enzyme digest site was engineered at the biotin-conjugated end of the DNA to allow for cleavage of the DNA-antibody conjugate from the streptavidin beads after Tetanus toxin (TT) (List Biological Laboratories, Inc. Cat #190B) binding in plasma. The DNA-Ab conjugate was prepared by first reacting the α-TTAb with sulfo-SMCC (sulfosuccinimidyl 4-(N-maleimidomethyl)cyclohexane-1-carboxylate; Thermo Fisher, Cat #22322) in order to install a maleimide. Briefly, the α-TTAb was diluted to 100 μM in 30 μl of 10 mM sodium phosphate pH 7.4 (NaPi) followed by the addition of 0.5 μl of 70 mM sulfo-SMCC (in DMSO) to achieve a 10:1 molar ratio of SMCC to α-TTAb. The reaction was incubated at room temperature for 1 h. The α-TTAb-maleimide adduct was then purified using cation exchange liquid chromatography and concentrated to 50–100 μM using a 30 kDa molecular weight cutoff spin column. The purified α-TTAb-maleimide was then reacted with biotin-712-thiol.
Biotin-712-thiol was diluted in 10 mM NaPi, followed by the addition of α-TTAb-maleimide at a final concentration of 5–10 μM (200:1 α-TTAb-maleimide to DNA). The reaction was incubated at room temperature for 1–2 h. Streptavidin coated magnetic beads (Invitrogen Cat # 65001) were washed 3X with magnetic bead wash buffer following the protocol provided with the beads. 5 μg of the biotinylated DNA-antibody conjugate were incubated with 1 mg of the washed beads overnight at 4 °C to ensure complete conjugation to the beads. The beads were washed 3X with 100 μl NaPi, and divided up into the appropriate volume to yield aliquots containing 280 ng of DNA-antibody conjugate bound to beads. The 280 ng aliquots were then used for all TT nanopore experiments without and with spiked plasma. TT studies with plasma were carried out using 20 nM 712 bp DNA or DNA-antibody conjugate (280 ng) on bead with a final concentration of 600 nM TT in NaPi and pH 7.4. Absent plasma, TT studies were carried out using 50 nM DNA-antibody conjugate with a final concentration of 75 and 200 nM TT. Control sample sets included 712 bp DNA plus TT, DNA-antibody conjugate without TT and DNA-antibody conjugate with TT. Where stated, the control samples were spiked into 10 or 20% pooled human plasma. All samples were incubated at room temperature for 45 minutes. After incubation the samples were washed 1X with 100 μL NaPi, removing plasma background. Following the wash, samples were cleaved off the bead by restriction enzyme digest (New England BioLabs Cat #R3193L) yielding DNA with a length of 594 bp. Post cleavage, reagents were quality checked using EMSA (Supplementary Fig. S30) to confirm full complex formation (10% polyacrylamide, 120 mV in 1x TBE for 45 min). The cleaved samples were diluted into lithium chloride buffer for nanopore analysis at the final concentrations reported in Supplementary Tables S56 and S57. 
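The nominal 10:1 SMCC:antibody stoichiometry in the conjugation step above can be checked from the stated concentrations and volumes. Illustrative arithmetic only (the exact quotient from the stated values is about 11.7:1, consistent with the nominal ten-fold excess):

```python
# Quick molar-ratio check for the sulfo-SMCC coupling step, using the
# volumes and concentrations quoted in the protocol above (illustrative only).

def nanomoles(conc_molar, volume_liters):
    """Amount of substance in nanomoles from molar concentration and volume."""
    return conc_molar * volume_liters * 1e9

antibody = nanomoles(100e-6, 30e-6)  # 100 uM alpha-TTAb in 30 uL -> 3.0 nmol
smcc = nanomoles(70e-3, 0.5e-6)      # 70 mM sulfo-SMCC in 0.5 uL -> 35.0 nmol
print(round(smcc / antibody, 1))     # 11.7, roughly the nominal 10:1 excess
```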
Nanopore experiments

Nanopore chips were fabricated, and pore diameters characterized, at the Stanford Nanofabrication Facility and Nano Shared Facilities, following the process flow and methods described in the Supplementary Information. Experiments were conducted at 23 °C in 10 mM Tris with 1 mM EDTA, at pH 8.8, and 1 M LiCl, with either a custom flow cell or the test strip. With commercially available equipment, a voltage-clamp amplifier (MultiClamp 700B, Molecular Devices, Sunnyvale, CA) was used to apply transmembrane voltage and measure ionic current, with the 4-pole Bessel filter set at 30 kHz. A digitizer (Digidata 1440A, Molecular Devices) stored data sampled at 250 kHz. Equivalent hardware is comprised within the handheld Molecule Occlusion Meter (MOM) (30 kHz bandwidth, 100 kHz sample rate, Supplementary Methods). Each reagent was added at the reported concentration into the voltage-negative chamber during nanopore experiments. The 3D-printed test strips (Formlabs Form 2 printer, Clear resin, FLGPCL02, Supplementary Methods) comprised 50 μL channels, with the reagent-loaded channel flushed with 200 μL buffer in between reagents. Channel overflow was removed via Kimwipe (KIMTECH). Buffer alone was recorded for 5 minutes. If the buffer-only event rate exceeded 2 min−1, which was uncommon, the channels were re-flushed and the buffer-only recording restarted to ensure sufficiently clear channels between reagents. All numerical analysis and data processing was done offline using custom code written in Matlab (2017, The MathWorks). Recording with the 700B/Digidata setup used the AxoScope software (Molecular Devices). Real-time event detection was done on the MOM. Plotting and recording of data was done with custom software termed the Pore Operating Program (POP, Python).
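A minimal sketch of this style of threshold-based event detection and capture-rate fitting (hypothetical Python, not the authors' Matlab/POP code; the 6σ trigger, 1σ event boundaries, and the exponential fit to open-channel durations follow the description in this section):

```python
import numpy as np

def flag_events(signal, baseline, sigma, trigger=6.0, boundary=1.0):
    """Flag candidate events: an event is triggered when the current drops more
    than trigger*sigma below the open-channel baseline, and extends outward to
    the nearest samples back within boundary*sigma of baseline.
    Returns inclusive (start, end) sample indices."""
    deep = (baseline - signal) > trigger * sigma      # 6-sigma trigger
    shallow = (baseline - signal) > boundary * sigma  # 1-sigma event boundary
    events, i, n = [], 0, len(signal)
    while i < n:
        if deep[i]:
            start = i
            while start > 0 and shallow[start - 1]:
                start -= 1
            end = i
            while end + 1 < n and shallow[end + 1]:
                end += 1
            events.append((start, end))
            i = end + 1
        else:
            i += 1
    return events

def capture_rate(open_durations):
    """Capture rate as the maximum-likelihood exponential rate, i.e. the
    inverse mean of the open-channel waiting times between kept events."""
    return 1.0 / float(np.mean(open_durations))
```

In the paper's pipeline each event is additionally padded by 200 samples on each side, and events that do not return to within 1σ or with excess adjacent noise are rejected; those details are omitted here for brevity.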
Events were flagged and extracted if any sample fell below 6 times the standard deviation (σ) of the open channel signal, with σ computed using 500 event-free samples and recomputed once for every 500,000-sample window. Each extracted event contains: (i) all samples adjacent to the sample(s) below 6σ, up to the first samples below 1σ, and (ii) an additional 200 samples prior to and after each event. Extracted events were rejected from subsequent analysis if they did not return to within 1σ (e.g., by truncation during data recording), or if the noise exceeded 3σ in the adjacent (open channel) samples. The open channel conductance values were used to track the evolution of the estimated nanopore size36. The reported event duration is the time-width at half-maximum depth. Each open channel duration is the time between consecutive kept events, and these durations were used to compute the capture rate by fitting an exponential distribution to the data. The same fitting method was applied to the subset of tagged events to quantify the tagged capture rate, where reported and modeled. The mathematical methods for assigning a positive result for target detection, the corresponding time-to-positive, and for quantification of target concentration are detailed in the Supplementary Information.

All relevant data are within the paper and its Supplementary Information files. Data for the main figures and for Supplementary Information tables and figures are available at https://doi.org/10.6084/m9.figshare.7131695.

Kettler, H., White, K. & Hawkes, S. Mapping the landscape of diagnostics for sexually transmitted infections. Tech. Rep., World Health Organization Sexually Transmitted Diseases Diagnostics Initiative (2004). Urdea, M. et al. Requirements for high impact diagnostics in the developing world. Nature 444(1), 73–79 (2006). Morin, T. J., Heller, D. A. & Dunbar, W. B. Target detection with nanopore, US 15/160,697. US Patent Office (2016). Plesa, C. et al.
Fast translocation of proteins through solid state nanopores. Nano Letters 13, 658–663 (2013). Larkin, J., Henley, R. Y., Muthukumar, M., Rosenstein, J. K. & Wanunu, M. High-Bandwidth Protein Analysis Using Solid-State Nanopores. Biophys J 106, 696–704 (2014). Waduge, P. et al. Nanopore-Based Measurements of Protein Size, Fluctuations, and Conformational Changes. ACS Nano 11, 5706–5716 (2017). Lepoitevin, M., Bechelany, M., Balanzat, E., Janot, J.-M. & Balme, S. Non-Fluorescence label protein sensing with track-etched nanopore decorated by avidin/biotin system. Electrochimica Acta 211, 611–618 (2016). Yusko, E. C. et al. Real-time shape approximation and fingerprinting of single proteins using a nanopore. Nature Nanotechnology 12, 360–367 (2017). Bell, N. A. W. & Keyser, U. F. Digitally encoded DNA nanostructures for multiplexed, single-molecule protein sensing with nanopores. Nature Nanotechnology 11, 645–651 (2016). Kong, J., Bell, N. A. W. & Keyser, U. F. Quantifying Nanomolar Protein Concentrations Using Designed DNA Carriers and Solid-State Nanopores. Nano Letters 16, 3557–3562 (2016). Bell, N. A. W. & Keyser, U. F. Specific protein detection using designed DNA carriers and nanopores. Journal of the American Chemical Society 137, 2035–2041 (2015). Sze, J. Y. Y., Ivanov, A. P., Cass, A. E. G. & Edel, J. B. Single molecule multiplexed nanopore protein screening in human serum using aptamer modified DNA carriers. Nature Communications 8, 209 (2017). Chin, C. D. et al. Microfluidics-based diagnostics of infectious diseases in the developing world. Nature Medicine 17, 1015–1019 (2011). Laksanasopin, T. et al. A smartphone dongle for diagnosis of infectious diseases at the point of care. Science Translational Medicine 7, 273re1 (2015). de la Rica, R. & Stevens, M. M. Plasmonic ELISA for the ultrasensitive detection of disease biomarkers with the naked eye. Nature Nanotechnology 7, 821–824 (2012). Song, Y. et al.
Integrative volumetric bar-chart chip for rapid and quantitative point-of-care detection of myocardial infarction biomarkers. Lab on a Chip 16, 2955–2962 (2016). Rissin, D. M. et al. Single-molecule enzyme-linked immunosorbent assay detects serum proteins at subfemtomolar concentrations. Nature Biotechnology 28, 595–599 (2010). Rissin, D. M. et al. Multiplexed single molecule immunoassays. Lab on a Chip 13, 2902–10 (2013). Piraino, F., Volpetti, F., Watson, C. & Maerkl, S. J. A Digital-Analog Microfluidic Platform for Patient-Centric Multiplexed Biomarker Diagnostics of Ultralow Volume Samples. ACS Nano 10, 1699–1710 (2016). Ngom, B., Guo, Y., Wang, X. & Bi, D. Development and application of lateral flow test strip technology for detection of infectious agents and chemical contaminants: a review. Analytical and Bioanalytical Chemistry 397, 1113–1135 (2010). Hedström, J. et al. Urinary trypsinogen-2 test strip for acute pancreatitis. The Lancet 347, 729–730 (1996). Pollock, N. R. et al. A paper-based multiplexed transaminase test for low-cost, point-of-care liver function testing. Science Translational Medicine 4, 152ra129 (2012). Hu, J. et al. Portable microfluidic and smartphone-based devices for monitoring of cardiovascular diseases at the point of care. Biotechnology Advances 34, 305–320 (2016). Fan, R. et al. Integrated barcode chips for rapid, multiplexed analysis of proteins in microliter quantities of blood. Nature Biotechnology 26, 1373–1378 (2008). Inci, F. et al. Multitarget, quantitative nanoplasmonic electrical field-enhanced resonating device (NE2RD) for diagnostics. Proceedings of the National Academy of Sciences 112, E4354–E4363 (2015). Han, K. N., Li, C. A. & Seong, G. H. Microfluidic Chips for Immunoassays. Annual Review of Analytical Chemistry 6, 119–141 (2013). Chiriacò, M. S. et al. EIS microfluidic chips for flow immunoassay and ultrasensitive cholera toxin detection. Lab on a Chip 11, 658–663 (2011). Kling, A. et al.
Multianalyte Antibiotic Detection on an Electrochemical Microfluidic Platform. Analytical Chemistry 88, 10036–10043 (2016). Chang, H.-K. et al. Rapid, Label-Free, Electrical Whole Blood Bioassay Based on Nanobiosensor Systems. ACS Nano 5, 9883–9891 (2011). Stern, E. et al. Label-free biomarker detection from whole blood. Nature Nanotechnology 5, 138–142 (2010). Sheehan, P. E. & Whitman, L. J. Detection Limits for Nanoscale Biosensors. Nano Letters 5, 803–807 (2005). Lee, M.-H. et al. A Low-Noise Solid-State Nanopore Platform Based on a Highly Insulating Substrate. Scientific Reports 4, 7448 (2014). Bentin, T., Larsen, H. J. & Nielsen, P. E. Combined triplex/duplex invasion of double-stranded DNA by "tail-clamp" peptide nucleic acid. Biochemistry 42, 13987–13995 (2003). Chakraborty, K. et al. Design of immunogens that present the crown of the HIV-1 V3 loop in a conformation competent to generate 447-52D-like antibodies. Biochemical Journal 399, 483–491 (2006). Wyatt, R. et al. The antigenic structure of the HIV gp120 envelope glycoprotein. Nature 393, 705–711 (1998). Morin, T. J. et al. Nanopore-based target sequence detection. PLoS ONE 11, e0154426 (2016). Michalaki, V., Syrigos, K., Charles, P. & Waxman, J. Serum levels of IL-6 and TNF-α correlate with clinicopathological features and patient survival in patients with prostate cancer. British Journal of Cancer 1–5 (2004). U.S. Food and Drug Administration. Guidance for industry: bioanalytical method validation (https://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/ucm070107.pdf, 2001). Mortimer, P. P. & Parry, J. V. Detection of antibody to HIV in saliva: a brief review. Clinical and Diagnostic Virology 2, 231–243 (1994). Hodinka, R. L., Nagashunmugam, T. & Malamud, D. Detection of Human Immunodeficiency Virus Antibodies in Oral Fluids. Clinical and Diagnostic Laboratory Immunology 5, 419–426 (1998). Boppart, S. A. & Richards-Kortum, R.
Point-of-care and point-of-procedure optical imaging technologies for primary care and global health. Science Translational Medicine 6, 253rv2 (2014). Griffith, M. C., Risen, L. M., Greig, M. J. & Lesnik, E. A. Single and bis peptide nucleic acids as triplexing agents: binding and stoichiometry. Journal of the American Chemical Society 117, 831–832 (1995).

This work, done in Santa Cruz, was performed and financially supported by Two Pore Guys, Inc. (2PG) and by NIH project 1R43EB022518. For work on reagent preparation and proofing: Hanna Weisenberger, Kristen Brosamer, Cindy Huynh, Avery Cromwell, Alex Hirschi. For nanopore experimentation: Xu Liu, Christopher O'Donnell, Tsumugi Gebo, Melissa Cruz, Travis Leffert, Alex Hirschi. For contribution to nanopore chip fabrication: Armin Darvish. For contribution to the handheld performance assessment: Mikael Bengtsson. For contribution to the test strip performance assessment: Christopher O'Donnell, Travis Leffert. We thank Samantha Forde (NES) for editing the manuscript.

Two Pore Guys Inc., Santa Cruz, CA, USA: Trevor J. Morin, William L. McKenna, Tyler D. Shropshire, Dustin A. Wride, Joshua D. Deschamps, Xu Liu, Reto Stamm, Hongyun Wang & William B. Dunbar. Baskin School of Engineering, University of California, Santa Cruz, CA, USA: Hongyun Wang.

T.J.M. is the primary inventor of the method; T.J.M., W.L.McK., T.D.S., D.A.W. and J.D.D. designed the reagents; T.J.M., W.L.McK., T.D.S., D.A.W., J.D.D. and W.B.D. designed the experiments; X.L. designed and fabricated nanopore chips; R.S. designed and built the handheld and test strip; H.W. and W.B.D. designed the mathematical methods; W.B.D. analyzed the data and wrote the paper. Correspondence to William B. Dunbar. The authors declare competing financial interests: all co-authors are employees of Two Pore Guys, Inc.
(2PG), with the exception of H.W., who is a consultant for 2PG. The authors have no non-financial interests to declare. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Morin, T.J., McKenna, W.L., Shropshire, T.D. et al. A handheld platform for target protein detection and quantification using disposable nanopore strips. Sci Rep 8, 14834 (2018). https://doi.org/10.1038/s41598-018-33086-7
\begin{document} \thanks{The author was supported by the Alexander von Humboldt Foundation} \keywords{weighted Fourier multipliers, weighted inequalities, Littlewood-Paley square functions, Muckenhoupt weights} \subjclass{42B25 (42B15)} \begin{abstract} The paper provides a complement to the classical results on Fourier multipliers on $L^p$ spaces. In particular, we prove that if $q\in (1,2)$ and a function $m:\mathbb{R} \rightarrow \mathbb{C}$ is of bounded $q$-variation uniformly on the dyadic intervals in $\mathbb{R}$, i.e. $m\in V_q({\mathcal D})$, then $m$ is a Fourier multiplier on $L^p(\mathbb{R}, wdx)$ for every $p\geq q$ and every weight $w$ satisfying Muckenhoupt's $A_{p/q}$-condition. We also obtain a higher dimensional counterpart of this result as well as of a result by E. Berkson and T.A.~Gillespie including the case of the $V_q({\mathcal D})$ spaces with $q>2$. New weighted estimates for modified Littlewood-Paley functions are also provided.\\[-8ex] \end{abstract} \renewcommand{\subjclassname}{\textup{2010} Mathematics Subject Classification} \maketitle \section{Introduction and Statement of Results} For an interval $[a,b]$ in $\mathbb{R}$ and a number $q\in [1,\infty)$ denote by $V_q([a,b])$ the space of all functions $m:[a,b] \rightarrow \mathbb{C}$ of bounded $q$-variation over $[a,b]$, i.e., $$\|m\|_{V_q([a,b])}:= \sup_{x\in [a,b]}|m(x)| + \|m\|_{\textrm{Var}_q([a,b])} <\infty, $$ where $\|m\|_{\textrm{Var}_q([a,b])}:=\sup\{ (\sum_{i=0}^{n-1} |m(t_{i+1}) - m(t_i)|^q )^{1/q}\}$ and the supremum is taken over all finite sequences $a=:t_0< t_1<...< t_n := b$ $(n\in \mathbb{N})$. 
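Note that, since the $\ell^q$-norms are non-increasing in $q$, for any $1\leq q\leq r$ and any partition $a=:t_0<t_1<...<t_n:=b$ one has $$\Big(\sum_{i=0}^{n-1} |m(t_{i+1}) - m(t_i)|^{r}\Big)^{1/r} \leq \Big(\sum_{i=0}^{n-1} |m(t_{i+1}) - m(t_i)|^{q}\Big)^{1/q},$$ hence $\|m\|_{V_r([a,b])}\leq \|m\|_{V_q([a,b])}$ and $V_q([a,b])\subset V_r([a,b])$. This elementary monotonicity, recorded here as a standard aside, is used implicitly below when multiplier theorems for different values of $q$ are compared.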
We write ${\mathcal D}$ for the dyadic decomposition of $\mathbb{R}$, i.e., ${\mathcal D}:=\left\{ \pm(2^k,2^{k+1}]: k\in \mathbb{Z} \right\}$, and set $$V_q({\mathcal D}):=\left\{m:\mathbb{R}\rightarrow \mathbb{C} :\quad \sup_{I\in {\mathcal D}} \|m_{|I}\|_{V_q(I)} <\infty \right\} \quad \quad (q\in [1,\infty)).$$ Moreover, let $A_p(\mathbb{R})$ $(p\in [1,\infty))$ be the class of weights on $\mathbb{R}$ which satisfy the Muckenhoupt $A_p$ condition. Denote by $[w]_{A_p}$ the $A_p$-constant of $w\in A_p(\mathbb{R})$. If $w \in A_\infty(\mathbb{R}):=\cup_{p\geq 1} A_p(\mathbb{R})$ we write $M_p(\mathbb{R},w)$ for the class of all multipliers on $L^p(\mathbb{R}, w)$ $(p> 1)$, i.e., $$M_p(\mathbb{R}, w) := \left\{ m\in L^\infty(\mathbb{R}): T_m \textrm{ extends to a bounded operator on } L^p(\mathbb{R}, w) \right\}.$$ Here $T_m$ stands for the Fourier multiplier with the symbol $m$, i.e., $(T_mf) \widehat{} = m \widehat{f}$ $(f\in S(\mathbb{R})).$ Note that $M_p(\mathbb{R},w)$ becomes a Banach space under the norm $\|m\|_{M_p(\mathbb{R},w)} := \|T_m\|_{{\mathcal L}(L^p(\mathbb{R}, w))}$ $(m\in M_p(\mathbb{R}, w) )$. The main result of the paper is the following complement to results due to D.~Kurtz \cite{Kurtz80}, R. Coifman, J.-L. Rubio de Francia, S. Semmes \cite{CRdeFS88}, and E. Berkson, T. Gillespie \cite{BeGi98}. \begin{thA}\label{main}\emph{ $(i)$ Let $q\in (1,2]$. Then, $V_q({\mathcal D})\subset M_p(\mathbb{R},w)$ for every $p\geq q$ and every Muckenhoupt weight $w\in A_{p/q}(\mathbb{R})$. \\ $(ii)$ Let $q>2$. 
Then, $V_q({\mathcal D})\subset M_p(\mathbb{R}, w)$ for every $2\leq p <(\frac{1}{2} - \frac{1}{q})^{-1}$ and every Muckenhoupt weight $w\in A_{p/2}$ with $s_w> (1- p(\frac{1}{2} - \frac{1}{q}))^{-1}$.} \end{thA} Here, for every $w\in A_\infty(\mathbb{R})$, we set $s_w:=\sup\{ s\geq 1: w \in RH_s(\mathbb{R}) \}$ and we write $w\in RH_s(\mathbb{R})$ if $$\sup_{a<b} \left( \frac{1}{b-a}\int_a^b w(x)^s dx \right)^{1/s} \left(\frac{1}{b-a} \int_a^b w(x) dx\right)^{-1} <\infty.$$ Recall that, by the reverse H\"older inequality, $s_w \in (1, \infty]$ for every Muckenhoupt weight $w\in A_\infty(\mathbb{R})$. For the convenience of the reader we repeat the relevant material from the literature, which we also use in the sequel. Recall first that in \cite{Kurtz80} D. Kurtz proved the following weighted variant of the classical Marcinkiewicz multiplier theorem. \begin{theorem}[{\cite[Theorem 2]{Kurtz80}}]\label{Kurtz} $V_1({\mathcal D}) \subset M_p(\mathbb{R},w)$ for every $p\in (1,\infty)$ and every Muckenhoupt weight $w\in A_p(\mathbb{R})$. \end{theorem} As in the unweighted case, Theorem \ref{Kurtz} is equivalent to a weighted variant of the Littlewood-Paley decomposition theorem, which asserts that for the square function $S^{\mathcal D}$ corresponding to the dyadic decomposition ${\mathcal D}$ of $\mathbb{R}$, $\|S^{\mathcal D} f\|_{p,w}\eqsim \|f\|_{p,w}$ $(f\in L^p(\mathbb{R},w))$ for every $p\in (1,\infty)$ and $w\in A_p(\mathbb{R})$; see \cite[Theorem 1]{Kurtz80}, and also \cite[Theorem 3.3]{Kurtz80}. Here and subsequently, if ${\mathcal I}$ is a family of disjoint intervals in $\mathbb{R}$, we write $S^{\mathcal I}$ for the Littlewood-Paley square function corresponding to ${\mathcal I}$, i.e., $S^{\mathcal I} f : = \left(\sum_{I\in {\mathcal I}} |S_If|^2 \right)^{1/2}$ $(f\in L^2(\mathbb{R})).$ Recall also that in \cite{RdeF85} J.-L. Rubio de Francia proved the following extension of the classical Littlewood-Paley decomposition theorem. 
\begin{theorem}[{\cite[Theorem 6.1]{RdeF85}}]\label{RdeFtheorem} Let $2< p <\infty$ and $w\in A_{p/2}(\mathbb{R})$. Then for an arbitrary family ${\mathcal I}$ of disjoint intervals in $\mathbb{R}$ the square function $S^{\mathcal I}$ is bounded on $L^p(\mathbb{R}, wdx)$. \end{theorem} Applying Rubio de Francia's inequalities, i.e. Theorem \ref{RdeFtheorem}, R. Coifman, J.-L. Rubio de Francia, and S. Semmes \cite{CRdeFS88} proved the following extension and improvement of the classical Marcinkiewicz multiplier theorem. (See Section 2 for the definition of $R_2({\mathcal D})$.) \begin{theorem}[{\cite[Th\'eor\`eme 1 and Lemme 5]{CRdeFS88}}] \label{CRdeFS} Let $2\leq q<\infty$. Then, $V_q({\mathcal D})\subset M_p(\mathbb{R})$ for every $p\in (1,\infty)$ such that $|\frac{1}{p} - \frac{1}{2}|<\frac{1}{q}$. Furthermore, $R_2({\mathcal D}) \subset M_2(\mathbb{R}, w)$ for every $w\in A_1(\mathbb{R})$. \end{theorem} Subsequently, a weighted variant of Theorem \ref{CRdeFS} was given by E. Berkson and T. Gillespie in \cite{BeGi98}. According to our notation their result can be formulated as follows. \begin{theorem}[{\cite[Theorem 1.2]{BeGi98}}] \label{BeGi} Suppose that $2\leq p<\infty$ and $w\in A_{p/2}(\mathbb{R})$. Then, there is a real number $s>2$, depending only on $p$ and $[w]_{A_{p/2}}$, such that $\frac{1}{s}>\frac{1}{2} - \frac{1}{p}$ and $V_q({\mathcal D}) \subset M_p(\mathbb{R}, w)$ for all $1\leq q <s$. \end{theorem} Note that the part $(i)$ of Theorem A fills a gap which occurs in Theorem \ref{Kurtz} and the weighted part of Theorem \ref{CRdeFS}. The part $(ii)$ identifies the constant $s$ in Berkson-Gillespie's result, i.e., Theorem \ref{BeGi}, as $(\frac{1}{2} - \frac{1}{s'_w p})^{-1}$, where $s'_w:=\frac{s_w}{s_w-1}$, and in general, this constant is best possible. 
Except for some details, the proofs given below reproduce well-known arguments from the Littlewood-Paley theory; in particular, ideas which have been presented in \cite{Kurtz80}, \cite{CRdeFS88}, \cite{RdeF85}, and \cite{Xu96}. A new point of our approach is the following result on weighted estimates for modified Littlewood-Paley functions $S^{\mathcal I}_q(\cdot) :=(\sum_{I\in{\mathcal I}} |S_I (\cdot)|^{q'})^{1/q'}$ $(q\in (1,2])$, which may be of independent interest. \begin{thB}\label{LPineq}{\it{ $(i)$ Let $q\in (1,2)$, $p>q$, and $w\in A_{p/q}(\mathbb{R})$. Then, there exists a constant $C>0$ such that for any family ${\mathcal I}$ of disjoint intervals in $\mathbb{R}$ $$\|S^{\mathcal I}_q f\|_{p,w} \leq C\|f\|_{p,w} \quad\quad (f\in L^p(\mathbb{R}, wdx)).$$ Moreover, for every $q\in (1,2)$, $p>q$ and ${\mathcal V} \subset A_{p/q}(\mathbb{R})$ with $\sup_{w\in {\mathcal V}}[w]_{A_{p/q}}<\infty$ $$\sup\left\{ \|S^{\mathcal I}_qf\|_{p,w}: w\in {\mathcal V}, \; {\mathcal I} \textrm{ a family of disjoint intervals in }\mathbb{R}, \; \|f\|_{p,w}=1 \right\} <\infty.$$ $(ii)$ For any family ${\mathcal I}$ of disjoint intervals in $\mathbb{R}$ and every Muckenhoupt weight $w \in A_{1}(\mathbb{R})$, the operator $S^{\mathcal I}_2$ maps $L^2(\mathbb{R},wdx)$ into weak-$L^2(\mathbb{R},wdx)$, and $$\sup\left\{ \|S^{\mathcal I}_2f\|_{L_w^{2,\infty}}: w\in {\mathcal V}, \; {\mathcal I} \textrm{ a family of disjoint intervals in }\mathbb{R}, \|f\|_{L_w^{2}}=1\right\}<\infty$$ for every ${\mathcal V} \subset A_{1}(\mathbb{R})$ with $\sup_{w\in {\mathcal V}}[w]_{A_1}<\infty$. Moreover, if $q\in (1,2)$, then for any well-distributed family ${\mathcal I}$ of disjoint intervals in $\mathbb{R}$ and every Muckenhoupt weight $w\in A_1(\mathbb{R})$, the operator $S_q^{\mathcal I}$ maps $L^q(\mathbb{R},wdx)$ into weak-$L^q(\mathbb{R},wdx)$. 
}} \end{thB} Recall that a family ${\mathcal I}$ of disjoint intervals in $\mathbb{R}$ is {\it well-distributed} if there exists $\lambda>1$ such that $\sup_{x\in \mathbb{R}}\sum_{I\in{\mathcal I}} \chi_{\lambda I}(x) < \infty$, where $\lambda I$ denotes the interval with the same center as $I$ and length $\lambda$ times that of $I$. Note that the validity of the $A_1$-weighted $L^2$-estimates for the square function $S^{\mathcal I}=S^{\mathcal I}_2$ corresponding to an arbitrary family ${\mathcal I}$ of disjoint intervals in $\mathbb{R}$, i.e., $$\|S^{\mathcal I}_2 f\|_{2,w} \leq C_{w}\|f\|_{2,w} \quad\quad (f\in L^2(\mathbb{R}, wdx), \; w\in A_1(\mathbb{R})),$$ is conjectured by J.-L. Rubio de Francia in \cite[Section 6, p.10]{RdeF85}; see also \cite[Section 8.2, p. 187]{Duo01}. Theorem B$(ii)$, in particular, establishes the weak variant of Rubio de Francia's conjecture. Notice that in contrast to the square function operators $S_2^{\mathcal I}$, in general, the operators $S^{\mathcal I}_q$ $(q\in[1,2))$ are not bounded on (unweighted) $L^{q}(\mathbb{R})$; see \cite{CoTa01}. Moreover, in \cite{Quek03} T.S. Quek proved that if ${\mathcal I}$ is a well-distributed family of disjoint intervals in $\mathbb{R}$, then the operator $S^{\mathcal I}_q$ maps $L^q(\mathbb{R})$ into $L^{q,q'}(\mathbb{R})$ for every $q\in (1,2)$. Note that this result is in a sense sharp, i.e., $L^{q, q'}(\mathbb{R})$ cannot be replaced by $L^{q,s}(\mathbb{R})$ for any $s<q'$; see \cite[Remark 3.2]{Quek03}. Therefore, Theorem B also provides a weighted variant of this line of research. Cf. also relevant results given by S.V. Kisliakov in \cite{Kis08}. Furthermore, as a consequence of our approach we also get a higher dimensional analogue of Theorem A, see Theorem C in Section 4, which extends earlier results by Q. Xu \cite{Xu96}; see also M. Lacey \cite[Chapter 4]{Lac07}. 
Since the formulation of Theorem~C is more involved and its proof is essentially an iteration of one-dimensional arguments, we refer the reader to Section 4 for more information. Part $(ii)$ of Theorem A is a quantitative improvement of \cite[Theorem 1.2]{BeGi98} due to E. Berkson and T. Gillespie. Furthermore, we present an alternative approach based on a version of the Rubio de Francia extrapolation theorem that holds for limited ranges of $p$, which was recently given in \cite{AuMa07}. The organisation of the paper is reflected in the titles of the following sections. We conclude this section with an additional comment. The proof of Theorem~A is based on the weighted estimates from part $(i)$ of Theorem B. To keep the pattern of the proof of the main result of the paper, Theorem A, more transparent, we postpone the proof of Theorem B$(ii)$ to Section 3. \section{Proofs of Theorems B$(i)$ and A} We first introduce auxiliary spaces which are useful in the proof of Theorem~A. Let $q\in[1,\infty)$. If $I$ is an interval in $\mathbb{R}$ we denote by ${\mathcal E}(I)$ the family of all step functions from $I$ into $\mathbb{C}$. If $m:=\sum_{J\in {\mathcal I}} a_J \chi_J$, where ${\mathcal I}$ is a decomposition of $I$ into subintervals and $(a_J)\subset \mathbb{C}$, write $[m]_{q}:=(\sum_{J\in {\mathcal I}} |a_J|^q )^{1/q}$. Set ${\mathcal R}_q(I):=\left\{ m \in {\mathcal E}(I) : [m]_q\leq 1 \right\}$ and $${\mathcal R}_q({\mathcal D}):= \left\{ m:\mathbb{R} \rightarrow \mathbb{C}:\,\, m_{|I} \in {\mathcal R}_q(I)\,\,\textrm{for every } I\in {\mathcal D} \right\}.$$ Moreover, let $$R_q(I):=\left\{ \sum_{j} \lambda_j m_j : \, m_j \in {\mathcal R}_q(I), \,\, \sum_j |\lambda_j|<\infty \right\}$$ and $$\|m\|_{R_q(I)} :=\inf\left\{\sum_j |\lambda_j| :\, m=\sum_j \lambda_j m_j, \,\, m_j\in {\mathcal R}_q(I) \right\}\quad \left(m\in R_q(I) \right).$$ Note that $\left(R_q(I), \|\cdot\|_{R_q(I)}\right)$ is a Banach space.
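As a quick illustration of the classes just introduced (the example is ours, added for the reader's convenience), note that the quantity $[m]_q$ is attached to a representation of $m$ rather than to the function itself.

```latex
% Example: dependence of [m]_q on the chosen decomposition.
Split $I$ into $N$ subintervals $J_1,\dots,J_N$ and set
$m:=N^{-1/q}\sum_{k=1}^{N}\chi_{J_k}=N^{-1/q}\chi_I$. With respect to this
decomposition $[m]_q=(N\cdot N^{-1})^{1/q}=1$, so $m\in {\mathcal R}_q(I)$,
while the trivial decomposition $\{I\}$ gives $[m]_q=N^{-1/q}$. The norm
$\|\cdot\|_{R_q(I)}$ removes this ambiguity by taking an infimum over all
representations; in particular, $\|\chi_I\|_{R_q(I)}\leq 1$.
```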
Set $$R_q({\mathcal D}):=\left\{m:\mathbb{R} \rightarrow \mathbb{C}:\, \sup_{I\in {\mathcal D}} \|m_{|I}\|_{R_q(I)} <\infty \right\} \quad \quad (q\in [1,\infty)).$$ In the sequel, if ${\mathcal I}$ is a family of disjoint intervals in $\mathbb{R}$, we write $S^{\mathcal I}_1 f:=\sup_{I\in {\mathcal I}}|S_I f|$ ($f\in L^1(\mathbb{R})$) and $S^{\mathcal I}_r f :=(\sum_{I\in{\mathcal I}} |S_I (f)|^{r'})^{1/r'}$ $(r\in (1,2], f\in L^r(\mathbb{R}))$. We next collect the main ingredients of the proof of Theorem B$(i)$, which provides the crucial vector-valued estimates for weighted multipliers in the proof of Theorem A; see e.g. \eqref{vectorestim}. Lemma \ref{CarHunt} is a special version of the result on weighted inequalities for Carleson's operator given by J.-L. Rubio de Francia, F. J. Ruiz and J. L. Torrea in \cite{RuRuTo86}; see also \cite[Remarks 2.2, Part III]{RuRuTo86}. \begin{lemma}[{\cite[Theorem 2.1, Part III]{RuRuTo86}}]\label{CarHunt} Let $s\in (1,\infty)$ and $w\in A_s(\mathbb{R})$. Then, there exists a constant $C>0$ such that for any family ${\mathcal I}$ of disjoint intervals in $\mathbb{R}$ $$\|S^{\mathcal I}_1 f\|_{s,w} \leq C\|f\|_{s,w} \quad (f\in L^s(\mathbb{R}, wdx)).$$ Moreover, for every $s>1$ and every set ${\mathcal V} \subset A_s(\mathbb{R})$ with $\sup_{w\in {\mathcal V}}[w]_{A_s}<\infty$ $$\sup\left\{ \|S^{\mathcal I}_1\|_{s,w}: w\in{\mathcal V},\; {\mathcal I} \textrm{ a family of disjoint intervals in }\mathbb{R} \right\} <\infty.$$ \end{lemma} \begin{remark}\label{upperbound} The second statement of Lemma \ref{CarHunt} can be obtained from a detailed analysis of the constants involved in the results which are used in the proof of \cite[Theorem 2.1$(a)\Rightarrow (b)$, Part III]{RuRuTo86}, i.e., the weighted version of the Fefferman-Stein inequality and the reverse H\"older inequality.
Recall the weighted version of the Fefferman-Stein inequality, which in particular says that for every $p\in (1,\infty)$ and every Muckenhoupt weight $w\in A_p(\mathbb{R})$ there exists a constant $C_{p,w} >0$, which depends only on $p$ and $[w]_{A_p}$, such that \begin{equation}\label{FS} \int_{\mathbb{R}} M f(t)^p \, w(t) \; dt\leq C_{p,w} \int_{\mathbb{R}} M^\# f(t)^p \, w(t) \; dt \quad (f\in L^p(\mathbb{R})\cap L^p(\mathbb{R},w) ), \end{equation} where $M$ and $M^\sharp$ denote the Hardy-Littlewood maximal operator and the Fefferman-Stein sharp maximal operator, respectively; see \cite[Theorem, p.41]{Jo83}, or \cite[Theorem 2.20, Chapter IV]{GCRu85}. We emphasize here that the constant $C_{p,w}$ on the right-hand side of this inequality is not given explicitly in the literature; however, a detailed analysis of the constants involved in the results which are used in the proof of \eqref{FS} shows that $\sup_{w\in{\mathcal V}} C_{p,w} <\infty$ for every subset ${\mathcal V}\subset A_p(\mathbb{R})$ with $\sup_{w\in {\mathcal V}}[w]_{A_p}<\infty$. Furthermore, it should be noted that if ${\mathcal V} \subset A_p(\mathbb{R})$ with $\sup_{w\in {\mathcal V}}[w]_{A_p}<\infty$, then there exists $\epsilon>0$ such that ${\mathcal V} \subset A_{p-\epsilon}(\mathbb{R})$ and $\sup_{w\in {\mathcal V}}[w]_{A_{p-\epsilon}} <\infty$. This can be obtained directly from a detailed analysis of the constants involved in the main ingredients of the proof of the reverse H\"older inequality. Cf., e.g., \cite[Lemma 2.3]{LeOm12}. We refer the reader to \cite[Chapter IV]{GCRu85} and \cite[Chapter 7]{Duo01} for recent expositions of the results involved in the proof of the reverse H\"older inequality and the Fefferman-Stein inequality, which originally come from \cite{CoFe74}, and \cite{Mu72}, \cite{Mu74}. \end{remark} The next lemma is a special variant of Rubio de Francia's extrapolation theorem; see \cite[Theorem 3]{RdeF84}.
For the convenience of the reader we rephrase \cite[Theorem 3]{RdeF84} here merely in the context of Muckenhoupt weights. \begin{lemma}[{\cite[Theorem 3]{RdeF84}}]\label{extrapol} Let $\lambda$ and $r$ be fixed with $1\leq \lambda \leq r <\infty$, and let $\mathcal{S}$ be a family of sublinear operators which is uniformly bounded in $L^r(\mathbb{R},wdx)$ for each $w\in A_{r/\lambda}(\mathbb{R})$, i.e., $$\int|Sf|^r wdx \leq C_{r,w} \int |f|^rwdx \quad (S\in \mathcal{S}, \, w\in A_{r/\lambda}(\mathbb{R})).$$ If $\lambda <p, \alpha < \infty$ and $w \in A_{p/\lambda}(\mathbb{R})$, then $\mathcal{S}$ is uniformly bounded in $L^p(\mathbb{R}, wdx)$ and even more: \begin{equation*}\label{eq1extr} \int(\sum_{j}|S_jf_j|^\alpha)^{p/\alpha} wdx \leq C_{p,\alpha,w} \int (\sum_{j} |f_j|^\alpha)^{p/\alpha} wdx\quad \left( f_j\in L^p(\mathbb{R}, wdx),\,\, S_j\in \mathcal{S} \right). \end{equation*} \end{lemma} Combining Lemma \ref{CarHunt} with Theorem \ref{RdeFtheorem} we get the intermediate weighted estimates for the operators $S_q^{\mathcal I}$ $(q\in(1,2))$ stated in Theorem B$(i)$. For background on interpolation theory we refer the reader to \cite{BeLo77}; in particular, see \cite[Chapter 4 and Section 5.5]{BeLo77}. \begin{proof}[{\bf{Proof of Theorem B$(i)$}}] Fix $q\in(1,2)$ and $w\in A_{2/q}(\mathbb{R})$. By the reverse H\"older inequality, $w\in A_{2/r}(\mathbb{R})$ for some $r\in(q,2)$. Note that there exist $p\in(2,q')$ and $s>1$ such that $\frac{p}{q'} \frac{1}{p} + (1 - \frac{p}{q'})\frac{1}{s} = \frac{1}{r}$. Therefore, combining Theorem \ref{RdeFtheorem} with Lemma \ref{CarHunt}, by complex interpolation, the operator $S^{{\mathcal I}}_{({2q'}/{p})'}$ is bounded on $L^r(\mathbb{R}, v)$ for every $v\in A_1(\mathbb{R})$. Since $p>2$, the same conclusion holds for $S^{\mathcal I}_{q}$. By Rubio de Francia's extrapolation theorem, Lemma \ref{extrapol}, we get that $S^{\mathcal I}_{q}$ is bounded on $L^2(\mathbb{R}, v)$ for every $v\in A_{2/r}(\mathbb{R})$.
According to our choice of $r$, we get the boundedness of $S^{\mathcal I}_{q}$ on $L^2(\mathbb{R}, w)$. Since the weight $w$ was taken arbitrarily, we can again apply Rubio de Francia's extrapolation theorem, Lemma \ref{extrapol}, to complete the proof of the first statement. The second statement follows from a detailed analysis of the proof of the first one. For a discussion of the character of the dependence of constants in Rubio de Francia's iteration algorithm, we refer the reader to \cite{DGPP05}, or \cite[Section 3.4]{CrMaPe11}. See also the comment on the reverse H\"older inequality in Remark \ref{upperbound}. \end{proof} Note that $R_q(I) \varsubsetneq V_q(I)$ for every interval $I$ in $\mathbb{R}$ and $q\in [1,\infty)$. However, the following reverse inclusions hold for these classes. \begin{lemma}[{\cite[Lemme 2]{CRdeFS88}}]\label{C-RdeF-S} Let $1\leq q< p<\infty$. For every interval $I$ in $\mathbb{R}$, $V_q(I) \subset R_p(I)$ with the inclusion norm bounded by a constant independent of $I$. \end{lemma} The patterns of the proofs of parts $(i)$ and $(ii)$ of Theorem A are essentially the same; therefore, we only sketch the proof of part $(ii)$ below. \begin{proof}[{\bf{Proof of Theorem A}}] $(i)$ We only give the proof for the more involved case $q\in (1,2)$; the case $q=2$ follows simply from Theorem \ref{CRdeFS} and the interpolation arguments presented below; see also Remark \ref{rems3} below. Fix $q\in (1,2)$. We first show that for every subset ${\mathcal V}\subset A_{{2}/{q}}(\mathbb{R})$ such that $\sup_{w\in {\mathcal V}} [w]_{A_{{2}/{q}}} <\infty$ we have $$\sup\left\{ \|T_{m\chi_I}\|_{2,w}: m\in R_q({\mathcal D}), \|m\|_{R_q({\mathcal D})}\leq 1, w\in {\mathcal V}, I\in {\mathcal D} \right\} <\infty.$$ Fix ${\mathcal V}\subset A_{{2}/{q}}(\mathbb{R})$ with $\sup_{w\in {\mathcal V}} [w]_{A_{{2}/{q}}}<\infty$.
Note that, by the definition of the $R_q$-classes, it is sufficient to prove the claim with $R_q({\mathcal D})$ replaced by ${\mathcal R}_q({\mathcal D})$. Fix $m\in {\mathcal R}_q({\mathcal D})$ and set $ m\chi_I =:\sum_{J\in {\mathcal I}_{I}} a_{I,J} \chi_J$ for every $I\in {\mathcal D}$, where ${\mathcal I}_I={\mathcal I}_{I,m}$ is a decomposition of $I$ and $(a_{I,J})_{J\in {\mathcal I}_{I}}\subset \mathbb{C}$ is a sequence with $\sum_{J\in {\mathcal I}_{I}}|a_{I,J}|^q\leq 1$. Note that $T_{m\chi_I}f = \sum_{J} a_{I,J} S_Jf$ and, by H\"older's inequality, $\|T_{m\chi_I}f\|_{2,w} \leq \|S_q^{{\mathcal I}_I}f\|_{2,w}$ for every $I\in {\mathcal D}$, $w\in {\mathcal V}$ and $f\in L^{2}(\mathbb{R},w)$. Therefore, by Theorem B$(i)$, our claim holds. By an interpolation argument, we next sharpen this claim and prove that for every subset ${\mathcal V}\subset A_{2/q}(\mathbb{R})$ with $\sup_{w\in{\mathcal V}}[w]_{A_{2/q}}<\infty$ there exists $\alpha=\alpha(q, {\mathcal V})>1$ such that \begin{equation}\label{step1} \sup\left\{ \|T_{m\chi_I}\|_{2,w}: m\in R_{\alpha q}({\mathcal D}), \|m\|_{R_{\alpha q}({\mathcal D})}\leq 1, w\in {\mathcal V}, I\in {\mathcal D} \right\} <\infty. \end{equation} Note that, by the reverse H\"older inequality, see also Remark \ref{upperbound}, there exists $\alpha>1$ such that $w^\alpha \in A_{2/q}(\mathbb{R})$ $(w\in {\mathcal V})$ and $\sup_{w\in{\mathcal V}}[w^\alpha]_{A_{2/q}}<\infty$. From what has already been proved and Plancherel's theorem, for every $I\in {\mathcal D}$ and $w\in {\mathcal V}$ the bilinear operators $$ R_q(I)\times L^{2}(\mathbb{R}, w^\alpha dx) \ni (m, f) \mapsto T_mf\in L^{2}(\mathbb{R}, w^\alpha dx) $$ $$ L^\infty(\mathbb{R})\times L^{2}(\mathbb{R})\ni (m, f) \mapsto T_mf\in L^{2}(\mathbb{R}) $$ are well-defined and bounded uniformly with respect to $w\in {\mathcal V}$ and $I\in {\mathcal D}$. Therefore, by complex interpolation, $\left(R_q(I), L^\infty(\mathbb{R})\right)_{[\frac{1}{\alpha}]} \subset M_{2}(\mathbb{R}, w)$.
Furthermore, it is easy to check that $R_{\alpha q}(I)\subset \left(R_q(I), L^\infty(\mathbb{R})\right)_{[\frac{1}{\alpha}]}$ with the inclusion norm bounded by a constant independent of $I\in {\mathcal D}$. We thus get \eqref{step1}. In consequence, by Lemma \ref{C-RdeF-S}, it follows that \begin{equation}\label{vectorestim} \sup\left\{ \|T_{m\chi_I}\|_{2,w}: m\in V_{q}({\mathcal D}),\; \|m\|_{V_q({\mathcal D})}\leq 1,\; w\in {\mathcal V}, I\in {\mathcal D} \right\} <\infty \end{equation} for every subset ${\mathcal V}\subset A_{2/q}(\mathbb{R})$ with $\sup_{w\in{\mathcal V}}[w]_{A_{2/q}}<\infty$. Hence, we can apply a truncation argument based on Kurtz' weighted variant of the Littlewood-Paley inequality. Namely, fix $w\in A_{2/q}(\mathbb{R})$, $m\in V_q({\mathcal D})$ with $\|m\|_{V_q({\mathcal D})}\leq 1$, and $f\in L^2(\mathbb{R})\cap L^2(\mathbb{R},w)$, $g\in L^2(\mathbb{R},w)\cap L^2(\mathbb{R}, w^{-1})$. Note that $gw\in L^2(\mathbb{R})$ and $A_{2/q}(\mathbb{R})\subset A_{2}(\mathbb{R})$. Therefore, combining the Cauchy-Schwarz inequality and Kurtz' result, \cite[Theorem 1]{Kurtz80}, we get: \begin{eqnarray*} |(T_m f,g)_{L^2(\mathbb{R},w)}| & = & \left|\sum_{I\in{\mathcal D}} \int_{\mathbb{R}} S_I(T_mf) S_I(gw) dx\right| \\ & \leq & C \left\| (\sum_{I\in{\mathcal D}} |T_{m\chi_I}S_If|^2)^{1/2} \right\|_{2,w} \left\| (\sum_{I\in{\mathcal D}} |S_I (gw)|^2)^{1/2} \right\|_{2,w^{-1}}\\ & \leq & C \|f\|_{2,w} \|g\|_{2,w}, \end{eqnarray*} where $C$ is a constant independent of $m$, $f$ and $g$. Now the converse of H\"older's inequality and a density argument show that $m\in M_2(\mathbb{R}, w)$. Consequently, $V_q({\mathcal D})\subset M_2(\mathbb{R}, w)$, and Rubio de Francia's extrapolation theorem, Lemma \ref{extrapol}, yields $V_q({\mathcal D}) \subset M_p(\mathbb{R}, w)$ for every $p>q$ and every Muckenhoupt weight $w\in A_{p/q}(\mathbb{R})$. It remains to prove that $V_q({\mathcal D}) \subset M_q(\mathbb{R}, w)$ for every $w\in A_1(\mathbb{R})$.
Fix $m\in V_q({\mathcal D})$ and $w\in A_1(\mathbb{R})$. Then, by Theorem \ref{CRdeFS} (see also Remark \ref{rems3}), $T_m$ is bounded on $L^r(\mathbb{R})$ for every $r\in (1,\infty)$. From what has already been proved, $T_m$ is bounded on $L^r(\mathbb{R}, w)$ for every $r>q$. Therefore, the boundedness of $T_m$ on $L^q(\mathbb{R}, w)$ follows by the reverse H\"older inequality for $w$ and a similar interpolation argument as before. This completes the proof of part $(i)$.\\ $(ii)$ Fix $q>2$ and $s>\frac{q}{2}$. Let ${\mathcal V}_{s}:=\{w\in A_1(\mathbb{R}): w\in RH_s(\mathbb{R})\}$. Note that there exists $r=r_s>q$ such that $\frac{1}{s}\frac{1}{2}+\frac{1}{s'} \frac{1}{r} <\frac{1}{q}.$ Fix $w \in {\mathcal V}_{s}$. By Theorem \ref{CRdeFS}, the bilinear operators $$R_r({\mathcal D}) \times L^2(\mathbb{R})\ni (m,f) \mapsto T_m f \in L^2(\mathbb{R})$$ $$R_2({\mathcal D}) \times L^2(\mathbb{R}, w^{s}) \ni (m,f) \mapsto T_mf \in L^2(\mathbb{R}, w^{s})$$ are well-defined and bounded. By interpolation, it follows that $$M_2(\mathbb{R}, w) \supset \left(R_2({\mathcal D}), R_r({\mathcal D})\right)_{[\frac{1}{s}]} \supset R_{\alpha q}(I) $$ uniformly with respect to $I\in {\mathcal D}$, where $\alpha= \alpha_s:= (\frac{1}{2s} + \frac{1}{s' r})^{-1}/q>1$. As in the corresponding part of the proof of $(i)$, by truncation and duality arguments, we get $R_{\alpha q}({\mathcal D})\subset M_2(\mathbb{R}, w).$ Consequently, since $\alpha_s>1$ for every $s>\frac{q}{2}$, by Lemma \ref{C-RdeF-S}, \begin{equation} V_q({\mathcal D})\subset M_2(\mathbb{R}, w)\quad \quad \textrm{for every } w \in \bigcup_{s>\frac{q}{2}} {\mathcal V}_{s}. \end{equation} Note that this is precisely the assertion of $(ii)$ for $p=2$. We can now proceed by extrapolation.
Since for every $s>\frac{q}{2}$ we can rephrase ${\mathcal V}_{s}$ as $A_{\frac{2}{2}}(\mathbb{R})\cap RH_{(\frac{2s'}{2})'}(\mathbb{R})$, by \cite[Theorem 3.31]{CrMaPe11}, we get \begin{equation}\label{pextra} V_q({\mathcal D})\subset M_p(\mathbb{R}, w)\quad \textrm{for every } s>\frac{q}{2}, \,\,2<p < 2s', \textrm{ and } w \in A_{\frac{p}{2}}(\mathbb{R})\cap RH_{(\frac{2s'}{p})'}(\mathbb{R})\,. \end{equation} Finally, it is easy to see that for every $2\leq p < (\frac{1}{2} - \frac{1}{q})^{-1}= 2(\frac{q}{2})'$ and $w \in A_{p/2}(\mathbb{R})$ with $s_w >(1-p(\frac{1}{2} - \frac{1}{q}))^{-1} = (\frac{2}{p}(\frac{q}{2})')'$ there exists $s=s_{p,w} >\frac{q}{2}$ such that $p<2s'$ and $w \in RH_{(\frac{2s'}{p})'}$. Therefore, \eqref{pextra} completes the proof of $(ii)$. \end{proof} \begin{remark}\label{rems3} In the proof of Theorem A we use Theorem \ref{CRdeFS} due to Coifman, Rubio de Francia, and Semmes. Note that the patterns of all these proofs are essentially the same. Indeed, we can rephrase the proof of \cite[Th\'eor\`eme 1]{CRdeFS88} as follows. First recall that $M_p(\mathbb{R})=M_{p'}(\mathbb{R})$ for every $p\in (1,\infty)$. Let $r\geq 2$. By the Littlewood-Paley decomposition theorem, Rubio de Francia's inequalities, and Plancherel's theorem, the bilinear operators $$ R_2({\mathcal D}) \times L^r(\mathbb{R})\ni (m, f)\mapsto T_m f\in L^r(\mathbb{R})$$ $$ L^\infty(\mathbb{R})\times L^2(\mathbb{R}) \ni (m,f) \mapsto T_m f \in L^2(\mathbb{R})$$ are well-defined and bounded. Therefore, by interpolation, $\left(R_2({\mathcal D}),L^\infty(\mathbb{R})\right)_{[\theta(r)]} \subset M_p(\mathbb{R})$, where $\theta(r)\in (0, 1)$ and $p$ is such that $\frac{1}{p}=\theta(r) \frac{1}{r} + (1-\theta(r)) \frac{1}{2}$.
Note that if $p\geq 2$ and $q$ satisfies $\frac{1}{q}>\frac{1}{2} - \frac{1}{p}$, then there exists $r>2$ such that $R_{\alpha q}(I) \subset \left(R_2({\mathcal D}),L^\infty(\mathbb{R})\right)_{[\theta(r)]}$ for an appropriate $\alpha>1$ and uniformly with respect to $I\in {\mathcal D}$. Indeed, $\frac{1}{2}\theta(r) \searrow \frac{1}{2} - \frac{1}{p}$ as $r\rightarrow \infty$. Therefore, Lemma \ref{C-RdeF-S} completes the proof of Theorem \ref{CRdeFS}$(i)$. \end{remark} \section{Proof of Theorem B$(ii)$} We obtain the proof of Theorem B$(ii)$ by means of a Banach function space analogue of Kurtz' weighted variant of the Littlewood-Paley inequalities and the Fefferman-Stein inequality; see Lemma \ref{wellextr} below. Note that without loss of generality in the proof of Theorem B$(ii)$ one can consider only families consisting of bounded intervals in $\mathbb{R}$. For a bounded interval $I\in {\mathcal I}$ we write ${\mathcal W}_I$ for Whitney's decomposition of $I$ (see \cite[Section 2]{RdeF85} for the definition). Note also that each decomposition ${\mathcal W}_I$, $I\in {\mathcal I}$, is of dyadic type. Furthermore, the family ${\mathcal W}^{\mathcal I}:=\bigcup_{I\in {\mathcal I}} {\mathcal W}_I$ is well-distributed, i.e., $$\sup_{x\in \mathbb{R}} \sum_{I\in {\mathcal W}^{\mathcal I}} \chi_{2I}(x)\leq 5.$$ We refer the reader primarily to \cite{BeSh88} for the background on function spaces. In the sequel, let $\mathbb{E}$ denote a rearrangement invariant Banach function space over $(\mathbb{R},dx)$. Recall that, by Luxemburg's representation theorem \cite[Theorem 4.10, p.62]{BeSh88}, there exists a rearrangement invariant Banach function space $\overline{\mathbb{E}}$ over ($\mathbb{R}_+, dt$) such that for every scalar, measurable function $f$ on $\mathbb{R}$, $f\in \mathbb{E}$ if and only if $f^*\in \overline\mathbb{E}$, where $f^*$ stands for the decreasing rearrangement of $f$.
In this case $\|f\|_\mathbb{E}=\|f^*\|_{\overline\mathbb{E}}$ for every $f\in \mathbb{E}$. Following \cite{LiTz79}, we define the \it {lower} \rm and \it {upper Boyd indices} \rm respectively by $$ p_\mathbb{E} := \lim_{t\to \infty} \frac{\log t}{\log h_\mathbb{E}(t)} \quad\quad \textrm{and}\quad\quad q_\mathbb{E} := \lim_{t\to 0+} \frac{\log t}{\log h_\mathbb{E}(t)}, $$ where $h_{\mathbb{E}}(t)=\|D_t\|_{{\mathcal L} (\overline\mathbb{E})}$ and $D_t: \overline\mathbb{E}\rightarrow\overline\mathbb{E}$ $(t>0)$ is the \it {dilation operator} \rm defined by $$ D_tf(s)=f(s/t), \qquad 0<t<\infty, \quad f\in \overline\mathbb{E}. $$ One always has $1\le p_\mathbb{E}\le q_\mathbb{E}\le\infty$; see for example \cite[Proposition 5.13, p.149]{BeSh88}, where the Boyd indices are defined as the reciprocals of ours. Let $w$ be a weight in $A_\infty(\mathbb{R})$. Then we can associate with $\mathbb{E}$ and $w$ a rearrangement invariant Banach function space over $(\mathbb{R}, wdx)$ as follows: $$ \mathbb{E}_w =\{ f:\mathbb{R} \rightarrow \mathbb{C} \;\; {\rm{measurable }}: f^*_w \in \overline{\mathbb{E}} \}, $$ and its norm is $\|f\|_{\mathbb{E}_w}=\| f^*_w\|_{\overline\mathbb{E}}$, where $f^*_w$ denotes the decreasing rearrangement of $f$ with respect to $wdx$. For further purposes, recall also that examples of rearrangement invariant Banach function spaces are the Lorentz spaces $L^{p,q}$ ($1\leq p$, $q\leq \infty$). Note that $L^{p,\infty}_w=\textrm{weak-}L^p(\mathbb{R}, w)$ for every $p\in(1,\infty)$ and $w\in A_\infty(\mathbb{R})$. The Boyd indices can be computed explicitly for many concrete rearrangement invariant Banach function spaces; see e.g. \cite[Chapter 4]{BeSh88}. In particular, we have $p_\mathbb{E} = q_\mathbb{E} = p$ for $\mathbb{E}:=L^{p,q}$ ($1< p<\infty$, $1\leq q\leq \infty$); see \cite[Theorem 4.6]{BeSh88}.
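For instance, the Boyd indices of the Lebesgue spaces can be computed directly from the definition; we record this routine calculation for the reader's convenience.

```latex
% Computation of the Boyd indices of L^p.
Let $\mathbb{E}:=L^p$ with $1<p<\infty$. The change of variables $u=s/t$ gives
$\|D_t f\|_{L^p(\mathbb{R}_+)}=t^{1/p}\|f\|_{L^p(\mathbb{R}_+)}$, whence
$h_{L^p}(t)=t^{1/p}$ for all $t>0$. Consequently,
$$p_{L^p}=\lim_{t\to\infty}\frac{\log t}{\log t^{1/p}}=p
\quad\quad\textrm{and}\quad\quad
q_{L^p}=\lim_{t\to 0+}\frac{\log t}{\log t^{1/p}}=p,$$
in accordance with the equality $p_{\mathbb{E}}=q_{\mathbb{E}}=p$ for the
Lorentz spaces $L^{p,q}$ quoted above.
```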
\begin{lemma}\label{wellextr} Let $\mathbb{E}$ be a rearrangement invariant Banach function space on $(\mathbb{R}, dx)$ such that $1< p_{\mathbb{E}}, q_\mathbb{E}<\infty$. Then the following statements hold. \begin{itemize} \item [(i)] For every Muckenhoupt weight $w\in A_{p_\mathbb{E}}(\mathbb{R})$ there exists a constant $C_{\mathbb{E}, w}$ such that for any family ${\mathcal I}$ of disjoint bounded intervals in $\mathbb{R}$ \begin{equation}\label{wellextr1} {C^{-1}_{\mathbb{E}, w}} \|S^{\mathcal I} f\|_{\mathbb{E}_w}\leq \|S^{{\mathcal W}^{{\mathcal I}}}f\|_{\mathbb{E}_w}\leq {C_{\mathbb{E}, w}} \|S^{\mathcal I} f\|_{\mathbb{E}_w} \end{equation} and \begin{equation}\label{FSfs} \|M f\|_{\mathbb{E}_w}\leq {C_{\mathbb{E}, w}} \| M^\sharp f\|_{\mathbb{E}_w} \end{equation} for every $f\in \mathbb{E}_w$. \noindent Moreover, if ${\mathcal V}\subset A_{p_\mathbb{E}}(\mathbb{R})$ with $\sup_{w\in {\mathcal V}} [w]_{A_{p_{\mathbb{E}}}}<\infty$, then $\sup_{w\in {\mathcal V}} {C_{\mathbb{E}, w}}<\infty$. \item [(ii)] For every $r\in (1,\infty)$ and every Muckenhoupt weight $w\in A_{p_\mathbb{E}}(\mathbb{R})$ there exists a constant $C_{r,\mathbb{E},w}$ such that for any family ${\mathcal I}$ of disjoint intervals in $\mathbb{R}$ \begin{equation}\label{wellextr2} \left\| \left( \sum_{I\in {\mathcal I}} |S_I f_I|^r \right)^{1/r} \right\|_{\mathbb{E}_w} \leq C_{r,\mathbb{E},w} \left\| \left( \sum_{I\in {\mathcal I}} |f_I|^r \right)^{1/r} \right\|_{\mathbb{E}_w} \end{equation} for every $(f_I)_{I\in {\mathcal I}}\subset \mathbb{E}_w(l^r({\mathcal I}))$. \end{itemize} \end{lemma} The proof follows the idea of the proof of \cite[Lemma 6.3]{RdeF85}, i.e., it is based on the iteration algorithm of the Rubio de Francia extrapolation theory. We refer the reader to \cite{CrMaPe11} for a recent account of this theory; in particular, see the proofs of \cite[Theorems 3.9 and 4.10]{CrMaPe11}. Below we provide the main supplementary observations that need to be made.
\begin{proof}[{\bf{Proof of Lemma \ref{wellextr}}}] Note that we can restrict ourselves to finite families ${\mathcal I}$ of disjoint bounded intervals in $\mathbb{R}$. The final estimates obtained below are independent of ${\mathcal I}$, and a standard limiting argument proves the result in the general case. According to \cite[Theorem 3.1]{Kurtz80}, for every Muckenhoupt weight $w\in A_2(\mathbb{R})$ there exists a constant $C_{2,w}$ such that \begin{equation}\label{2weightest} C^{-1}_{2,w} \|S^{\mathcal I} f\|_{L^2(\mathbb{R},w)} \leq \|S^{{\mathcal W}^{{\mathcal I}}}f \|_{L^2(\mathbb{R},w)} \leq C_{2,w} \|S^{\mathcal I} f\|_{L^2(\mathbb{R},w)}\quad\quad (f\in {L^2(\mathbb{R},w)}). \end{equation} Moreover, one can show that $\sup_{w\in{\mathcal V}}C_{2,w}<\infty$ for every subset ${\mathcal V}\subset A_2(\mathbb{R})$ with $\sup_{w\in {\mathcal V}}[w]_{A_2}<\infty$. Therefore, we are in a position to adapt the extrapolation techniques from $A_2$ weights; see for example the proof of \cite[Theorem 4.10, p. 76]{CrMaPe11}. Fix $\mathbb{E}$ and $w\in A_{p_\mathbb{E}}(\mathbb{R})$ as in the assumption. Let $\mathbb{E}_{w}'$ be the associate space of $\mathbb{E}_w$; see \cite[Definition 2.3, p. 9]{BeSh88}. Let ${\mathcal R}= {\mathcal R}_w :\mathbb{E}_{w} \rightarrow \mathbb{E}_{w}$ and ${\mathcal R}'= {\mathcal R}'_w :\mathbb{E}_{w}' \rightarrow \mathbb{E}_{w}'$ be defined by \begin{align*} {\mathcal R} h (t) & = \sum_{j=0}^{\infty} \frac{M^j h(t)}{2^j \|M\|^j_{\mathbb{E}_w}}, \quad 0\leq h\in \mathbb{E}_{w}, \text{ and} \\ {\mathcal R}' h (t) & = \sum_{j=0}^{\infty} \frac{S^j h(t)}{2^j \|S\|^j_{\mathbb{E}'_w}}, \quad 0\leq h\in \mathbb{E}'_{w}, \end{align*} where $Sh:= M(h w)/w$ for $h\in \mathbb{E}'_{w}$. As in the proof of \cite[Theorem 4.10, p.
76]{CrMaPe11} the following statements are easily verified: \begin{itemize} \item [(a)] For every positive $h\in\mathbb{E}_w$ one has \begin{align*} & h\leq {\mathcal R} h \text{ and } \| {\mathcal R} h\|_{\mathbb{E}_{w}}\leq 2 \|h\|_{\mathbb{E}_{w}} , \text{ and } \\ & {\mathcal R} h\in A_1 \text{ with } [{\mathcal R} h]_{A_1}\leq 2\|M\|_{\mathbb{E}_w} . \end{align*} \item [(b)] For every positive $h\in\mathbb{E}_w'$ one has \begin{align*} & h\leq {\mathcal R}' h \text{ and } \| {\mathcal R}' h\|_{\mathbb{E}_{w}'}\leq 2 \|h\|_{\mathbb{E}_{w}'} , \text{ and }\\ & ({\mathcal R}' h) w\in A_1 \text{ with } [({\mathcal R}' h) w]_{A_1}\leq 2\|S\|_{\mathbb{E}_{w}'} . \end{align*} \end{itemize} The last lines in $(a)$ and $(b)$ follow from the estimates $M({\mathcal R} h) \leq 2 \|M\|_{\mathbb{E}_w} {\mathcal R} h$ and $M(({\mathcal R}' h)w) \leq 2 \|S\|_{\mathbb{E}'_w} ({\mathcal R}' h)w$, respectively, which in turn follow from the definitions of ${\mathcal R}$ and ${\mathcal R}'$. Note that $f\in L^2(\mathbb{R}, w_{|f|,h})$ for every $f\in \mathbb{E}_w$ and every positive $h\in \mathbb{E}'_w$, where $w_{g,h}:=({\mathcal R} g)^{-1}({\mathcal R}' h) w$ for every $0\leq g \in \mathbb{E}_w$ and $0\leq h \in \mathbb{E}'_w$. Moreover, by Boyd's interpolation theorem, the Hilbert transform is bounded on $\mathbb{E}_w$. Therefore, by the well-known identity relating the partial sum operators $S_I$ and the Hilbert transform, since ${\mathcal I}$ is finite, we get that $S^{\mathcal I} f \in \mathbb{E}_w$ for every $f\in \mathbb{E}_w$. Similarly, combining Kurtz' inequalities, \cite[Theorem 3.1]{Kurtz80}, with Boyd's interpolation theorem, we conclude that $S^{{\mathcal W}_I} f \in \mathbb{E}_w$ $(I\in {\mathcal I})$, and consequently $S^{{\mathcal W}^{{\mathcal I}}} f \in \mathbb{E}_w$ for every $f\in \mathbb{E}_w$.
Finally, a close analysis of the proof of \cite[Theorem 4.10]{CrMaPe11} shows that we can take $$C_{\mathbb{E}, w}:= 4 \sup\{ C_{2,w_{g,h}}: \, 0\leq g\in \mathbb{E}_w, 0\leq h\in \mathbb{E}'_w, \|g\|_{\mathbb{E}_w}\leq 2, \|h\|_{\mathbb{E}_w'} =1\}.$$ Recall that for every $p\in (1,\infty)$ there exists a constant $C_p>0$ such that $\|M\|_{L^p_w} \leq C_{p} [w]^{p'/p}_{A_p}$ for every Muckenhoupt weight $w \in A_p(\mathbb{R})$; see \cite{Bu93}. A detailed analysis of Boyd's interpolation theorem shows that $\sup_{w\in {\mathcal V}}\max(\|M\|_{\mathbb{E}_w}, \|S\|_{\mathbb{E}'_w}) <\infty$ for every ${\mathcal V}\subset A_{p_{\mathbb{E}}}(\mathbb{R})$ with $\sup_{w\in {\mathcal V}}[w]_{A_{p_\mathbb{E}}} <\infty$. By the so-called reverse factorization (or by H\"older's inequality; see e.g. \cite[Proposition 7.2]{Duo01}), and by properties $(a)$ and $(b)$, we obtain that $w_{g,h} \in A_2(\mathbb{R})$ and \begin{equation*} [w_{g,h}]_{A_2}\leq [{\mathcal R} g]_{A_1}[({\mathcal R}' h)w]_{A_1} \leq 4\|M\|_{\mathbb{E}_w} \|S\|_{\mathbb{E}_{w}'} \end{equation*} for every $0\leq g \in \mathbb{E}_w$ and $0\leq h \in \mathbb{E}'_w$. Therefore, on account of the remark on the constants $C_{2,w}$ in \eqref{2weightest}, we get the desired boundedness property of the constants $C_{\mathbb{E}, w}$. This completes the proof of \eqref{wellextr1}. Note that, by the weighted Fefferman-Stein inequality, see Remark \ref{upperbound}, and the basic inequality $M^\sharp f\leq 2 Mf$ $(f\in L_{\textrm{loc}}^1(\mathbb{R}))$, reasoning analogous to the above yields \eqref{FSfs}. For the proof of part $(ii)$, for fixed $r\in (1,\infty)$ it is sufficient to apply Rubio de Francia's extrapolation algorithm from $A_r$ weights in the same manner as above. \end{proof} Let ${\mathcal W}$ be a well-distributed family of disjoint intervals in $\mathbb{R}$, i.e., there exists $\lambda >1$ such that $\sup_{x\in \mathbb{R}} \sum_{I\in {\mathcal W}} \chi_{\lambda I} (x) <\infty$.
Following \cite[Section 3]{RdeF85}, consider the smooth version of $S^{{\mathcal W}}$, $G=G^{{\mathcal W}}$, defined as follows: let $\phi$ be an even, smooth function such that $\hat \phi (\xi) = 1$ for $\xi\in [-\frac{1}{2}, \frac{1}{2}]$ and $\supp \hat\phi \subset [-\lambda/2,\lambda/2]$. Let $\phi_I(x):= e^{2\pi i c_I\cdot x} |I|\phi(|I|x)$ $(x\in \mathbb{R})$, where $c_I$ stands for the center of an interval $I\in {\mathcal W}$ and $|I|$ for its length. Then, $$Gf:=G^{{\mathcal W}} f : = \left( \sum_{I\in {\mathcal W}} |\phi_I\star f |^2 \right)^{1/2}\quad (f\in L^2(\mathbb{R})).$$ Since $\widehat{\phi_I}(\xi)=1$ for $\xi\in I$, and $\widehat{\phi_I}(\xi)=0$ for $\xi\notin \lambda I$, by Plancherel's theorem, $G$ is bounded on $L^2(\mathbb{R})$. Recall that the crucial step of the proof of \cite[Theorem 6.1]{RdeF85} consists in showing that the Hilbert space-valued kernel associated with $G$ satisfies the weak-$(D'_2)$ condition (see \cite[Part IV(E)]{RdeF85} for the definition). This leads to the following pointwise estimate for $G$: \begin{equation}\label{pointest} M^\sharp (Gf)(x) \leq C M(|f|^2)(x)^{1/2} \quad \quad (\textrm{ a.e. } x\in \mathbb{R}) \end{equation} for every $f\in L^\infty(\mathbb{R})$ with compact support, and a constant $C$ depending only on $\lambda$. In particular, $G$ is bounded on $L^p(\mathbb{R}, w)$ for every $p>2$ and every Muckenhoupt weight $w\in A_{p/2}(\mathbb{R})$. \begin{proof}[{\bf{Proof of Theorem B$(ii)$}}] We can assume that ${\mathcal I}$ is a finite family of bounded intervals in $\mathbb{R}$; a standard limiting argument then gives the general case. We start with the proof of the statement of Theorem B$(ii)$ for $q=2$. Recall that $p_\mathbb{E} = q_\mathbb{E} = 2$ for $\mathbb{E}:=L^{2,\infty}$; see \cite[Theorem 4.6]{BeSh88}. Fix $w \in A_1(\mathbb{R})$ and $f\in L^\infty(\mathbb{R})$ with compact support.
Note that the classical Littlewood-Paley theory shows that $G^{{\mathcal W}_I}$ is bounded on $L^{2}_w$ for every $I\in {\mathcal I}$. Consequently, $G=G^{{\mathcal W}^{\mathcal I}}$ maps $L^2_w$ into itself. Therefore, combining Lemma \ref{wellextr}, Lebesgue's differentiation theorem and \eqref{pointest}, we get \begin{eqnarray*} \|S^{\mathcal I} f\|_{L^{2,\infty}_w} & \leq & C_w \|S^{{\mathcal W}^{\mathcal I}} f\|_{L^{2,\infty}_w} \leq C_w \|G f\|_{L^{2,\infty}_w} \leq C_w \|M(G f)\|_{L^{2,\infty}_w}\\ &\leq & C_w \|M^\sharp (G f)\|_{L^{2,\infty}_w} \leq C_w \|M (|f|^2)^{1/2}\|_{L^{2,\infty}_w} = C_w \|M (|f|^2)\|_{L^{1,\infty}_w}^{1/2}\\ &\leq& C_w \|f\|_{L^2_w}, \end{eqnarray*} where $C_w$ is a constant independent of ${\mathcal I}$ and $f$. The last inequality follows from the fact that the Hardy-Littlewood maximal operator $M$ is of weak type $(1,1)$. Furthermore, one can show that for every subset ${\mathcal V} \subset A_1(\mathbb{R})$ with $\sup_{w\in{\mathcal V}} [w]_{A_1}<\infty$ we have $\sup_{w\in {\mathcal V}} C_w<\infty$. Since $S^{\mathcal I}$ is continuous on $L^2_w$ and the space of all functions in $L^\infty(\mathbb{R})$ with compact support is dense in $L^2_w$, we get the desired boundedness for $S^{\mathcal I}$. This completes the proof of the statement of Theorem B$(ii)$ for $q=2$. We now proceed by interpolation to treat the case $q \in (1,2)$. Let ${\mathcal W}$ be a well-distributed family of disjoint intervals in $\mathbb{R}$, and let $G$ denote the corresponding smooth version of $S^{\mathcal W}$. First, it is easily seen that $|\phi_I\star f|\leq C_\phi\, Mf$ for every $f\in L_{loc}^1(\mathbb{R})$ and $I\in {\mathcal W}$, where $C_\phi$ is a constant depending only on $\phi$. Moreover, analysis similar to the above shows that $G$ maps $L^2_w(\mathbb{R})$ into $L^{2, \infty}_w(\mathbb{R})$ for every $w\in A_1(\mathbb{R})$.
Therefore, for every $w\in A_1(\mathbb{R})$ the operators $$ L^1_w \ni f \mapsto (\phi_I \star f)_{I\in {\mathcal W}} \in L^{1,\infty}_w(l^\infty) $$ $$ L_w^2 \ni f \mapsto (\phi_I \star f)_{I\in {\mathcal W}} \in L^{2,\infty}_w(l^2) $$ are bounded. Fix $q\in (1,2)$ and $w\in A_1(\mathbb{R})$. By interpolation arguments, we conclude that the operator $$L_w^q \ni f \mapsto (\phi_I \star f)_{I\in {\mathcal W}} \in L^{q,\infty}_w(l^{q'})$$ is well-defined and bounded; note that the exponents match, since for $\frac{1}{q} = (1-\vartheta) + \frac{\vartheta}{2}$ one has $\frac{1}{q'} = \frac{\vartheta}{2} = \frac{1-\vartheta}{\infty} + \frac{\vartheta}{2}$. This can be shown by proceeding analogously to the proof of the related result \cite[Lemma 3.1]{Quek03}, so we omit the details. Since $p_\mathbb{E} = q_\mathbb{E} = q$ for $\mathbb{E}:=L^{q,\infty}$, see \cite[Theorem 4.6]{BeSh88}, by Lemma \ref{wellextr}$(ii)$, for every $w\in A_1(\mathbb{R})$ we get \begin{eqnarray*} \|S_q^{\mathcal W} f\|_{L_w^{q,\infty}} & = & \left\|(\sum_{I\in {\mathcal W}} |S_I (\phi_I\star f)|^{q'})^{1/q'} \right\|_{L_w^{q,\infty}} \\ &\leq & C_{q,w} \left\|(\sum_{I\in {\mathcal W}} | \phi_I\star f|^{q'})^{1/q'}\right\|_{L_w^{q,\infty}} \leq C_{q,w} \|f\|_{L_w^q}, \end{eqnarray*} where $C_{q,w}$ is a constant independent of $f$. This completes the proof. \end{proof} \begin{remark} We conclude with a relevant result on $A_2$-weighted $L^2$-estimates for square functions $S^{\mathcal I}$ corresponding to arbitrary families ${\mathcal I}$ of disjoint intervals in $\mathbb{R}$, i.e., estimates of the form $\|S^{\mathcal I} f\|_{2,w} \leq C \|f\|_{2,w}$ $(f\in L_w^2)$. According to \cite[Part IV(E)(ii)]{RuRuTo86}, these weighted endpoint estimates can be reached by interpolation provided that ${\mathcal I}$ is a family such that $S^{\mathcal I}$ admits an extension to a bounded operator on (unweighted) $L^p(\mathbb{R})$ for some $p<2$.
This observation leads to a natural question: for which partitions ${\mathcal I}$ of $\mathbb{R}$ do there exist {\it local} variants of the Littlewood-Paley decomposition theorem, i.e., for which ${\mathcal I}$ does there exist $r\geq 2$ such that $S^{\mathcal I}$ is bounded on $L^p(\mathbb{R})$ for all $p$ with $|\frac{1}{p}- \frac{1}{2}|<\frac{1}{r}$? Recall that L. Carleson, who first noted the possible extension of the classical Littlewood-Paley inequality to other types of partitions of $\mathbb{R}$, proved in the special case ${\mathcal I}:=\{ [n, n+1): n\in \mathbb{Z} \}$ that the corresponding square function $S^{\mathcal I}$ is bounded on $L^p(\mathbb{R})$ only if $p\geq 2$; see \cite{Carleson67}. Moreover, it should be noted that such lack of boundedness of the square function $S^{\mathcal I}$ on $L^p(\mathbb{R})$ for every $p<2$ occurs also in the case of decompositions of $\mathbb{R}$ determined by sequences which are, in a sense, not too different from lacunary ones. Indeed, applying the ideas from \cite[Section 8.5]{EdGa77}, we show below that even in the case of the decomposition ${\mathcal I}$ of $\mathbb{R}$ determined by a sequence $(a_j)^\infty_{j=0}\subset (0, \infty)$ such that $a_{j+1} - a_{j}\sim \lambda^{\phi(j)j}$, where $\lambda>1$ and $\phi(j)\rightarrow 0^+$ arbitrarily slowly as $j \rightarrow \infty$, the square function $S^{\mathcal I}$ is not bounded on $L^p(\mathbb{R})$ for any $p<2$. If $I$ is a bounded interval in $\mathbb{R}$, write $f_I$ for the function with $\widehat{f_I} = \chi_I$. Then, $|f_I| = \left|\frac{\sin(|I| \pi \cdot)} {\pi (\cdot)}\right|$, and for every $p>2$ and every $\epsilon >0$ there exists $c>0$ such that $$\frac{1}{c} |I|^{1/p'} \leq \|f_I\|_p \leq c |I|^{1/p'}$$ for all intervals $I$ with $|I|>\epsilon$; indeed, the substitution $t=|I|x$ gives $\|f_I\|_p^p = |I|^{p-1}\int_{\mathbb{R}} \left|\frac{\sin (\pi t)}{\pi t}\right|^p dt$. This simple observation allows one to restate \cite[Theorem 8.5.4]{EdGa77} for decompositions of $\mathbb{R}$ instead of $\mathbb{Z}$.
Namely, if $a=(a_j)_{j=0}^\infty\subset (0, \infty)$ is an increasing sequence such that $a_j - a_{j-1} \rightarrow \infty$ as $j\rightarrow \infty$, and ${\mathcal I}_a:=\{ ( - a_0, a_0)\} \cup \{ \pm [a_{j-1}, a_{j})\}_{j\geq 1}$, then the boundedness of $S^{{\mathcal I}_a}$ on $L^{p'}(\mathbb{R})$ for some $p>2$, $1/p' + 1/p =1$, implies that there exists a constant $C_p>0$ such that \begin{equation}\label{equ2} a_k^{2/p'} \leq C_p \sum_{j=1}^{k}(a_j - a_{j-1})^{2/p'} \quad (k\geq 1). \end{equation} Moreover, it is straightforward to adapt the idea of the proof of \cite[Corollary 8.5.5]{EdGa77} to obtain the following generalization. Let $a=(a_j)_{j=0}^\infty\subset (0,\infty)$ be an increasing sequence such that $a_{j+1} - a_{j}\sim \lambda^{\psi(j)}$, where $\lambda>1$, the function $\psi\in {\mathcal C}^1([0,\infty))$ is increasing and satisfies the conditions: $\psi(s)/s \rightarrow 0$ and $\psi '(s) \rightarrow 0$ as $s\rightarrow \infty$. If the square function $S^{{\mathcal I}_a}$ were bounded on $L^{p'}(\mathbb{R})$ for some $p>2$, then \eqref{equ2} would yield $$ \left( \int_0^{k-1} \lambda^{\psi(s)} ds\right)^{2/p'}\leq C_p \int_0^{k+1} \lambda^{\psi(s)2/p'}ds\quad (k\geq 1). $$ However, this leads to a contradiction with the assumptions on $\psi$. Indeed, write $\theta:=2/p'\in (1,2)$. Since $\psi$ is increasing, $\lambda^{\theta\psi(s)} \leq \lambda^{(\theta -1)\psi(k+1)} \lambda^{\psi(s)}$ for $0\leq s\leq k+1$, and moreover $\int_0^{k+1} \lambda^{\psi(s)}ds \leq \int_0^{k-1} \lambda^{\psi(s)}ds + 2\lambda^{\psi(k+1)}$. On the other hand, the condition $\psi'(s)\rightarrow 0$ guarantees that $\int_0^{k-1} \lambda^{\psi(s)}ds / \lambda^{\psi(k+1)} \rightarrow \infty$ as $k\rightarrow \infty$. Combining these facts with the above inequality forces this ratio to remain bounded in $k$, which is impossible. \end{remark} \section{Higher dimensional analogue of Theorem A} The higher dimensional extension of the results due to Coifman, Rubio de Francia and Semmes \cite{CRdeFS88} was established essentially by Q. Xu in \cite{Xu96}; see also M. Lacey \cite[Chapter 4]{Lac07}. We start with higher dimensional counterparts of some notions from previous sections. Here and subsequently, we consider only bounded intervals with sides parallel to the axes. Let $q\geq 1$ and $d\in \mathbb{N}$.
For $h>0$ and $1\leq k \leq d$ we write $\Delta^{(k)}_h$ for the difference operator, i.e., $$\left( \Delta^{(k)}_h m \right) (x):=m(x+ he_k) - m(x)\quad\quad (x\in \mathbb{R}^d)$$ for any function $m:\mathbb{R}^d\rightarrow \mathbb{C}$, where $e_k$ is the $k$-th coordinate vector. Suppose that $J$ is an interval in $\mathbb{R}^d$ and write $\overline{J} =\Pi_{i=1}^{d}[a_i,a_i+h_i]$ with $h_i>0$ $(1\leq i \leq d)$. We set $$(\Delta_J m):= \left(\Delta^{(1)}_{h_1}...\Delta^{(d)}_{h_d}m \right) (a),$$ where $a:=(a_1, ..., a_d)$ and $m:\mathbb{R}^d\rightarrow \mathbb{C}$. Moreover, for an interval $I$ in $\mathbb{R}^d$ and a function $m: \mathbb{R}^d \rightarrow \mathbb{C}$ we set $$\|m\|_{\textrm{Var}_q(I)}:= \sup_{{\mathcal J}} \left( \sum_{J\in {\mathcal J}} |\Delta_{{J}} m|^q\right)^{1/q},$$ where ${\mathcal J}$ ranges over all decompositions of $I$ into subintervals. Following Q. Xu \cite{Xu96}, see also \cite[Section 4.2]{Lac07}, the spaces $V_q(I)$ for intervals in $\mathbb{R}^d$ are defined inductively as follows. The definition of $V_q(I)$ $(q\in [1,\infty))$ for one-dimensional intervals was introduced in Section 1. Suppose now that $d\in \mathbb{N}\setminus\{1\}$ and fix an interval $I=I_1\times ... \times I_d$ in $\mathbb{R}^d$. For a function $m:\mathbb{R}^d \rightarrow \mathbb{C}$, we write $m\in V_q(I)$ if $$\|m\|_{{V_q}(I)}:= \sup_{x\in I}|m(x)| \, +\, \sup_{x_1\in I_1} \|m(x_1, \cdot)\|_{V_q(I_2\times ... \times I_d)} \, + \, \|m\|_{\textrm{Var}_q(I)} < \infty.$$ Subsequently, ${\mathcal D}^d$ stands for the family of the dyadic intervals in $\mathbb{R}^d$. The definition of the spaces $(V_q({\mathcal D}^d), \|\cdot\|_{V_q({\mathcal D}^d)} )$ $(d \geq 2,\,\, q\in[1,\infty))$ is quite analogous to the corresponding one in the case of $d=1$ from Section 1.
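To illustrate the mixed difference just defined, consider the simplest higher dimensional case $d=2$: if $\overline{J}=[a_1,a_1+h_1]\times [a_2,a_2+h_2]$, then a direct computation gives $$\Delta_J m = m(a_1+h_1, a_2+h_2) - m(a_1+h_1, a_2) - m(a_1, a_2+h_2) + m(a_1, a_2),$$ so that $\|m\|_{\textrm{Var}_q(I)}$ measures the $q$-variation of $m$ computed via such rectangular increments.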
For a Banach space $X$, an interval $I$ in $\mathbb{R}$ and $q\geq 1$, we consider below the vector-valued variants $V_q(I; X)$, ${\mathcal R}_q(I; X)$, and $R_q(I; X)$ of the spaces $V_q(I)$, ${\mathcal R}_q(I)$, and $R(I)$, respectively. Note that $V_q(I;X)\subset R_p(I;X)$ for any $1\leq q <p$ and any interval $I$ in $\mathbb{R}$, with the inclusion norm bounded by a constant depending only on $p$ and $q$; see \cite[Lemma 2]{Xu96}. Moreover, we define higher dimensional counterparts of these spaces inductively as follows: let $I:=\Pi_{i=1}^d I_i$ be a closed interval in $\mathbb{R}^d$ ($d\geq 2)$. Set $\widetilde{R}_q(I):= R_q(I_1; \widetilde{R}_q(I_2\times ...\times I_{d}))$ and $\widetilde{V}_q(I) := V_q(I_1; \widetilde{V}_q( I_2\times ...\times I_{d}))$, where $\widetilde{R}_q(I_d):=R(I_d)$ and $\widetilde{V}_q(I_d):= V_q(I_d)$. Recall also that for any $1\leq q <p$ and any interval $I$ in $\mathbb{R}^d$ $(d\geq 1)$ we have \begin{equation}\label{inclusion} V_q(I)\subset \widetilde{V}_q(I) \subset \widetilde{R}_p(I) \end{equation} with the inclusion norm bounded by a constant independent of $I$. Finally, we denote by $A^*_p(\mathbb{R}^d)$ $(p\in [1,\infty))$ the class of weights on $\mathbb{R}^d$ which satisfy the strong Muckenhoupt $A_p$ condition. Note that, in the case of $d=1$, $A^*_p(\mathbb{R})$ is the classical Muckenhoupt class $A_p(\mathbb{R})$ ($p\in [1,\infty)$). We refer the reader, e.g., to \cite{Kurtz80} or \cite[Chapter IV.6]{GCRu85} for the background on $A^*_p$-weights. The following complement to \cite[Theorem $(i)$]{Xu96} is the main result of this section. \begin{thC}\label{ndim}{\it{ $(i)$ Let $d\geq 2$ and $q\in (1,2]$. Then, $V_q({\mathcal D}^d)\subset M_p(\mathbb{R}^d,w)$ for every $p\geq q$ and every weight $w\in A^*_{p/q}(\mathbb{R}^d)$.\\ $(ii)$ Let $d\geq 2$ and $q>2$.
Then, $V_q({\mathcal D}^d)\subset M_p(\mathbb{R}^d, w)$ for every $2\leq p <(\frac{1}{2} - \frac{1}{q})^{-1}$ and every weight $w\in A^*_{p/2}(\mathbb{R}^d)$ with $s_w> (1- p(\frac{1}{2} - \frac{1}{q}))^{-1}$. }} \end{thC} \begin{lemma}\label{main} For every $d\in \mathbb{N}$, $q\in(1,2]$, $p> q$, and every subset ${\mathcal V} \subset A^*_{p/q}(\mathbb{R}^d)$ with $\sup_{w\in {\mathcal V}}[w]_{A^*_{p/q}(\mathbb{R}^d)} <\infty$ we have $\widetilde{R}_{q}({\mathcal D}^d) \subset M_p(\mathbb{R}^d, w)$ $(w\in {\mathcal V})$ and $$\sup\left\{ \|T_{m\chi_I} \|_{p,w}: m\in \widetilde{R}_{q}({\mathcal D}^d),\; \|m\|_{\widetilde{R}_{q}({\mathcal D}^d)}\leq 1,\; w\in {\mathcal V}, \; I\in {\mathcal D}^d \right\} <\infty.$$ \end{lemma} Here $\widetilde{R}_q({\mathcal D}^d)$ $(q\geq 1)$ stands for the space of all functions $m$ defined on $\mathbb{R}^d$ such that $m\chi_I\in \widetilde{R}_q(I)$ for every $I\in{\mathcal D}^d$ and $\sup_{I\in {\mathcal D}^d}\|m\chi_I\|_{\widetilde{R}_{q}(I)} < \infty$. Define $\widetilde{V}_q({\mathcal D}^d)$ similarly. The classes $\widetilde{R}_q({\mathcal D}^d)$ and $A_p^*(\mathbb{R}^d)$ are well adapted to iterating the one-dimensional arguments from the proof of Theorem A$(i)$. Therefore, below we give only the main supplementary observations that should be made. \begin{proof}[{\bf{Proof of Lemma \ref{main}}}] We proceed by induction on $d$. The proof of the statement of Lemma \ref{main} for $d=1$ and $p=2$ is provided in the proof of Theorem A$(i)$. The general case of $d=1$ and $p>q$ follows from this special one by means of Rubio de Francia's extrapolation theorem; see Lemma \ref{extrapol}. Assume that the statement holds for $d\geq 1$; we will prove it for $d+1$. Let $m\in \widetilde{R}_{q}({\mathcal D}^{d+1})$ with $\|m\|_{\widetilde{R}_{q}({\mathcal D}^{d+1})}\leq 1$. By approximation, we can assume that $m_I:=m\chi_I\in {\mathcal R}_{q}(I_1; \widetilde{R}_{q}(I_2\times ... \times I_{d+1}) )$ for every $I:=I_1\times ... 
\times I_{d+1} \in {\mathcal D}^{d+1}$. Write $m_I =\sum_{J\in{\mathcal I}_{I}} \gamma_{I,J} a_{I, J} \chi_J$, where $\gamma_{I,J} \geq 0$ with $\sum_{J}\gamma_{I,J}^{q} \leq 1$ and $a_{I, J}\in \widetilde{R}_{q}(I_2\times ... \times I_{d+1})$ with $\|a_{I,J}\|_{ \widetilde{R}_{q}(I_2\times ... \times I_{d+1})} = 1$ for every $I\in {\mathcal D}^{d+1}$. Here ${\mathcal I}_{I}$ stands for a decomposition of $I_1$ corresponding to $m_I$. Let $q\in(1,2]$, $p\geq q'$ and ${\mathcal V}_{q,p} \subset A^*_{p/q}(\mathbb{R}^{d+1})$ with $\sup_{w\in {\mathcal V}_{q,p}}[w]_{A^*_{p/q}(\mathbb{R}^{d+1})}<\infty$. By Lebesgue's differentiation theorem, for every $w\in A^*_{r}(\mathbb{R}^{d+1})$ $(r>1)$ one can easily show that $w(\cdot, y)\in A_{r}(\mathbb{R})$, $w(x,\cdot)\in A_{r}^*(\mathbb{R}^{d})$, and $[w(\cdot,y)]_{A_{r}(\mathbb{R})}, [w(x, \cdot)]_{A_{r}^*(\mathbb{R}^{d})}\leq [w]_{A_{r}^*(\mathbb{R}^{d+1})}$ for almost every $y\in \mathbb{R}^d$ and $x\in \mathbb{R}$; see e.g. \cite[Lemma 2.2]{Kurtz80}. Therefore, by the induction assumption, for every $q\in(1,2]$ and $ p\in[q',\infty)\setminus \{2\}$ there exists a constant $C_{q,p}>0$ independent of $m$ and $w\in {\mathcal V}_{q,p}$ such that for every $w \in {\mathcal V}_{q,p}$: \begin{equation}\label{est1} \sup\left\{ \|T_{a_{I,J}}\|_{p,w(x,\cdot)} : J\in {\mathcal I}_{I}, \;I\in {\mathcal D}^{d+1}\right\}\leq C_{q,p} \quad \quad \textrm{for a.e. } x \in \mathbb{R}. \end{equation} Let $f(x,y):=\phi(x) \rho(y)$ ($(x,y)\in \mathbb{R}^{d+1}$), where $\phi\in S(\mathbb{R})$ and $\rho\in S(\mathbb{R}^d)$. Note that the linear span of functions of this form is dense in $L^{q'}(\mathbb{R}^{d+1}, w)$. Indeed, by the strong doubling and open ended properties of $A_p^*$-weights, we get $(1+|\cdot|)^{-dr} w\in L^1(\mathbb{R}^d)$ $(r>1, w\in A_r^*(\mathbb{R}^d))$; see e.g. \cite[Chapter IX, Proposition 4.5]{Torch04}. Hence, the claim follows from standard density arguments.
Moreover, we have $T_{m_I}f =\sum_J \gamma_{I,J} S_{J}\phi \, T_{ a_{I,J}}\rho $. In the sequel, we consider the cases $q\in (1,2)$ and $q=2$ separately. For $q\in (1,2)$, by Fubini's theorem, we get \begin{equation*} \|T_{m_I} f\|^{q'}_{q', w} \leq \sum_{J\in {\mathcal I}_I} \int_\mathbb{R} |S_J \phi|^{ q'} \int_{\mathbb{R}^d} |T_{a_{I, J}} \rho|^{q'} w dy dx \quad \quad (w\in {\mathcal V}_{q,q'}, I\in {\mathcal D}^{d+1}). \end{equation*} Therefore, by Theorem B$(i)$ and \eqref{est1}, we conclude that $$\sup\left\{ \|T_{m_I}\|_{q', w}: w\in {\mathcal V}_{q,q'},\; m\in \widetilde{R}_{q}({\mathcal D}^{d+1}), \; \|m\|_{\widetilde{R}_{q}({\mathcal D}^{d+1})}\leq 1, \;I\in {\mathcal D}^{d+1}\right\}<\infty.$$ Consequently, by Rubio de Francia's extrapolation algorithm, see \cite[Theorem 3]{RdeF84} or \cite[Chapter 3]{CrMaPe11}, the same conclusion holds for all $p>q$. For $q=2$, by Fubini's theorem and Minkowski's inequality, we conclude that for every $p>2$ \begin{equation*} \|T_{m_I} f\|^{p}_{p, w} \leq \int_\mathbb{R} |S^{{\mathcal I}_{I}} \phi(x)|^{p} \left( \sum_{J\in {\mathcal I}_{I}} \gamma^2_{I,J} \|T_{a_{I, J}} \rho\|^{2}_{p, w(x,\cdot)} \right)^{\frac{p}{2}} dx \quad (w\in {\mathcal V}_{2,p}, I\in {\mathcal D}^{d+1}). \end{equation*} Hence, by Theorem \ref{RdeFtheorem} and \eqref{est1}, we get the statement of Lemma \ref{main} also for $q=2$. \end{proof} \begin{proof}[{\bf{Proof of Theorem C}}] Note first that for every ${\mathcal V} \subset A_1^*(\mathbb{R}^{d})$ with $\sup_{w\in {\mathcal V}}[w]_{A_1^*}<\infty$, by the reverse H\"older inequality, there exists $s >1$ such that $w^s \in A_1^*(\mathbb{R}^{d})$ $(w\in {\mathcal V})$ and $\sup_{p\geq 2, w\in {\mathcal V}}[w^s]_{A_{p/2}^*}<\infty$.
Thus, by Lemma \ref{main} and an interpolation argument similar to that in the proof of Theorem A$(i)$, we get $$\sup\left\{ \|T_{m\chi_I}\|_{2, w}: w\in {\mathcal V},\; m\in \widetilde{R}_{2}({\mathcal D}^{d}), \; \|m\|_{\widetilde{R}_{2}({\mathcal D}^d)}\leq 1, \;I\in {\mathcal D}^{d}\right\}<\infty .$$ Therefore, as in the proof of Theorem A$(i)$, one can show that for every $q\in(1,2]$ and every subset ${\mathcal V} \subset A^*_{2/q}(\mathbb{R}^d)$ with $N:=\sup_{w\in {\mathcal V}}[w]_{A^*_{2/q}} <\infty$, there exists a constant $\alpha = \alpha(d,q,N)>1$ such that $\widetilde{R}_{\alpha q}({\mathcal D}^d) \subset M_2(\mathbb{R}^d, w)$ $(w\in {\mathcal V})$ and $$\sup\left\{ \|T_{m\chi_I} \|_{2,w}: m\in \widetilde{R}_{\alpha q}({\mathcal D}^d),\; \|m\|_{\widetilde{R}_{\alpha q}({\mathcal D}^d)}\leq 1,\; w\in {\mathcal V}, \; I\in {\mathcal D}^d \right\} <\infty.$$ Now, by means of \eqref{inclusion}, Kurtz's weighted variant of the Littlewood-Paley inequalities, \cite[Theorem 1]{Kurtz80}, and Rubio de Francia's extrapolation theorem, \cite[Theorem 3]{RdeF84}, the rest of the proof of $(i)$ runs analogously to the corresponding part of the proof of Theorem A$(i)$. Consequently, by $(i)$, the proof of part $(ii)$ follows the lines of the proof of Theorem A$(ii)$. \end{proof} \end{document}